Sunday, 31 March 2013

ac.commutative algebra - Example of inclusion which is not a finite morphism

Every closed immersion is a finite morphism. Can you give an example of quasi-projective varieties $X\subset Y$ such that the inclusion $X\hookrightarrow Y$ is not finite? Same with $Y$ projective?



Thanks!



Edit: Sorry this question is very simple, I made a mistake asking the question. For a corrected version, check out this one.

Saturday, 30 March 2013

rt.representation theory - Inverting the Weyl Character Formula

Thanks Bruce Westbury for reminding me that I wrote something about this in my younger days. Here's what I make of it today, which provides a "closed formula" of sorts, a bit along the line of Allen Knutson's answer.



I'll deviate from the OP in writing $\Lambda$ for the weight lattice, $\Lambda^+$ for the set of dominant weights, and $X^\lambda$ for the basis elements of $\mathbf{Z}[\Lambda]$ with $\lambda\in\Lambda$. I'll use the dot-action $w\cdot\lambda=w(\rho+\lambda)-\rho$ and a related operator $J:\mathbf{Z}[\Lambda]\to\mathbf{Z}[\Lambda]$ sending $P\mapsto\sum_{w\in W}(-1)^{l(w)}w(X^\rho P)X^{-\rho}$ in general, and in particular $X^\lambda\mapsto\sum_{w\in W}(-1)^{l(w)}X^{w\cdot\lambda}$. Weyl's character formula says that the character $\chi_\lambda$ of $V_\lambda$ satisfies
$$
\chi_\lambda\cdot J(1)=J(X^\lambda)\quad\text{for all $\lambda\in\Lambda^+$.}
$$
The left hand side is in fact also equal to $J(\chi_\lambda)$, since for any $P\in\mathbf{Z}[\Lambda]^W$ one has
$$
J(P)=\sum_{w\in W}(-1)^{l(w)}w(X^\rho P)X^{-\rho}
= P \sum_{w\in W}(-1)^{l(w)}w(X^\rho)X^{-\rho} = P\cdot J(1),
$$
the second equality by $W$-invariance of $P$. Thus $\chi_\lambda$ and $X^\lambda$ have the same image under $J$.



Every $P\in\mathbf{Z}[\Lambda]$ is equivalent modulo $\ker(J)$ to a unique $P'\in\mathbf{Z}[\Lambda^+]$, i.e., a polynomial supported on the dominant weights. Concretely, define a $\mathbf{Z}$-linear operator $\alpha:\mathbf{Z}[\Lambda]\to\mathbf{Z}[\Lambda^+]$ by $\alpha(X^\mu)=0$ if $X^\mu\in\ker(J)$, which happens if $r\cdot\mu=\mu$ for some reflection $r\in W$, and otherwise $\alpha(X^\mu)=(-1)^{l(w)}X^{w\cdot\mu}$ where $w\in W$ is the unique element with $w\cdot\mu\in\Lambda^+$. Then $J(P)=J(\alpha(P))$ for all $P\in\mathbf{Z}[\Lambda]$. It follows from the above that $\alpha(\chi_\lambda)=X^\lambda$ for all $\lambda\in\Lambda^+$. In other words if $\chi:\mathbf{Z}[\Lambda^+]\to\mathbf{Z}[\Lambda]$ is the "character" map that linearly extends $X^\lambda\mapsto\chi_\lambda$, then $\alpha$ restricted to $\mathbf{Z}[\Lambda]^W$ defines the inverse "decomposition" map.



Now the decomposition of an orbit sum $m_\lambda=\sum_{\mu\in W(\lambda)}X^\mu$ is given by $\alpha(m_\lambda)=\sum_{\mu\in W(\lambda)}\alpha(X^\mu)$; since each $\mu$ gives at most one term, this shows that its decomposition involves at most as many irreducible constituents as the size of the $W$-orbit of $\lambda$.



If $\lambda$ is very large, it may be convenient to write $m_\lambda=\frac1s\sum_{w\in W}X^{w^{-1}(\lambda)}$ where $s$ is the size of the stabiliser of $\lambda$ in $W$, and, using $\alpha(X^\mu)=\alpha((-1)^{l(w)}X^{w\cdot\mu})$ for any $\mu,w$, obtain
$$
\alpha(m_\lambda)
=\frac1s\alpha\left(\sum_{w\in W}(-1)^{l(w)}X^{w\cdot(w^{-1}(\lambda))}\right)
=\frac1s\alpha(X^\lambda J(1)).
$$
If $\lambda$ is strictly dominant one has $s=1$, and if moreover $\lambda$ is far enough off the walls that $X^\lambda J(1)$ is entirely supported on dominant weights, then the right hand side simply becomes $X^\lambda J(1)$, which has $|W|$ distinct terms.
This coincides with the expression that Allen Knutson guessed. However the requirement is rather stronger than he suggested: in type $A_n$ for instance, dominant weights can be represented by weakly decreasing $(n+1)$-tuples of integers, with $\rho=(n,n-1,\ldots,1,0)$, and the "off the walls" condition means that the successive entries of $\lambda$ decrease by at least $n+1$. In other words, in this case the simplified formula holds only if $\lambda-(n+1)\rho$ is dominant.
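
As a concrete illustration in type $A_n$ (a minimal sketch under my own conventions, not part of the answer: weights are integer $(n+1)$-tuples, $\rho=(n,n-1,\ldots,1,0)$, and $W=S_{n+1}$ permutes coordinates), the straightening operator $\alpha$ and the resulting decomposition of an orbit sum can be coded directly:

from collections import Counter
from itertools import permutations

def alpha(mu):
    """Straighten X^mu: return (sign, dominant weight) with
    alpha(X^mu) = sign * X^lambda, or (0, None) if X^mu lies in ker(J)
    (i.e. mu + rho has a repeated entry)."""
    n1 = len(mu)
    rho = list(range(n1 - 1, -1, -1))
    v = [m + r for m, r in zip(mu, rho)]
    if len(set(v)) < n1:            # fixed by a reflection under the dot action
        return 0, None
    sign = 1
    for i in range(n1):             # selection sort; each swap is a transposition
        j = max(range(i, n1), key=lambda k: v[k])
        if j != i:
            v[i], v[j] = v[j], v[i]
            sign = -sign
    return sign, tuple(vi - r for vi, r in zip(v, rho))

def decompose_orbit_sum(lam):
    """Coefficients of the irreducible characters chi_mu in the orbit sum
    m_lambda, obtained by straightening each monomial of the W-orbit."""
    coeffs = Counter()
    for mu in set(permutations(lam)):
        s, dom = alpha(mu)
        if s:
            coeffs[dom] += s
    return {k: c for k, c in coeffs.items() if c}

# In A_2: m_{(2,1,0)} = chi_{(2,1,0)} - 2*chi_{(1,1,1)}
print(decompose_orbit_sum((2, 1, 0)))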

cosmology - What is the furthest object in the observable universe?

About 13.1 billion light years.



The galaxy z8_GND_5296, mentioned in the article linked by TildalWave, is seen as it was a mere 700 million years after the Big Bang; given the 13.798-billion-year age of the universe, its light needed roughly the quoted distance's worth of travel time to reach us. As we push technological barriers, we keep finding more distant objects.



The age of the universe puts a hard limit, in light-travel distance, on any observable object: there was nothing to emit light more than 13.798 billion years ago, so nothing can be seen from a light-travel distance of more than 13.798 billion light years - or, after adjusting for the expansion of the universe over that time, from a present-day (comoving) distance of about 46.6 billion light years.



Of course, as RhysW mentions, this limit changes over time as the universe ages and grows. The expansion rate is about 74.2 km/s per megaparsec, or roughly 0.000000007% per year. Compared to our lifetimes and current technological progress, I'd say that's an insignificant factor.
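
A rough check of that percentage (a sketch in Python; the conversion constants are mine):

# Convert H0 = 74.2 km/s per megaparsec into a fractional growth rate per year.
KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 74.2                     # km/s per Mpc
per_second = H0 / KM_PER_MPC              # ~2.4e-18 per second
per_year = per_second * SECONDS_PER_YEAR  # ~7.6e-11 per year
print(f"{per_year:.1e} per year, i.e. about {per_year * 100:.0e} % per year")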

Friday, 29 March 2013

at.algebraic topology - altering curvature on a tessellation representation of a compact surface

It is always easier to think about the spherical case first. You can see that all round, two-dimensional spheres, regardless of radius, admit tessellations. So constant curvature $+1$ is not necessary. On the other hand, constant positive curvature should be a requirement. (In fact, it is fair to require that the metric be homogeneous and isotropic. Otherwise what does it mean for tiles of the tessellation to be identical?)



The case of constant negative curvature is the same. The choice of constant doesn't really matter, so you might as well use $-1$. For an elementary discussion, with many beautiful pictures, see "Noneuclidean tesselations and their groups", by Wilhelm Magnus. A more modern treatment, also with wonderful graphics is "Indra's Pearls" by Mumford, Series, and Wright.

Thursday, 28 March 2013

books - Undergraduate roadmap to algebraic geometry?

There is an excellent book on algebraic geometry entitled Algebraic Geometry: A First Course by Joe Harris. This book emphasizes the classical roots of the subject, but if you have not yet seen too much algebraic geometry, it is worthwhile getting it and reading a few lectures. (The book is split into "lectures" rather than "chapters".) There are many beautiful constructions in classical algebraic geometry that can be understood without too much background (and which lay the foundations for some aspects of modern algebraic geometry), and this can give you a rough indication of the geometric intuitions in algebraic geometry. And in my opinion, the book does an excellent job of conveying the beauty and elegance of the subject.



The prerequisites for reading this book (according to Harris) are: linear algebra, multilinear algebra and modern algebra. However, since this is a "Graduate Texts in Mathematics" book, there are some places where it is very helpful (but not essential to the point that you cannot read the book otherwise) to have a basic knowledge of commutative algebra, complex analysis and point-set topology. (E.g., basic facts about topological spaces, local rings, basic constructions in commutative algebra, holomorphic functions, etc.) Atiyah and Macdonald's An Introduction to Commutative Algebra should furnish more than enough preparation. (You can also read commutative algebra concurrently if that is your preference.)



Since you are an undergraduate student, you should not worry too much about learning "background material" just yet before at least seeing what classical algebraic geometry is about. If at some point you decide to specialize in the subject, you will need to learn the "modern tools" such as, for example, schemes, sheaves and sheaf cohomology. The "classic book" for this is Robin Hartshorne's Algebraic Geometry but since that does require a solid background in commutative algebra (or at least the mathematical maturity to accept facts without proofs), you might want to try other books. (But this is, I hasten to add, an excellent book if you do have the background to understand it.)



As Bcnrd (on MathOverflow) recommended to me, Qing Liu's Algebraic Geometry and Arithmetic Curves seems to be an excellent book on the subject. Most of the background material in commutative algebra is developed from scratch, and the first six chapters furnish a good introduction to the "modern tools". The last three chapters focus more on the arithmetic side of algebraic geometry, but you can always omit that if you so desire. (But if you are interested in number theory, definitely take a look at that!)



Succinctly, I recommend: Take a look at Atiyah and Macdonald and at least read the first few chapters. (The book is roughly 120 pages so covering the first few chapters is not too hard. Though be warned: Some people say that Atiyah and Macdonald is "dense", but I personally found it a very readable book and I think the majority find that so as well.) Then you should have the right background to read Harris and I hope that that will show you how fascinating the subject of algebraic geometry is. Good luck!

Wednesday, 27 March 2013

soft question - Famous mathematical quotes

"There are, therefore, no longer some problems solved and others unsolved, there are only problems more or less solved, according as this is accomplished by a series of more or less rapid convergence or regulated by a more or less harmonious law. Nevertheless an imperfect solution may happen to lead us towards a better one."



Henri Poincaré

gr.group theory - Estimate for the order of the outer automorphism group of a finite simple group

I recommend listing all the possibilities. A number of sources already do this, so it is fairly easy to do again; the table in the ATLAS is quite reasonable. Basically, the outer automorphism group is ridiculously small "most" of the time, so you might care about the details.



I think you'll get |Out(G)| ≤ C*log(|G|) as worst case, but this is pretty pessimistic most of the time.



The outer automorphism group of an alternating group has order at most 4, and almost always has order 2. There are finitely many sporadic groups, so they will not matter asymptotically, but you can quickly check over the list to see that their outer automorphism groups have order at most 2.



The groups of Lie type have a three-part outer automorphism group: the diagonal, field, and diagram parts. The diagram part has order at most 6 (order 6 occurring only for D4). The field part is cyclic but can be "large": if the "q" of your group is p^f, then it is cyclic of order f. The diagonal part is usually small (order at most 4, often at most 2), but can be larger for PSL(n,q) and PSU(n,q); even there it is cyclic of order at most n.



So basically you handle the case of PSL/PSU a little more carefully, then the case of a general group of Lie type using the bounds of 4 and 6 for the diagonal and diagram parts, getting something like O(log(|G|)); then you handle the rest, which are bounded by a constant.
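
To illustrate the logarithmic worst case on one family (my own sketch, not part of the answer; it uses the standard formula |Out(PSL(2,q))| = gcd(2, q-1) * f for q = p^f):

# |Out(PSL(2, q))| = gcd(2, q-1) * f (diagonal part times field part; PSL(2)
# has no diagram automorphisms), while |PSL(2, q)| = q(q^2 - 1)/gcd(2, q-1),
# so the outer automorphism group grows at most like log|G|.
import math

def psl2_data(p, f):
    q = p ** f
    d = math.gcd(2, q - 1)
    return d * f, q * (q * q - 1) // d

for p, f in [(2, 5), (2, 20), (3, 5), (5, 4)]:
    out, order = psl2_data(p, f)
    print(f"PSL(2, {p}^{f}): |Out| = {out:3d},  log|G| = {math.log(order):6.1f}")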

Putting the mass-luminosity relation and the Hertzsprung-Russell diagram together leads us to a mass-age relation; so how do stars lose their mass over time?

I think that the title is completely clear, but here's an expansion:



I was just reading about the mass-luminosity relation, which says massive stars are more luminous than small ones. Well, let's talk about main-sequence stars for now. This relation becomes interesting when it's combined with the Hertzsprung-Russell diagram, which says young stars are more luminous than older ones (originally, that hotter stars are more luminous, and we know that hotter stars are younger).



So mixing these two relations produces some kind of mass-age relation which says young stars are more massive, that is, a star loses mass over time, right? (Please tell me if I'm going wrong.)



If this Mass-age relation is right, how do stars lose this extra mass during their evolution? And where does this stray matter go?

Tuesday, 26 March 2013

linear algebra - The middle eigenvalues of an undirected graph

If you assume the graph is bipartite (as Cam suggested) and $3$-regular, then I believe the middle eigenvalues are always at most $\sqrt{2}$ in absolute value, with the Heawood graph being the only connected graph with equality.



We know that for any positive integer $k$ the number of closed walks on your graph of length $k$ is equal to $\mathrm{Tr}(A^k)=\sum_i \lambda_i^k$. In the case $k=2$, this becomes
$$\sum_{i=1}^n \lambda_i^2 = 2 |E| = 3 n,$$
while for $k=4$ we have
$$\sum_{i=1}^n \lambda_i^4 \geq 15 n,$$
since there are $9n$ closed walks of the form $x \rightarrow y \rightarrow x \rightarrow z \rightarrow x$ (here $y$ possibly can equal $z$), and $6n$ walks of the form $x \rightarrow y \rightarrow z \rightarrow y \rightarrow x$ where $z \neq x$.



Now suppose that $\lambda_i^2 \in [t, 9]$ for every $i$. It follows from convexity that, subject to the constraint $\sum \lambda_i^2=3n$, the sum of $\lambda_i^4$ is maximized when every $\lambda_i^2$ is either $t$ or $9$, that is to say
$$\sum_{i=1}^n \lambda_i^4 \leq \left(\frac{3-t}{9-t}\, n\right)(81) + \left(\frac{6}{9-t}\, n\right)(t^2)=(27-6t)n.$$



Comparing with our lower bound, we see $t \leq 2$. Equality can only hold when there are exactly $n/7$ eigenvalues equal to $3$ in absolute value, and the rest equal to $\sqrt{2}$ in absolute value. This is the spectrum of $n/14$ disjoint copies of the Heawood graph, which is uniquely determined by its spectrum.
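
A quick numerical sanity check of the extremal case (a sketch assuming networkx and numpy are installed):

# The Heawood graph is 3-regular and bipartite on 14 vertices; its adjacency
# spectrum is {+-3 once each, +-sqrt(2) six times each}, so the middle pair of
# eigenvalues has absolute value exactly sqrt(2).
import networkx as nx
import numpy as np

A = nx.to_numpy_array(nx.heawood_graph())
eig = np.sort(np.linalg.eigvalsh(A))
print(np.round(eig, 4))
print("middle pair:", abs(eig[6]), abs(eig[7]), "  sqrt(2) =", np.sqrt(2))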



It feels like there should be a way of saying that large connected graphs have an eigenvalue much smaller than this, but I don't see how to modify this method to show that (I'm not sure how the connectedness of the graph would show up in path counting). If your graph has many $4$-cycles, you can include them in the $k=4$ lower bound to get a better bound on $t$.

soft question - How can I conclude that I live in a solar system?

I find the parallax effect especially convincing evidence. Parallax is the shifting of lines of sight due to translation, e.g. by waiting half an Earth year, at which point theory tells us we have moved about 16 light minutes around the Sun from where we were.



Regarding retrograde motions: as Ilya said, by Kepler's laws closer planets move faster. Now draw two circles centered at the Sun, with a point 'Earth' moving on the inner circle and another point 'Mars' moving more slowly, but in the same sense (say counterclockwise), on the outer circle. Draw a line between the two moving points. That line indicates how Mars looks, viewed from Earth, relative to the distant stars. How does the line move? Put the Sun at the origin. If the order is Sun-Earth-Mars, with Earth and Mars on the positive x-axis, then the slope of the line is decreasing. But put the order Earth-Sun-Mars, with Earth on the negative x-axis and Mars on the positive x-axis; the slope of said line is now increasing. One is 'prograde', the other 'retrograde'.
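
A tiny numerical version of the two-circles picture (a sketch; the circular orbits, with radii in AU and periods in years, are rough illustrative values I chose):

# Earth on an inner circle, Mars on an outer one, both counterclockwise, Mars
# slower.  The angle of the Earth->Mars line of sight mostly increases, but its
# rate of change goes negative around opposition: retrograde motion.
import numpy as np

t = np.linspace(0.0, 2.0, 4000)                      # time in years
earth = 1.00 * np.exp(2j * np.pi * t / 1.00)         # ~1 AU, 1 yr period
mars  = 1.52 * np.exp(2j * np.pi * t / 1.88)         # ~1.52 AU, 1.88 yr period
line_of_sight = np.unwrap(np.angle(mars - earth))    # direction against the stars
rate = np.gradient(line_of_sight, t)
print("fraction of time the apparent motion is retrograde:",
      round(float(np.mean(rate < 0)), 3))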



Finally, the explanation of elliptical versus circular motion is more of an Occam's razor business. Originally we had Ptolemy's ''epicycles'' -- in essence Fourier series. Ptolemy had the Earth at the solar system's center and each planet moving on a system of nested circles, as in $z(t) = r_1 e^{i\omega_1 t} + r_2 e^{i\omega_2 t} + r_3 e^{i\omega_3 t} + \ldots$, $r_1 > r_2 > \ldots$. Ptolemy needed 20 to 30 circles to account for observations. Kepler realized that by putting the Sun at the 'center' and having the planets move in slightly eccentric ellipses, with a focus a bit off from the Sun, sweeping out ''equal areas in equal times'', he could account for all of Ptolemy's data plus Brahe's much more detailed data.

the moon - Visibility of Apollo 11 module

I wasn't around when man landed on the moon in 1969. When I see the moon, I always wonder, were people able to see the rocket?



Yesterday, I looked at the moon in daylight and wondered again.



My question is: how far were people able to see the rocket after it launched?

Monday, 25 March 2013

During an eclipse, does the size of the moon and sun match perfectly?

To expand on what taupunkt commented, on average, according to the article Solar Eclipse: What is a Total Solar Eclipse & When is the Next One? (Rao, 2014), it is a unique quirk of nature that




The sun's 864,000-mile diameter is fully 400 times greater than that of our puny moon, which measures just 2,160 miles. But the moon also happens to be about 400 times closer to Earth than the sun.




This is often stated in middle and high school science textbooks.



However, the orbits of the Earth around the sun and the moon around the Earth are elliptical, as illustrated in the diagram below:



[Image: diagram of the elliptical orbits of the Moon around the Earth and the Earth around the Sun]



Image source: NOAA



Because both orbits are elliptical, the exact ratio mentioned above does not always hold; when the Moon appears too 'small' to fully cover the solar disc, the result is an annular eclipse.



A comparison between a total and annular eclipse is shown below:



[Image: comparison of a total and an annular solar eclipse]



Image source: ScienceJedi



NASA's eclipse data pages, in particular their page on eclipses and the Moon's orbit, discuss in detail (with considerable data) how the




interaction and harmonics of the synodic, anomalistic, and draconic months not only determine how frequently eclipses occur, but they also control the geometric characteristics and classification of each eclipse.




Within the NASA link, the fraction of the sun's disk obscured by the moon (referred to as the 'eclipse magnitude') has been calculated, both from observations and from predictions. For example, using the data for solar eclipses from 2011 to 2020, the eclipse magnitude of annular eclipses in that period ranges from about 0.95 to 0.99 (95% to 99%).



A catalog of eclipses up to the year 3000 is provided on this NASA page.



The greatest difference would occur if the moon is at its furthest from the Earth (apogee) and the Earth closest to the sun (perihelion).
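
A rough comparison of the apparent sizes at those orbital extremes (a sketch; the diameters and distances are approximate values I supplied, not taken from the NASA pages above):

# Apparent angular diameter = 2*atan(diameter / (2*distance)).
import math

def angular_diameter_deg(diameter_km, distance_km):
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

SUN_D, MOON_D = 1_391_000, 3_474                 # km
print("Sun, perihelion/aphelion: %.3f / %.3f deg"
      % (angular_diameter_deg(SUN_D, 147.1e6), angular_diameter_deg(SUN_D, 152.1e6)))
print("Moon, perigee/apogee:     %.3f / %.3f deg"
      % (angular_diameter_deg(MOON_D, 363_300), angular_diameter_deg(MOON_D, 405_500)))
# When the Moon's apparent diameter drops below the Sun's, an eclipse can only
# be annular; when it exceeds the Sun's, a total eclipse is possible.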

ag.algebraic geometry - Various Cartan's Lemmata

To expand on Mariano's answer, the real point is that Élie Cartan did an immense amount of work across many areas of mathematics. He was one of the best mathematicians of the early modern period, by which I mean the period of Hilbert. Everyone knows that Hilbert was the greatest or one of the greatest mathematicians of this era. Poincaré was only a little earlier and many people would put him first or second. But beyond these two, if you made a list of the ten greatest mathematicians born between 1850 and 1900, I think that it would be hard not to include Cartan. So asking what this Cartan lemma has to do with that Cartan's lemma is a little like asking what Eulerian circuits have to do with the Euler-Lagrange equation.



To give two examples, the Cartan matrix is named after Cartan because he completed the classification of complex simple Lie algebras. (The page says without citation that the Cartan matrix is actually due to Killing, but that is a side point, because Killing's work was reworked and extended by Cartan.) On the other hand, the Cartan-Hadamard theorem says that a complete, connected, simply connected manifold with non-positive sectional curvature is diffeomorphic to $\mathbb{R}^n$. Cartan is the "C" in CAT(0) and CAT(-1) spaces for this reason. Cartan matrices have no direct connection to CAT(0) spaces. (Indirectly speaking, they are both related to symmetric spaces, but lots of things are indirectly related.)



Cartan also invented the differential graded algebra of differential forms on a manifold. It is not named after Cartan, but it certainly could have been, and it is one of the best definitions of the era.



Élie Cartan's son Henri Cartan was also a great mathematician. As well as writing some great papers, that Cartan did a lot of collaborative work, including Bourbaki and the Cartan seminar. A lot of things are also named after him, for instance the Cartan-Eilenberg resolution.



I can't access the Cartan lemma in Freitag and Kiehl. I see no direct connection between your Cartan lemma in complex analysis, which is a lower bound on the norm of the value of a complex polynomial at a non-root, and your Cartan lemma on decompositions in normed rings. Addendum: Thanks to Mohan's answer, I now know that the bound for complex polynomials is due to Henri Cartan in 1928. It looks like the Cartan lemma on normed rings is also due to Henri Cartan in 1940, and was originally applied in complex analysis even though it is a result in ring theory. So maybe my historical review is not all that well informed for the specific question. And, although I still don't have access to Freitag and Kiehl, the result there could be a Cartan's Lemma in sheaf cohomology which is also due to Henri Cartan. So maybe my first paragraph playing up Élie Cartan is not all that well informed for this question, although he is also certainly responsible for various "Cartan's Lemmas".

rt.representation theory - Are Dynkin diagrams of some universal construction?

This is a general question. The classification of semisimple Lie algebras using Dynkin diagrams has always amazed me. And these A, B, C, D, E, F, G diagrams seem to appear quite often in the realm of representation theory (of all kinds of things: Lie algebras, Lie groups, quivers, etc.).
My question is (vaguely put): WHY are these diagrams useful? Are they instances of some more universal (thus more imaginable, more trivial) construction? They always seem very mysterious to me.

black holes - How possible is it that the Large Hadron Collider (LHC) on its second try, could disrupt the gravity of Earth?

It cannot disrupt the gravity of the Earth. There are other doomsday scenarios with infinitesimal probability, though these stories have been debunked many times.




The report ruled out any doomsday scenario at the LHC, noting that the physical conditions and collision events which exist in the LHC, RHIC and other experiments occur naturally and routinely in the universe without hazardous consequences,[3] including ultra-high-energy cosmic rays observed to impact Earth with energies far higher than those in any man-made collider.


gt.geometric topology - Orbifold fundamental group in terms of loops?

There is also a natural interpretation of the orbifold fundamental group in terms of loops, using an extended version of a Wirtinger presentation. Let's start out with the closed case and mention the cusped case at the end.



As a warm-up, consider an orbifold whose singular locus is a link, with underlying space $S^3$. The fundamental group of the orbifold can be computed by starting from the Wirtinger presentation of the link and then introducing a torsion relation for each meridian: for example, if the cone angle along the link is $2\pi/3$ everywhere, then each meridian $\mu_i$ satisfies the relation $\mu_i^3=1$. A loop around a link component does not bound a (smooth immersed) disk in the orbifold; instead it bounds a disk with a cone point of order 3. However $\mu_i^3$ does bound a disk in the orbifold, and is trivial.



To make this interpretation general, we need to consider 3-orbifolds more generally. A geometric 3-orbifold is really a trivalent graph with edges decorated by torsion orders (or cone angles, depending on taste) embedded in a 3-manifold, so the underlying manifold, the embedding, and the trivalent points all need to be accounted for.



In reverse order: the trivalent points introduce relations $abc=1$ (compare with the finite subgroups of SO(3), which are actually the isotropy subgroups fixing these types of points). The next two conditions really have to be considered together: what needs to be computed is a Wirtinger presentation of the complement of the trivalent graph in the underlying space. Then quotient by the torsion relations and the relations coming from the trivalent points.



For cusped (geometric) manifolds, there are extra ways to decorate the graph: there can be trivalent points where the orders of torsion along the edges incident to these vertices are (2,4,4), (2,3,6) or (3,3,3), and there can be 4-valent points where the torsion orders of the edges are (2,2,2,2).



It should be pointed out that such a computation can also be done by Damian Heard's ORB, which is extremely useful if a large example needs to be considered.

galactic dynamics - Questions about spiral galaxy arms

Actually, the stars and nebulae that make up a spiral arm are only temporarily part of that arm. Spiral arms are more like sound waves, where individual particles move around a more or less stationary position. (Look for instance at the animation of longitudinal waves by Dan Russell: the red dots move a bit to the left and to the right around a stationary position.) Dust, gas and stars move towards or away from one another just as in a longitudinal wave. Where the dust, gas and stars come close together (and where, therefore, the density increases), spiral arms can be seen, as more stars close together increase the brightness at that position in the galaxy.



This effect is, furthermore, much enhanced because the increased density of dust and gas in the spiral arm causes protostars to form. The brightest stars burn up their fuel so fast that they cease to exist even before the longitudinal wave (the spiral arm) has passed. These very bright stars exist only for a small portion of their orbital period around the galaxy's centre, and only while they are in the spiral arm. The large majority of stars live much longer, but are also much dimmer and contribute only a little to the overall brightness of the galaxy.



This is what makes the spiral arms so much brighter than the rest of the disk, where a lot of stars also exist; those can hardly be seen, though, as they are much dimmer.



Of course, the stars do not revolve around a stable position in the galaxy (as the red dots in the wave animation) but follow their own orbits around the centre of the galaxy. Sometimes a bit faster, and sometimes a bit slower depending on the position relative to the spiral arms.



Because the spiral arms are waves, it does not matter that stars near the centre move faster than the stars at the edge. It just means that they will be part of the spiral arm for a shorter period of time.

Sunday, 24 March 2013

nt.number theory - Consecutive integers with many prime factors

This question is inspired by Project Euler's Problem 47.



Let m and n be positive integers. Consider the following four (related) statements:



  1. There exist $m$ consecutive integers, each of which has at least $n$ distinct prime factors.

  2. There exist $m$ consecutive integers, each of which has precisely $n$ distinct prime factors.

  3. There exist $m$ consecutive integers, each of which has at least $n$ prime factors (counted with multiplicity).

  4. There exist $m$ consecutive integers, each of which has precisely $n$ prime factors (counted with multiplicity).

Question $i$: For which pairs $(m,n)$ is statement $i$ true?



For $n = 2$ and any $m$, statement 3 is obviously true (it is just the elementary fact that there are prime gaps of arbitrary length). I suppose it is not too hard to avoid prime powers, so that statement 1 is also true for $n=2$ and arbitrary $m$ (although at the moment I cannot see if this follows by an easy modification of the usual $(m+1)!+2, \ldots, (m+1)!+m+1$ argument). Are there other elementary cases? In particular, is there an easy proof that Problem 47 (which is to find the first instance of Statement 1 for $m=n=4$) is actually solvable? The actual answer is small enough that it is found rather quickly by a naive brute force approach, but it would be nice to know in advance that there is an answer (perhaps even having an upper bound, so that one also knows that one may find the answer before the heat death of the universe).
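
Here is the sort of naive brute force alluded to above (a sketch assuming sympy is available; the search limit and helper name are arbitrary, and no upper bound is claimed):

# Find the start of the first run of m consecutive integers, each with at least
# n distinct prime factors (Statement 1 with m = n = 4).
from sympy import primefactors

def first_run(m=4, n=4, limit=1_000_000):
    run = 0
    for k in range(2, limit):
        run = run + 1 if len(primefactors(k)) >= n else 0
        if run == m:
            return k - m + 1
    return None                 # no run found below the limit

print(first_run())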



Statement $i$ can be strengthened by requiring the existence of infinitely many $m$-tuples; call this Statement $i'$.



Statements 2 and 4 are probably too hard to say anything about in general (but maybe not?), but for $m=2$, Statements $2'$ and $4'$ are both weaker versions of this question.

Linear Algebra Texts?

Hands-down, my favorite text is Hoffman and Kunze's Linear Algebra. Chapter 1 is a review of matrices. From then on, everything is integrated. The abstract definition of a vector space is introduced in chapter 2 with a review of field theory. Chapter 3 is all about abstract linear transformations as well as the representation of such transformations as matrices. I'm not going to recount all of the chapters for you, but it seems to be exactly what you want. It's also very flexible for teaching a course. It includes sections on modules and derives the determinant both classically and using the exterior algebra. Normed spaces and inner product spaces are introduced in the second half of the book, and do not depend on some of the more "algebraic" sections (like those mentioned above on modules, tensors, and the exterior algebra).



From what I've been told, H&K has been the standard linear algebra text for the past 30 or so years, although universities have been phasing it out in recent years in favor of more "colorful" books with more emphasis on applications.



Edit: One last thing. I have not heard great things about Axler. While the book achieves its goal of avoiding bases and matrices for almost the entire book, I have heard that students who have taken a course modeled on Axler have a very hard time computing determinants and don't gain a sufficient level of competence with explicit computations using bases, which are also important. Based on your question, it seems like Axler's approach would have exactly the same problems you currently have, but going in the "opposite direction", as it were.

Saturday, 23 March 2013

What is the current status of the function fields Langlands conjectures?

(1) Regarding the relationship between geometric Langlands and function field Langlands:
typically research in geometric Langlands takes place in the context of rather restricted ramification (everywhere unramified, or perhaps Iwahori level structure at a finite number of points). There are investigations in some circumstances involving wild ramification (which is roughly the same thing as higher than Iwahori level), but I believe that there is not a definitive program in this direction at this stage.



Also, Lafforgue's result was about constructing Galois reps. attached to automorphic forms. Given this, the other direction (from Galois reps. to automorphic forms) follows immediately, via converse theorems, the theory of local constants, and Grothendieck's theory of $L$-functions in the function field setting.



On the other hand, much work in the geometric Langlands setting is about going from local systems (the geometric incarnation of an everywhere unramified Galois rep.) to automorphic sheaves (the geometric incarnation of an automorphic Hecke eigenform) --- e.g. the work of Gaitsgory, Mirkovic, and Vilonen in the $GL_n$ setting does this. I don't know how much is
known in the geometric setting about going backwards, from automorphic sheaves to local systems.



(2) Regarding the status of function field Langlands in general: it is important, and open, other than in the $GL_n$ case of Lafforgue, and various other special cases. (As in the number field setting, there are many special cases known, but these are far from the general problem of functoriality. Langlands writes in the notes on his collected works that "I do not believe that much has yet been done beyond the group $GL(n)$''.) Langlands has initiated a program called ``Beyond endoscopy'' to approach the general question of functoriality. In the number field case, it seems to rely on unknown (and seemingly out of reach) problems of analytic number theory, but in the function field case there is some chance to approach these questions geometrically instead. This is a subject of ongoing research.

gt.geometric topology - Does a regular neighborhood always exist for a properly embedded surface in a 3-manifold?

Can someone please clarify whether there always exist regular neighborhoods for a properly embedded surface in a 3-manifold? More precisely, if $F$ is a properly embedded surface in a 3-manifold $M$ and I give a simplicial complex structure to $M$, will $F$ automatically receive a subcomplex structure (after possibly finitely many barycentric subdivisions of the triangulation of $M$)? If so, then we can obviously construct a regular neighborhood of $F$. But I am feeling unsure about it now due to the following example, where $F$ might be too big to allow such a compatible subdivision of the triangulation of $M$:



Example: Consider a Möbius band without a contractible neighborhood in a genus 1 handlebody. One can construct one as follows: first take a solid cylinder $D^2\times[-1,1]$ and look at the central strip. Now glue the ends of the cylinder with a $180$ degree twist to make it a genus 1 handlebody. This glues up the central strip with a twist to make it a Möbius band, and it has no small neighborhood in the ambient genus 1 handlebody. It seems to me that this Möbius band will not have a subcomplex structure for any given simplicial complex structure on the genus 1 handlebody.



In case the answer to my question is yes, I would also like to ask if there are no issues regarding the orientability of $F$ and $M$ while considering regular neighborhoods. That is whether it matters for constructing such a neighborhood if $F$ is non-orientable but $M$ is orientable or vice-versa or other combinations.



I am unable to clarify this by looking at Hempel's book on 3-manifolds. It would be great if someone could elucidate this.

Thursday, 21 March 2013

A learning roadmap for algebraic geometry

FGA Explained. Articles by a bunch of people, most of them free online. You have Vistoli explaining what a Stack is, with Descent Theory, Nitsure constructing the Hilbert and Quot schemes, with interesting special cases examined by Fantechi and Goettsche, Illusie doing formal geometry and Kleiman talking about the Picard scheme.



For intersection theory, I second Fulton's book.



And for more on the Hilbert scheme (and Chow varieties, for that matter) I rather like the first chapter of Kollar's "Rational Curves on Algebraic Varieties", though he references a couple of theorems in Mumford's "Curves on Surfaces" to do the construction.



And on the "algebraic geometry sucks" part, I never hit it, but then I've been just grabbing things piecemeal for awhile and not worrying too much about getting a proper, thorough grounding in any bit of technical stuff until I really need it, and when I do anything, I always just fall back to focus on varieties over C to make sure I know what's going on.



EDIT: Forgot to mention, Gelfand, Kapranov, Zelevinsky "Discriminants, resultants and multidimensional determinants" covers a lot of ground, fairly concretely, including Chow varieties and some toric stuff, if I recall right (don't have it in front of me)

ag.algebraic geometry - non principally polarized complex abelian varieties

I've always meant to sit down and figure out some examples. OK, got it. I think the following works over any field (including finite fields and numbers fields) and so must be standard (unless I've overlooked something).



Let $E$ and $E'$ be non-isogenous elliptic curves over a field $k$ (i.e., no nonzero maps between them over $k$), and $G$ a group scheme of prime order $p$ over $k$ which occurs inside both $E$ and $E'$. Fix such embeddings. (E.g., $E$ and $E'$ over a number field with split $p$-torsion and $G = \mathbf{Z}/p\mathbf{Z}$ embedded in each.) Embed $G$ diagonally into $E \times E'$, and let $A = (E \times E')/G$.



I claim that any polarization of $A$ over $k$ has degree divisible by $p$. (Thus, this gives examples of abelian surfaces without principal polarization, and also in char. $p > 0$ examples with no separable polarization by using non-isogenous ordinary elliptic curves.) Suppose to the contrary, and let $\phi:A \rightarrow A^{\rm t}$ denote the symmetric isogeny that "is" such a polarization (I'm never sure if we should call $\phi$ the polarization, or be more symmetric and call $(1,\phi)^{\ast}(\mathcal{P}_A)$
on $A \times A^{\rm t}$ the polarization), so ${\rm deg}(\phi) = d^2$ where $p$ doesn't divide $d$.



Due to the definition of $A$, the map $j:E \rightarrow A$ induced via inclusion into the first factor of $E \times E'$ followed by projection has trivial kernel and so is a closed subvariety. There is also the dual map $j^{\rm t}:A^{\rm t} \rightarrow E^{\rm t}$, and by a general theorem on polarizations the composite map
$$f:E \stackrel{j}{\rightarrow} A \stackrel{\phi}{\rightarrow} A^{\rm t} \stackrel{j^{\rm t}}{\rightarrow} E^{\rm t}$$
is a symmetric isogeny that "is" a polarization of $E$. In particular, it must have square degree. We likewise get a symmetric isogeny $f':E' \rightarrow {E'}^{\rm t}$ that
"is" a polarization of $E'$.



Consider the quotient map $q:E \times E' \rightarrow A$, which is an isogeny of degree $p$. The dual map
$$q^{\rm t}:A^{\rm t} \rightarrow (E \times E')^{\rm t} \simeq E^{\rm t} \times {E'}^{\rm t}$$
is also an isogeny of the same degree, and the composite map
$$E \times E' \stackrel{q}{\rightarrow} A \stackrel{\phi}{\rightarrow} A^{\rm t}
\stackrel{q^{\rm t}}{\rightarrow} E^{\rm t} \times {E'}^{\rm t}$$
must be a direct product of a pair of maps $E \rightarrow E^{\rm t}$
and $E' \rightarrow {E'}^{\rm t}$ since $E$ and $E'$ are assumed to not be $k$-isogenous.
These maps must respectively be $f$ and $f'$ as defined above. In particular, since the degree of $\phi$ is not divisible by $p$ but the degrees of $q$ and $q^{\rm t}$ are each equal to $p$, and moreover each of $f$ and $f'$ has square degree, we conclude that one of $f$ or $f'$ has degree not divisible by $p$ and the other has degree divisible by $p^2$. But an elliptic curve has a unique polarization of each square degree $m^2$, namely the composite of $[m]$ with the canonical autoduality (with the right sign to get the ampleness condition). In particular, the kernel is the $m$-torsion. It follows that one of $f$ or $f'$ has kernel with trivial $p$-part, and the other has kernel containing the entire $p$-torsion.



Now $E$ and $E'$ are each naturally abelian subvarieties of $A$, and $\phi$ is an isomorphism on $p$-divisible groups (since its degree is prime to $p$), so the $p$-parts of the respective kernels of $f$ and $f'$ must be where each meets the $p$-part of the kernel of $q^{\rm t}$. For one of $f$ or $f'$, this is the entire $p$-torsion of the corresponding elliptic curve, and in particular contains $G$. But $E$ and $E'$ in $A$ meet in exactly $G$ (scheme-theoretically) due to the construction of $A$, whence $G$ sits in both kernels. But one of the kernels has trivial $p$-part. Contradiction.

interstellar travel - Speed comparison of both voyagers

If you notice carefully, you'll see that their speeds away from the sun are fairly similar (7-9 km/s), and the difference is only in their speeds away from the earth. This is because of a small component of earth's motion around the sun (which is at about 30 km/s) adding to Voyager 1's motion and subtracting from Voyager 2's motion.

Wednesday, 20 March 2013

set theory - How can an ultrapower of a model of ZFC be "ill-founded" yet still satisfy ZFC?

My understanding (please correct me if I'm wrong) is that if you have some transitive set $M$ which is an $\epsilon$-model of ZFC, and you take an ultrapower of it using an appropriate ultrafilter, you wind up with a new model whose membership relation is not the $\epsilon$ relation of the ambient set theory, but which still satisfies ZFC. Furthermore, if the membership relation of the ultrapower is well-founded, one can always use the Mostowski collapse theorem to produce an isomorphic $\epsilon$-model.



My question is this: how could one possibly end up with a model of ZFC which satisfies the axiom of regularity ("every set is disjoint from one of its members"), yet whose membership relation isn't well-founded?



I'm struggling to imagine this; the most I can come up with is that for no set in the infinite chain is its transitive closure also a set (of the model). But I'm skeptical about whether or not that can be the case, because it seems like you ought to be able to construct the transitive closure using definition by transfinite recursion, letting $f(0)$ be any set in the chain and $f(n+1)=\bigcup f(n)$ (axiom of union). Then $f(\omega)$ (axiom of infinity) ought to contain all the sets needed to build a contradiction to regularity.



Sorry if this question sounds like I'm arguing with myself. This has been bothering me for a few days now.

Tuesday, 19 March 2013

extra terrestrial - What are the current observational limits on the existence of Dyson spheres/swarms/rings?

Reading Dyson's original argument gives some useful information. He says that such a sphere would have a surface temperature of 200-300 Kelvin. That's a tiny fraction of the surface temperature of a star. Such a sphere would also have to be rather large - he says that it would have to have a circumference the size of Earth's orbit. So we would simply have to look for a massive, cool object radiating very little visible light yet curving space-time just as a star would. Not an easy job, but not impossible.



Besides a black hole, scientists don't know of any other naturally-occurring objects with similar properties. Neutron stars would be insignificantly small, as would other compact stellar remnants (large black holes notwithstanding, but even they could emit different types of radiation). So a Dyson sphere would look completely different when compared to all other objects we know.



Now, Dyson rings, swarms, etc. would be much different. It is possible that a Dyson device consisting of many smaller objects - such as a ring or swarm - could be mistaken for natural objects occurring in a stellar system, such as a dense protoplanetary disk. However, I'm not sure just how much light from the central star such a swarm or ring would let through (quite a lot, I imagine); this could make it easier to distinguish such a device from a typical protoplanetary disk.



Edit



I should add that I don't know of any current or planned experiments searching for Dyson spheres; these are simply methods that could be used.

Sunday, 17 March 2013

terminology - Term for a momentary geometric pattern formed by astronomical objects

Momentary events in general (comets, supernovae, solar flares, etc.) can be lumped under the general heading of "transient phenomena". 'Transient asterisms' may be a candidate term for the concept you are describing.



Then again, we now know that on a large enough time scale all asterisms are eventually transient in nature. So one might justifiably take the point of view that the adjective "transient" is redundant. So perhaps calling the patterns you describe simply as "asterisms" is not as bad of a candidate term as you might suspect.

gravity - Is there any way a meteor can hit at less than escape velocity?

Edited.
No, if you mean the escape velocity from Earth. This follows simply from the fact that energy $E$ is conserved: an object that is not gravitationally bound to Earth has $E>0$ and hence $v>v_{\mathrm{esc}}$ when hitting the ground.




Yes, if you mean the escape velocity from the Solar system, because the Earth moves with $v_{\rm Earth}=v_{\rm escape}/\sqrt{2}$ relative to (but not towards) the Sun. Here $v_{\rm escape}=\sqrt{2GM_{\odot}/1\,{\rm AU}}$ is the local escape speed from the Sun, while $v_{\rm Earth}=\sqrt{GM_{\odot}/1\,{\rm AU}}$ is the speed of the local circular orbit. An object at 1 AU from the Sun and bound to the Sun cannot have speed greater than $v_{\rm escape}$.



Now, the impact speed of an object that moves at $v_{\rm escape}$ can be as low as $v_{\rm escape}-v_{\rm Earth}=v_{\rm escape}(1-1/\sqrt{2})$ if it hits Earth "from behind", i.e. moving in the same direction as Earth at the time of impact.



Note also that meteors typically do not move faster than $v_{\rm escape}$, for they don't come from outer space, but from within the Solar system.
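
Plugging in rounded numbers (a sketch; Earth's own gravity is ignored here, just as in the argument above):

# Local escape speed from the Sun at 1 AU, Earth's circular orbital speed, and
# the minimum "from behind" impact speed v_escape * (1 - 1/sqrt(2)).
import math

G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11   # SI units

v_escape = math.sqrt(2 * G * M_SUN / AU)       # ~42 km/s
v_earth = v_escape / math.sqrt(2)              # ~30 km/s
print("v_escape = %.1f km/s, v_Earth = %.1f km/s, minimum impact = %.1f km/s"
      % (v_escape / 1e3, v_earth / 1e3, (v_escape - v_earth) / 1e3))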

ag.algebraic geometry - Implement intersection products

It would help to know the context of your work better - what category are you working in?



I currently work on computing arithmetic (Arakelov) intersection pairings on curves, for which I use Magma, doing the actual calculations using either resultants or Gröbner bases (at least at the finite primes); all you are calculating are lengths of modules, which I assume is fairly standard for commutative algebra software.

at.algebraic topology - Understanding the product in topological K-theory

I apologize that this is perhaps not adequate for mathoverflow but I have struggled with this for days now and become desperate...



The reduced K-group $\tilde{K}(S^0)$ of the zero sphere is the ring $\mathbb{Z}$, as being the kernel of the ring morphism $K(S^0)\to K(x_0)$. The ring structure on $K(S^0)$ and $K(x_0)$ comes from the tensor product $\otimes$ of vector bundles.



If $H$ is the canonical line bundle over $S^2$ then $(H-1)^2=0$, where the product comes from $\otimes$. The Bott periodicity theorem states that the induced map $\mathbb{Z}\left[H\right]/(H-1)^2\to K(S^2)$ is an isomorphism of rings. So $\tilde{K}(S^2)\cong \mathbb{Z}\left[H-1\right]/(H-1)^2$, I think, and every square in $\tilde{K}(S^2)$ is zero.



The reduced external product gives rise to a map $\tilde{K}(S^0)\to \tilde{K}(S^2)$ which is a ring (?) isomorphism (see e.g. Hatcher, Vector Bundles and K-Theory, Theorem 2.11), but then not every square in $\tilde{K}(S^2)$ is zero. How can this be?



Aside from that I do not understand the relation of $\otimes:K(X)\otimes K(X)\to K(X)$ and the composition of the external product with the map induced from the diagonal map
$K(X)\otimes K(X)\to K(X\times X)\to K(X)$.

Friday, 15 March 2013

ac.commutative algebra - Infinite collection of elements of a number field with very similar annihilating polynomials

For $n>4$, almost all fields of degree $n$ will have $r>1$:



Fix a field $K$ with discriminant $D_0$. Fix the $n-1$ coefficients $b_{n-1},...,b_{i+1}, b_{i-1},..., b_0$. The discriminant of the polynomial $x^n+b_{n-1}x^{n-1}+...$ is a polynomial $D(b_i)$ in the single variable $b_i$, and is of degree at least $4$.



If this polynomial is squarefree, as it will be for almost all $n-1$ fixed coefficients, then the hypersurface $D_0y^2 = D(b_i)$ has genus at least $1$, and hence finitely many integer points.



But, every polynomial defining the same field must have the same discriminant up to a square factor, and hence $r > 1$.



Going back on my comment above: since the degree of the discriminant (multivariate) polynomial is large (linear in the number of variables) the equation $D(b_0,...,b_{n-1}) = D_0y^2$ will probably have only a finite number of solutions for most $D_0$, if $r$ is much smaller than $n$.



Therefore, my new pessimistic conjecture is that for almost all fields you will have $r \gg n$.



Note: $r \le n-1$ - in any number field there are always an infinite number of algebraic integers with trace 0.

mp.mathematical physics - Asymptotic Matching of an logarithmic Outer solution to an exponential growing inner solution

Your question is a bit vague. Indeed, the standard procedure would involve rejecting the singular solutions and then using a combination of inner and outer expansions to satisfy the boundary conditions. There are various specific methods of achieving the goal (Vishik-Lyusternik and the matched asymptotic expansions are the most popular), but typically, one or several boundary conditions are only satisfied asymptotically (i.e. with an error vanishing as the small parameter tends to its limit, with the error often being exponentially small). Therefore, if some (or all) of your boundary conditions are satisfied only approximately (i.e. only in the limit of vanishing small parameter), this may in principle be the "feature" of the method that you are using. Otherwise, if the boundary conditions cannot be satisfied in this sense, it usually signals of the presence of yet another boundary layer which needs to be accounted for.



Kevorkian and Cole is an excellent source; you may also find helpful Van Dyke's "Perturbation methods in fluid mechanics" (a bit terse), Nayfeh's "Perturbation methods" (textbook) and de Jager and Furu's "The theory of singular perturbations" (good alternative).

Why are there no stars on New Horizons images of Pluto

I followed the New Horizons Mission a little, and saw among others this image of Pluto:



[Image: Pluto's eclipse]



I wonder why you can't see any stars in it. As far as my very basic knowledge of astronomy goes, I think you can even see stars during a "Sun-Moon eclipse" here on Earth. So besides having here a spectacular "Sun-Pluto eclipse", I furthermore ask myself whether that eclipse is even necessary to see other stars, as those pictures are already taken quite far away from the Sun, and outside an atmosphere.



Is that not enough to see stars? Or does NASA remove the stars from the images, before publishing them? (If so, why? Not to distract?)



Thanks for answers!

Thursday, 14 March 2013

star - Why does squinting make hard-to-see objects clearer?

Squinting works the same way as a pinhole camera.



Ideally, light from a single point source entering your eye anywhere on your pupil will be focused on a single spot on your retina. But this works perfectly only if you have perfect vision; otherwise light entering near the top of your pupil may be directed to a slightly different spot on your retina than light entering near the bottom.



By squinting, you block out some of the light from the edges, effectively making your pupil narrower, creating a sharper but dimmer image on your retina. (You may find that it improves the vertical resolution more than the horizontal resolution.)



If you happen to be nearsighted (as I am), you can see a similar effect by looking through a small pinhole, or through a small aperture made with your fingers. If the light is bright enough, you'll see a dimmer but sharper image.



The irregular shapes may be interference from your eyelashes.

dobsonian telescope - Cancelling out earth rotation speed, Altazimuth mount

Given ra, dec, lat, lon in radians, and d in number of fractional days
since '1970-01-01 00:00:00 UTC', the azimuth of a star is:



$-\cot^{-1}\big(\tan(\text{dec})\cos(\text{lat})\sec(\text{d1}+\text{lon}-\text{ra})+\sin(\text{lat})\tan(\text{d1}+\text{lon}-\text{ra})\big)$



and the elevation is:



$\tan^{-1}\left(\frac{\sin(\text{dec})\sin(\text{lat})-\cos(\text{dec})\cos(\text{lat})\sin(\text{d1}+\text{lon}-\text{ra})}{\sqrt{(\cos(\text{dec})\sin(\text{lat})\sin(\text{d1}+\text{lon}-\text{ra})+\sin(\text{dec})\cos(\text{lat}))^2+\cos^2(\text{dec})\cos^2(\text{d1}+\text{lon}-\text{ra})}}\right)$



where d1 is: $\frac{\pi\,(401095163740318\, d+11366224765515)}{200000000000000}$



(you will need to resolve the ambiguity in the arc(co)tangent, but
this isn't difficult).



These formulas aren't as daunting as they seem, since, for you, lat,
lon, ra, and dec will be fixed, and the only thing that changes is d.
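
For convenience, here is a direct transcription of the formulas into Python (a sketch only: I have not re-derived the conventions, and I resolve the arc-cotangent ambiguity with atan2, which is my guess at the intended branch):

import math

def d1(d):
    # the sidereal-angle term quoted above, d in fractional days since 1970-01-01 UTC
    return math.pi * (401095163740318 * d + 11366224765515) / 200000000000000

def az_el(ra, dec, lat, lon, d):
    """Azimuth and elevation (radians) from the formulas above; all inputs in
    radians except d."""
    h = d1(d) + lon - ra
    # azimuth = -arccot( tan(dec)cos(lat)sec(h) + sin(lat)tan(h) ), branch via atan2
    az = -math.atan2(math.cos(h),
                     math.tan(dec) * math.cos(lat) + math.sin(lat) * math.sin(h))
    num = math.sin(dec) * math.sin(lat) - math.cos(dec) * math.cos(lat) * math.sin(h)
    den = math.hypot(math.cos(dec) * math.sin(lat) * math.sin(h)
                     + math.sin(dec) * math.cos(lat),
                     math.cos(dec) * math.cos(h))
    el = math.atan2(num, den)
    return az, el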



Hope this helps, but I'm worried that it just demonstrates how complicated these formulas are.

Wednesday, 13 March 2013

If the fourier transformed of f is odd, is f odd?

Look at $L^2$ first: in $L^2$ the Fourier transform is diagonalizable. The space of odd functions in $L^2$ is the direct sum $Eig(\mathcal{F},+i)\oplus Eig(\mathcal{F},-i)$ of the eigenspaces of $\mathcal{F}$ with respect to the eigenvalues $+i$ and $-i$. Because eigenspaces are mapped into themselves, $\mathcal{F}$ maps odd functions to odd functions.



In particular this is true for all $f\in L^1\cap L^2$, and by continuity it is true for all $f\in L^1$. (In fact an analogous statement is true for all tempered distributions.)



EDIT: Oh, I just saw that you asked for the other direction. Using the same argument you can show that $\mathcal{F}f$ odd $\implies f=\mathcal{F}^3(\mathcal{F}f)$ odd for all $f\in L^2$. This time I'm not quite sure whether it is possible to extend this from $L^1\cap L^2$ to $L^1$, but maybe the result for $L^1\cap L^2$ is useful for you too.

Tuesday, 12 March 2013

amateur observing - Is it difficult to see DSO in your eyepiece?

It's a pretty broad topic. It depends a bit on the type of DSO. Speaking in general, most DSOs lack brightness, and so to observe them you need two things:



  • lack of light pollution

  • lots of aperture

Light pollution



Observing from the city, DSOs are challenging. The further away you are from city lights, the better your views. Deserts, national parks, sparsely populated areas are all good. Driving at least 1 hour away from the city usually provides very noticeable improvements. Light pollution maps help in finding places with a dark sky.



http://www.jshine.net/astronomy/dark_sky/



http://www.lightpollutionmap.info/



Aperture



The bigger the aperture, the better you can see DSOs. If you're a DSO hunter, you must be focused on increasing aperture before you do anything else. What helps here is having an instrument with a good aperture / cost ratio, such as a dobsonian reflector.



There's essentially no limit here - more is always better.



It should be noted that a larger aperture will always perform better than a smaller one, even when light pollution is very heavy - but of course it's best if both aperture and dark sky cooperate.




Other factors that may help:



Filters



These are the most over-utilized and over-hyped accessories, especially for beginners (most people are too focused on filters and not enough on the things that really matter). That being said, some nebulae do look better through certain filters.



Dave Knisely is one of the foremost experts in the amateur astronomy community in terms of DSOs and filters. Read anything he writes on the subject, e.g. this long article:



http://www.prairieastronomyclub.org/resources/by-dave-knisely/filter-performance-comparisons-for-some-common-nebulae/



In a nutshell: If you can only get one DSO filter, get a UHC. If you can get two, also get an OIII filter. Of course, dark sky and large aperture should always come first. Don't bother with filters if you're observing from the city using a small aperture.



Filters are useful for nebulae. They don't help with galaxies, star clusters, individual stars, etc.



Books



Find a book called Turn Left At Orion. It's a fantastic introduction for beginners to DSOs. It will help you find the easier targets, and essentially open up the sky for you. Later you'll be able to figure things out on your own.



http://www.amazon.com/Turn-Left-Orion-Hundreds-Telescope/dp/0521153972/




It should be noted that the entire Messier catalog can be observed in a relatively small telescope even in the city - and all Messier objects are DSOs. I've seen the Dumbbell Nebula (M27) and the Ring Nebula (M57) in 6" of aperture in a very light-polluted place (Silicon Valley). So, no, it's not that hard to observe DSOs.



Of course, if the sky is dark and/or the aperture large, all DSOs look better. This is especially true of globular clusters, such as M13. They already look pretty spectacular in any aperture, even in a small scope - but the view is mind boggling in a very large dob.



[Image: M13 aperture comparison]

artificial satellite - Did I see a meteor or a re-entry?

Last night (January 21, possibly 5:30 UTC), from my house in northern California, I saw a moving point of light in the sky, going west to east, crossing about 120 degrees of sky in 5 seconds or so.



It didn't look like my experience of meteors - thicker and slower, and quite bright. I think it survived until it was out of sight entirely, rather than burning out.



Aerospace.org's reentry predictions page doesn't show anything for that time.



I've seen ISS and other satellites; this was a lot larger and brighter, and my favorite satellite pass schedule also shows nothing.



Was this more likely a rocket body re-entry, or a large meteor?

amateur observing - How can I safely observe a Solar Flare?

Solar flares are explosive events that do not last very long, so actually, your bigger problem will be to catch one while it is happening.



Normally they are classified by their X-ray flux, as X, M, C, B or A class, corresponding to a logarithmic scale of peak flux in $\mathrm{W/m^2}$ ($10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}$ respectively).
You can get a better feel for what I mean by looking at the real-time light curves obtained from the GOES satellites.
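
As a small illustration of that scale, here is a helper (my own, not from the answer) mapping a peak X-ray flux to a GOES class string:

# Map a peak X-ray flux in W/m^2 to a flare class: the letter gives the decade
# and the number the multiplier within it (e.g. 5.2e-5 W/m^2 -> "M5.2").
def goes_class(flux_wm2):
    for letter, base in [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]:
        if flux_wm2 >= base:
            return f"{letter}{flux_wm2 / base:.1f}"
    return f"A{flux_wm2 / 1e-8:.2f}"      # below A1.0

print(goes_class(5.2e-5))   # -> M5.2
print(goes_class(2.3e-4))   # -> X2.3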



These peaks have different durations depending on the flare; there are records of flares lasting a few hours, while the shortest can be under a minute.



All of the above applies to X-ray flares, which we cannot see from the ground; we need instruments on board a spacecraft or a rocket, above the part of the atmosphere that absorbs X-rays.



Now, if you want to see them with an optical telescope you have two options: either use the whole visible range by projection, or observe through a Hydrogen-alpha filter. In this case the flare classification is different from the X-ray one: where the X-ray class is measured by the enhancement in flux, the optical class is based on the area the flare covers. The Solar Influences Data Analysis Center offers a table with their properties and the corresponding X-ray flare type:



---------------------------------------------------------------
| Area | Area | Class | Typical corresponding |
| (sq deg) | (10^-6 solar A) | | SXR Class |
---------------------------------------------------------------
| <= 2.0 | <= 200 | S | C2 |
| 2.1-5.1 | 200-500 | 1 | M3 |
| 5.2-12.4 | 500-1200 | 2 | X1 |
| 12.5-24.7 | 1200-2400 | 3 | X5 |
| >24.7 | > 2400 | 4 | X9 |
---------------------------------------------------------------


For example, the famous Carrington event (Sept. 1, 1859) was observed and hand-drawn by Carrington using projection. But, as this article says, "He was lucky enough to be in the right place at the right time...", so you too need to be quite lucky to observe one of these.

Monday, 11 March 2013

ag.algebraic geometry - volume of big line bundles under finite morphisms

Yes, this is true, even in a slightly more general context (the varieties can be defined over any field $k$, and the morphism only needs to be generically finite and surjective).



The main parts of the argument are given in the books of Lazarsfeld (Positivity in Algebraic Geometry) and Debarre (Higher-Dimensional Algebraic Geometry), although this specific property is never stated there (as far as I know).



By the projection formula, we have
$H^0(X, f^* B^{\otimes m}) \cong H^0(Y, f_* \mathcal O_X \otimes B^{\otimes m}),$
so we can restrict our attention to $f_* \mathcal O_X$ and its twists by $B$. There is an open dense subset $U$ of $Y$ such that $f_* \mathcal O_X$ is free of rank $d = \deg(f)$, so $(f_* \mathcal O_X)|_U \simeq \mathcal O_U^d$. This isomorphism gives an injection $f_* \mathcal O_X \hookrightarrow \mathcal K_Y^d$, where $\mathcal K_Y$ is the sheaf of total quotient rings of $\mathcal O_Y$. Set $\mathcal G = f_* \mathcal O_X \cap \mathcal O_Y^d$ and define $\mathcal G_1$ and $\mathcal G_2$ by the exact sequences of sheaves



$0 \to \mathcal G \to f_* \mathcal O_X \to \mathcal G_1 \to 0$,



$0 \to \mathcal G \to \mathcal O_Y^d \to \mathcal G_2 \to 0.$



The supports of $\mathcal G_1$ and $\mathcal G_2$ do not meet $U$, hence have dimension less than $n$. Therefore,



$h^0(Y, \mathcal G_i \otimes B^{\otimes m}) := \dim_k H^0(Y, \mathcal G_i \otimes B^{\otimes m}) = O(m^{n-1})$



for $i = 1, 2$ (see e.g. Proposition 1.31 in Debarre's book). Using the long exact cohomology sequences for the two short exact sequences above twisted by $B^{\otimes m}$, this implies



$h^0(X, f^* B^{\otimes m}) = h^0(Y, (f_* \mathcal O_X) \otimes B^{\otimes m}) = d \cdot h^0(Y, B^{\otimes m}) + O(m^{n-1}),$



from which the assertion follows.



See also Proposition 4.1 in this arXiv paper.

dg.differential geometry - Analogy of Liouville conformal mapping theorem with Mostow rigidity?

I often hear mention of two theorems, Mostow's rigidity theorem and Liouville's theorem on conformal mappings, which superficially sound similar: they say that a set of geometric structures is, if nonempty, big in dimension 2, but small in dimension greater than 2.



(For Mostow's theorem, the set of structures in question is the set of hyperbolic metrics on a manifold; for Liouville's, it's the set of germs of flat metrics in a conformal equivalence class.)



I know that hyperbolic and conformal geometry are closely connected, at least in dimension 2. I'm curious as to whether this analogy is hinting at one such connection. Is there a "good reason" for this analogy?

ag.algebraic geometry - functorial meaning of irreducibility of a moduli space

Your question is a bit vague, but let me try to say something. Being irreducible is a global property, so there can be no local characterization like being smooth, regular, etc. If $\mathcal{M}$ is the "space" representing your functor, we say that it is irreducible if it admits a surjective map from an irreducible variety (this is just topological).



If you think about what it means to be irreducible, we could also say that any two closed points of $\mathcal{M}$ can be connected by an irreducible variety. From the functorial point of view, this would mean that any two objects being parametrized live in a family over some irreducible variety. So for example, saying that $M_g$ (the moduli space of smooth genus $g$ curves) is irreducible is the same as saying that for any two smooth curves $C_1, C_2$ of genus $g$ (say over a field), there is a family $f: S \rightarrow B$ such that $B$ is irreducible, every geometric fiber is a smooth genus $g$ curve, and both $C_1$ and $C_2$ are fibers of $f$. In fact, you can take $B$ to be a curve.

Sunday, 10 March 2013

na.numerical analysis - Finding all roots of a polynomial

This argument is problematic; see Andrej Bauer's comment below.




Sure. I have no idea what an efficient algorithm looks like, but since you only asked whether it's possible I'll offer a terrible one.



Lemma: Let $f(z) = z^n + a_{n-1} z^{n-1} + \dots + a_0$ be a complex polynomial and let $R = \max(1, |a_{n-1}| + \dots + |a_0|)$. Then all the roots of $f$ lie in the circle of radius $R$ centered at the origin.



Proof. If $|z| > R$, then $|z|^n > R |z|^{n-1} \ge |a_{n-1} z^{n-1}| + \dots + |a_0|$, so by the triangle inequality no such $z$ is a root.



Now subdivide the disk of radius $R$ into, say, a mesh of squares of side length $\epsilon > 0$ and evaluate the polynomial at all the lattice points of the mesh. As the mesh size tends to zero you'll find points that approximate the zeroes to arbitrary accuracy.
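
A minimal numpy sketch of this brute-force procedure (illustrative only, and subject to the caveat flagged at the top of this answer):

import numpy as np

def crude_roots(coeffs, eps=1e-2, tol=1e-1):
    """coeffs = [a_0, a_1, ..., a_{n-1}] for the monic polynomial
    z^n + a_{n-1} z^{n-1} + ... + a_0.  Returns grid points where |f| is small."""
    R = max(1.0, sum(abs(a) for a in coeffs))      # root bound from the lemma
    xs = np.arange(-R, R + eps, eps)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y
    full = np.concatenate(([1.0], coeffs[::-1]))   # numpy wants the highest degree first
    vals = np.polyval(full, Z)
    return Z[np.abs(vals) < tol]

# z^2 + 1 has roots at +-i; grid points near them should show up
print(crude_roots([1.0, 0.0])[:5])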



There are also lots of specialized algorithms for finding roots of polynomials at the Wikipedia article.

oc.optimization control - how to get a feasible solution to dual program from a feasible solution to primal program?

I don't know what "close to" really means, so let's suppose that you simply did obtain the optimal solution $\vec{x}$. In the generic situation, the objective $f(\vec{x})$ is a unique linear combination of the inequalities of the original program that are equalities for $\vec{x}$. The coefficients of that linear combination are the dual solution $\vec{y}$, and the linear combination proves the optimal value of $f(\vec{x})$. For example, suppose that in two dimensions the constraints are
$$c_1(\vec{x}) = x_1 + 2x_2 \le 1, \qquad c_2(\vec{x}) = 2x_1 + x_2 \le 1,$$
and the objective is $f(\vec{x}) = 3x_1 + 3x_2$. Then $f = c_1 + c_2$, and this expression proves that $f(\vec{x}) \le 2$, which is the optimum. In this example, $\vec{x} = (\frac13, \frac13)$ and $\vec{y} = (1, 1)$.
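
To make the principle concrete, here is a small numpy sketch of the same toy example (my illustration, not part of the original answer): at a vertex where the active constraints are known, the dual vector can be read off by solving $A^T \vec{y} = \vec{c}$ for the active rows $A$.

import numpy as np

# Constraints c1: x1 + 2*x2 <= 1 and c2: 2*x1 + x2 <= 1, objective f = 3*x1 + 3*x2
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])   # rows are the gradients of the active constraints
b = np.array([1.0, 1.0])
c = np.array([3.0, 3.0])     # gradient of the objective

x_opt = np.linalg.solve(A, b)      # vertex where both constraints are tight -> (1/3, 1/3)
y_dual = np.linalg.solve(A.T, c)   # coefficients with A^T y = c            -> (1, 1)

print(x_opt, y_dual, c @ x_opt, b @ y_dual)   # the optimal value 2 appears on both sides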



If "close to" means that only the minimum number of constraints are close to equalities, then you can use the same principle.



If many constraints are equalities at the optimum $\vec{x}$, then there are many ways to take linear combinations of them to obtain the objective $f$. Some of these combinations have non-negative coefficients and some do not. The coefficients $\vec{y}$ of any non-negative combination are an optimum for the dual program. Finding a non-negative combination is exactly the question of finding a feasible point in a second linear program, which can be anything. Geometrically speaking, $\vec{x}$ is a vertex of a polytope $P$ of feasible points. You would like a positive linear combination of the facet equations at $\vec{x}$ to match a supporting hyperplane that corresponds to the objective $f$. This is a matter of finding a feasible point of a certain problem which is dual to the cone at $\vec{x}$. This cone can be any convex polytopal cone in principle, so finding $\vec{y}$ is a general linear programming problem.

Saturday, 9 March 2013

star systems - How many nested stable (1 Mio years) orbits are theoretically possible?

If we look at our solar system we have the sun which has planet orbits and planets have moon orbits.



If we look at the possible size differences, from hypergiants down to small asteroids, why shouldn't it be possible to nest orbits to an astounding degree: a supergiant orbits a hypergiant, a star orbits the supergiant, and so on?



My first guess is that the satellite's size and orbital distance must be small enough compared to the primary that the primary's gravitational gradient does not cause too much disturbance, but that it should be possible.



Has anyone tried to work this out with celestial mechanics? While it may be completely theoretical, it is IMHO a solid question, and it would be nice to find out how deeply celestial bodies can be nested and what limits the stability.
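
As a rough illustrative sketch (my addition, not a rigorous stability analysis): the Hill radius $r_H \approx a\,(m/3M)^{1/3}$ gives the region within which a body of mass $m$, orbiting a primary of mass $M$ at distance $a$, can keep satellites of its own, and long-term stability is usually taken to require staying well inside it. Iterating this down a chain of bodies shows how quickly the available room shrinks; all masses and distances below are made-up example values.

def hill_radius(a_m, m_primary_kg, m_satellite_kg):
    """Approximate Hill radius of a body of mass m_satellite orbiting m_primary at distance a."""
    return a_m * (m_satellite_kg / (3.0 * m_primary_kg)) ** (1.0 / 3.0)

M_sun = 1.989e30
chain = [("hypergiant", 100 * M_sun), ("star", 1 * M_sun), ("planet", 1e27), ("moon", 1e23)]
a = 1.5e13   # assumed orbital radius of the star around the hypergiant, in metres (~100 AU)

for (name_out, m_out), (name_in, m_in) in zip(chain, chain[1:]):
    rH = hill_radius(a, m_out, m_in)
    print(f"{name_in} orbiting {name_out}: Hill radius ~ {rH:.3e} m")
    a = 0.3 * rH   # put the next body well inside the Hill sphere for (hoped-for) stability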



Some conditions:



  • The orbits should be (very probably) stable for one million years.
    Hypergiants burn out fast, so I limit the required stability to a relatively short timeframe.


  • The barycenter of the satellite and its primary is inside the primary.
    No double system.


  • The first primary should be a star, not a black hole or a neutron star.


orbit - Advancement of perihelion, data

The precession of the perihelion of the planets is mostly due to perturbations from the other planets, but also, to a lesser extent, to relativistic effects, the aspherical shape of the Sun, and solar tides. [Wikipedia]



This calculation of the effects of perturbations includes a comparison of the calculated and observed rates of precession of the perihelion. It gives rates in arcseconds per year:



Mercury 5.75
Venus 2.04
Earth 11.45
Mars 16.28
Jupiter 6.55
Saturn 19.5
Uranus 3.34
Neptune 0.36


For Mercury the observed rate of precession is rather more than would be predicted from perturbations alone. The discrepancy is explained by general relativity, and it was one of the first pieces of observational evidence for the theory.
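
For reference, the leading-order general-relativistic contribution per orbit is $\Delta\phi = 6\pi G M_\odot / (c^2 a (1 - e^2))$; a short Python sketch (using rounded textbook values for Mercury's orbit, so treat the output as approximate):

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
c = 2.998e8            # m/s
a = 5.791e10           # Mercury's semi-major axis, m
e = 0.2056             # Mercury's orbital eccentricity
P_days = 87.97         # Mercury's orbital period, days

dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))   # radians per orbit
arcsec_per_orbit = math.degrees(dphi) * 3600
orbits_per_century = 36525 / P_days

print(arcsec_per_orbit * orbits_per_century)   # ~43 arcsec per century from GR alone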

Friday, 8 March 2013

What to look for in a tripod for binoculars?

Get the sturdiest tripod that you can!



My 20x80 binoculars are sitting on a ball-and-socket mount on a very heavy tripod. I use the Benro BH1 ball mount, which is an absolute joy to aim. This mount is rated for 6 kg, almost three times the actual weight of the binoculars. With high-powered binoculars the aim must be adjusted every minute or so, so you will want an easy-to-adjust mount that is far from the limits of its capacity. You really can't beat the ball-and-socket design for this purpose, in my opinion. I need to photograph the setup for another question and when I do I'll check what model the tripod is.



Look also at the tripod of the user who asked the above-linked question. These large binoculars need the absolute heaviest-duty tripod that you can reasonably afford. You will want all-metal construction, and I would also suggest a hook to hang a weight from; this will make the tripod steadier. You will have a hard time going overboard, so if you find two tripods at a similar price but one is heavier-duty, get that one.

real analysis - Looking for an interesting problem/riddle involving triple integrals.

I do not know the exact location in his Collected Works but Dirichlet found the $n$-volume of



$$ x_1, x_2, \ldots, x_n \geq 0 $$
and
$$ x_1^{a_1} + x_2^{a_2} + \ldots + x_n^{a_n} \leq 1. $$



For example, with $n=3$ the volume is
$$ \frac{ \Gamma\left( 1 + \frac{1}{a_1} \right) \Gamma\left( 1 + \frac{1}{a_2} \right) \Gamma\left( 1 + \frac{1}{a_3} \right) }{ \Gamma\left( 1 + \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} \right) }. $$



Note that this has some attractive features. The limit as $a_n \rightarrow \infty$ is just the expression in dimension $n-1$, exactly what we want. Also, we quickly get the volume of the positive "orthant" of the unit $n$-ball by setting all $a_j = 2$, and this immediately gives the volume of the entire unit $n$-ball, abbreviated as
$$ \frac{\pi^{n/2}}{(n/2)!}. $$



I think he also exactly evaluated the integral of any monomial
$$ x_1^{b_1} x_2^{b_2} \cdots x_n^{b_n} $$ on the same set.



So the question would be: given, say, positive integers $a, b, c$, find the volume of the region where $x, y, z \geq 0$ and $x^a + y^b + z^c \leq 1$. If you like, fix the exponents; the triple $a=2, b=3, c=6$ comes up in a book by R. C. Vaughan called "The Hardy-Littlewood Method", page 146 in the second edition, where he assumes the reader knows this calculation.
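
For a quick numerical sanity check of the Gamma formula in that case (my own illustration, using the standard library plus a crude Monte Carlo estimate):

import math, random

def dirichlet_volume(exponents):
    """Volume of x_i >= 0, sum x_i^{a_i} <= 1, via Dirichlet's Gamma-function formula."""
    num = math.prod(math.gamma(1 + 1 / a) for a in exponents)
    return num / math.gamma(1 + sum(1 / a for a in exponents))

def monte_carlo(exponents, n=200_000):
    hits = sum(1 for _ in range(n)
               if sum(random.random() ** a for a in exponents) <= 1)
    return hits / n

print(dirichlet_volume((2, 3, 6)))   # about 0.734
print(monte_carlo((2, 3, 6)))        # should land close to the value above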



This came back to mind because of a recent closed question on the area of $x^4 + y^4 \leq 1$.

How does neutron star collapse into black hole?

A neutron star must have a mass of at least about 1.4 solar masses (that is, 1.4 times the mass of our Sun) in order to form in the first place.
See the Chandrasekhar limit on Wikipedia for details.



A neutron star is formed during a supernova, an explosion of a star that is at least 8 solar masses.



The maximum mass of a neutron star is about 3 solar masses. If it gets more massive than that, it will collapse into a quark star, and then into a black hole.



We know that 1 electron + 1 proton = 1 neutron;



1 neutron = 3 quarks = up quark + down quark + down quark;



1 proton = 3 quarks = up quark + up quark + down quark;



A supernova results in either a neutron star (between 1.4 and 3 solar masses), a quark star(about 3 solar masses), or a black hole(greater than 3 solar masses), which is the remaining collapsed core of the star.



During a supernova, most of the stellar mass is blown off into space, forming elements heavier than iron which cannot be generated through stellar nucleosynthesis, because beyond iron, the star requires more energy to fuse the atoms than it gets back.



During the supernova collapse, the atoms in the core break up into electrons, protons and neutrons.



In the case that the supernova results in a neutron star core, the electrons and protons in the core are merged to become neutrons, so the newly born 20-km-diameter neutron star containing between 1.4 and 3 solar masses is like a giant atomic nucleus containing only neutrons.



If the neutron star's mass is then increased, neutrons become degenerate, breaking up into their constituent quarks, thus the star becomes a quark star; a further increase in mass results in a black hole.



The upper/lower mass limit for a quark star is not known (or at least I couldn't find it); in any case, it is a narrow band around 3 solar masses, which is the minimum stable mass of a black hole.



When you talk about a black hole with a stable mass (at least 3 solar masses), it is good to consider that they come in 4 flavors: rotating-charged, rotating-uncharged, non-rotating-charged, non-rotating-uncharged.



What we would see visually during the transformation would be a hard radiation flash.
This is because during the collapse, the particles on/near the surface have time to emit hard radiation as they break up before going into the event horizon; so this could be one of the causes of gamma ray bursts (GRBs).



We know that atoms break up into protons, neutrons, electrons under pressure.



Under more pressure, protons and electrons combine into neutrons.



Under even more pressure, neutrons break down into quarks.



Under still more pressure, perhaps quarks break down into still smaller particles.



Ultimately, the smallest particle is a string: an open or closed loop with a length of order the Planck length, which is many orders of magnitude smaller than a quark. If a string were magnified to 1 millimetre in length, then a proton would have a diameter that would fit snugly between the Sun and Epsilon Eridani, 10.5 light years away; that's how big a proton is compared to a string, so you can imagine there are perhaps quite a few intermediate things between quarks and strings.



Currently it looks like several more decades will be needed to figure out all the math in string theory, and if there is anything smaller than strings then a new theory will be required, but so far string theory looks good; see the book Elegant Universe by Brian Greene.



A string is pure energy and Einstein said mass is just a form of energy, so the collapse into a black hole really breaks down the structure of energy that gives the appearance of mass/matter/baryonic particles, and leaves the mass in its most simple form, open or closed strings, that is, pure energy bound by gravity.



We know that black holes (which are not really holes or singularities, as they do have mass, radius, rotation, charge and hence density, which varies with radius) can evaporate, giving up their entire mass in the form of radiation, thus proving they are actually energy. Evaporation of a black hole occurs if its mass is below the minimum mass of a stable black hole, which is 3 solar masses; the Schwarzschild radius equation even tells you what the radius of a black hole is given its mass, and vice versa.
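
The Schwarzschild radius relation mentioned above is $r_s = 2GM/c^2$; a one-line evaluation (my illustration):

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def schwarzschild_radius_m(mass_kg):
    # r_s = 2 G M / c^2
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius_m(3 * M_sun) / 1000)   # ~8.9 km for a 3-solar-mass black hole
print(schwarzschild_radius_m(5.97e24))            # ~9 mm for an Earth-mass black hole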



So you could transform anything you want, such as your pencil, into a black hole if you wanted to, and could compress it into the required size for it to become a black hole; it is just that it would immediately transform itself (evaporate) completely into a flash of hard radiation, because a pencil is less than the stable black hole mass (3 solar masses).



This is why the CERN experiment could never have created a black hole to swallow the Earth - a subatomic black hole, even one with the mass of the entire Earth, or the Sun, would evaporate before swallowing anything; there is not enough mass in our solar system to make a stable (3 solar mass) black hole.



A simple way for a neutron star to become more massive in order to be able to turn into a black hole is to be part of a binary system, where it is close enough to another star that the neutron star and its binary pair orbit each other, and the neutron star siphons off gas from the other star, thus gaining mass.



Cataclysmic variable binary



Here is a nice drawing showing exactly that.



Matter falling into a black hole is accelerated toward light speed. As it is accelerated, the matter breaks down into subatomic particles and hard radiation, that is, X-rays and gamma rays. A black hole itself is not visible, but the light from infalling matter that is accelerated and broken up into particles is visible. Black holes can also cause a gravitational lens effect on the light of background stars/galaxies.

Thursday, 7 March 2013

What are the differences between TESS and PLATO exoplanet telescopes?

Here is what one commentator has said about TESS in comparison to PLATO:



The Transiting Exoplanet Survey Satellite (TESS), to launch in 2017, seems superficially to be a similar mission to Plato. It will potentially discover hundreds of planets before Plato even gets off the ground in 2024. However, the limited sensitivity of its cameras mean it is completely blind to Earth-like worlds around sun-like stars. Astroseismology is also off-limits for TESS, meaning the size of any worlds it does discover will be highly uncertain. Unlike Plato, it will also move between patches of sky every 30 days, allowing only hot, short-period planets to be found.
With all other new telescopes, both in space and on the ground, limited to finding super-Earths around small stars, Plato is the only mission on the table truly capable of discovering an Earth-like world around a star like our Sun. And by targeting bright stars that allow atmospheric follow-up, it is not impossible to think that, as well as the first truly habitable planet, Plato could find the first inhabited one too.



Credit: Hugh Osborn, PhD student, University of Warwick



http://lostintransits.wordpress.com/2014/01/30/what-can-plato-do-for-exoplanet-astronomy/

astrobiology - Could the James Webb Space Telescope detect biosignals on exoplanets?

From what I understand, James Webb, if used in conjunction with a successful starshade (being developed at MIT), should be able to detect close-in planets orbiting nearby stars. However, getting good atmospheric spectra of these planets directly (from a planet's blackbody IR emission) is unlikely. What we must hope for is that TESS, which should be going up in 2017, will find a few nearby stars with transiting planets. Then, James Webb will be able to look for atmospheric absorption lines from stellar light passing through a planet's atmosphere during transit. This method may still be limited to large (Jupiter-size) planets.
In an ideal situation (say, looking at the absorption lines of a super-Earth) there are many "bio-signatures", but one of the easiest to detect would be an ozone line in the infrared. By itself this would not be proof, even though a constant replenishing source of O2 in the atmosphere would be needed to maintain the O3. If methane could also be found we could rightly get VERY excited, since methane and oxygen don't co-exist very well.

Wednesday, 6 March 2013

soft question - Which mathematicians have influenced you the most?

Richard Courant. Several years before I started studying mathematics in earnest, I spent a summer working through his calculus texts. Only recently, on re-reading them, have I come to realize how much my understanding of calculus, linear algebra, and, more generally, of the unity of all mathematics and, to use Hilbert's words, the importance of "finding that special case which contains all the germs of generality," have been directly inspired by Courant's writings.



From the preface to the first German edition of his Differential and Integral Calculus:




My aim is to exhibit the close connexion between analysis and its applications and, without loss of rigour and precision, to give due credit to intuition as the source of mathematical truth. The presentation of analysis as a closed system of truths without reference to their origin and purpose has, it is true, an aesthetic charm and satisfies a deep philosophical need. But the attitude of those who consider analysis solely as an abstractly logical, introverted science is not only highly unsuitable for beginners but endangers the future of the subject; for to pursue mathematical analysis while at the same time turning one's back on its applications and on intuition is to condemn it to hopeless atrophy. To me it seems extremely important that the student should be warned from the very beginning against a smug and presumptuous purism; this is not the least of my purposes in writing this book.




Another example: while not a "linear algebra book" per se, I have yet to find a better introduction to "abstract linear algebra" than the first volume of Courant's Methods of Mathematical Physics ("Courant-Hilbert"; so named because much of the material was drawn from Hilbert's lectures and writings on the subject). His one-line explanation of "abstract finite-dimensional vector spaces" is classic: "for n > 3, geometrical visualization is no longer possible but geometrical terminology remains suitable."



Lest one be misled into thinking Courant saw "abstract" vector spaces as "$\mathbb{R}^n$ in a cheap tuxedo," he introduces function spaces in the second chapter ("series expansions of arbitrary functions"), and most of the book is about quadratic eigenvalue problems, or, as Courant saw it, "the problem of transforming a quadratic form in infinitely many variables to principal axes."



As a final example: Courant's expository What is Mathematics? is perhaps best described as an unparalleled collection of articles carefully crafted to serve as an object at which one can point and say "this is." Moreover, while written as a "popularization," its introduction to constrained extrema problems is, without question, a far, far better introduction than any textbook I've ever seen.



I should also mention Felix Klein, not only because Klein's views on "calculus reform" so clearly influenced both the style and substance of Courant's texts, but since a number of Klein's lectures have had an equally significant influence on my own perspective. For those unfamiliar with the breadth of Klein's interests, I'm tempted to say "his Erlangen lecture, least of all" (not that there's anything wrong with it).



Lest my comments be mistaken for a sort of wistful "remembrance of things past," I'd easily place Terence Tao's writings on par with Courant's, for many of the same reasons: clear and concise without being terse, straightforward yet not oversimplified, and, most importantly, animated by a sort of — je ne sais quoi — whatever it is, it seems to involve, in roughly equal proportions: mastery of one's own craft, a genuine desire to pass it on, and the considerable expository skills required to actually do so.



Finally, I can't help but mention Richard Feynman in this context, and to plug his Nobel lecture in particular. While not a mathematician per se, Feynman surely ranks among the twentieth century's best examples of a "mathematical physicist" in the finest sense of the term, not merely satisfied by a purely mathematical "interpretation" of physical phenomena, but surprised, excited, and, dare I say, delighted by the prospect! Moreover, he was equally excited about mathematics in general, see, e.g., the "algebra" chapter in the Feynman Lectures on Physics.

galaxy - What is the exact position of the Large Magellanic Cloud?

First of all: your coordinate system (as I understand it) is not "well defined". You only provide a line (Earth --> Milky Way centre) and define that as the north-south axis. You don't give a reference for the east-west axis or the up-down axis. So, I'll assume that the east-west axis lies along the plane of the Milky Way and that up is where the Boötes Dwarf system is. This way, our coordinate system matches this image from the LMC Wikipedia site.



LMC position



The problem with this picture is that the position of the Earth is only given implicitly. It is "behind" the centre of the Milky Way. So from that position, the LMC is almost to our right (east) and a bit below the plane of the Milky Way; about 30 to 40 degrees down.
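
One can sanity-check the "30 to 40 degrees down" part with galactic coordinates, which are essentially the frame described in the question (origin at the Sun, zero longitude towards the galactic centre, latitude measured from the Milky Way's plane). A short astropy sketch, using approximate ICRS coordinates for the LMC that I'm supplying here:

from astropy.coordinates import SkyCoord
import astropy.units as u

# Approximate ICRS position of the LMC (roughly RA 05h23m, Dec -69d45m)
lmc = SkyCoord(ra=80.9 * u.deg, dec=-69.76 * u.deg, frame='icrs')
gal = lmc.galactic

# l ~ 280 deg: about 80 degrees away from the galactic-centre direction, along the plane;
# b ~ -33 deg: roughly 30-40 degrees below the plane, consistent with the description above.
print(gal.l.deg, gal.b.deg)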



This is also nicely validated by this image, acquired here (note that the grid lines are spaced 30 degrees apart).



position of the LMC



So, finally we can answer your question in the way you wanted: in the north-east to east segment, and below its plane.

Tuesday, 5 March 2013

observation - Why can you see the space station on some days but not on others?

Two basic conditions need to be met for anyone on the globe to observe the International Space Station (ISS) from any location:



  • ISS needs to pass roughly overhead of your location, and

  • it needs to do that during night so it's visible to the naked eye

Now, obviously there are other requirements, such as the weather (if it's overcast we won't be able to see it through the clouds), but the screenshots you attach don't account for that, since it would be nigh impossible to predict the weather so far in advance.



Let's discuss the two basic conditions mentioned a bit more. The ISS has an orbital inclination of 51.65°. Why that is, is explained in this answer on Space.SE, but to quickly recap: this orbital inclination was chosen because it is near ideal for launches from the Baikonur Cosmodrome, where most launches towards the ISS are made; it is still achievable from the US launch facilities and most US-built launchers; and it allows the ISS to pass over the majority of the populated areas on Earth. So it was chosen both for economic reasons and to allow the ISS to do as much science as possible.



But what this inclination means is that the ISS makes a ground track (the path along the surface of the Earth directly below it) that moves towards the west with each new orbit, even though the ISS itself moves west to east (in other words, it is in a prograde low Earth orbit). Its ground track could look something like this:






                  Ground track of the International Space Station, with the area from which it is observable on the Earth's surface shown as the white circle (Image source: ISS Tracker)



The ISS's orbital period, the time it takes to circumnavigate the globe and complete one orbit, is roughly 93 minutes. As you can see in the screen grab from the ISS Tracker above, its ground track moves westward on each new orbit. This is due to Earth's rotation on its axis, so the shift towards the west equals how much the Earth rotates in the meantime, which comes out at roughly 22.5°, or a good 2,505 km (1,556 mi) of the Earth's total equatorial circumference.
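
A rough back-of-the-envelope version of that calculation (my own sketch; it uses a simple 24-hour rotation period and ignores the sidereal correction, so the exact figure depends slightly on the orbital period you assume):

EQUATOR_KM = 40075.0      # Earth's equatorial circumference
ROTATION_MIN = 24 * 60    # one Earth rotation, in minutes

def ground_track_shift(orbital_period_min):
    shift_deg = 360.0 * orbital_period_min / ROTATION_MIN
    shift_km = EQUATOR_KM * shift_deg / 360.0
    return shift_deg, shift_km

# Around 22-23 degrees westward per orbit for periods of 90-93 minutes
print(ground_track_shift(90))   # (22.5, ~2505 km)
print(ground_track_shift(93))   # (23.25, ~2588 km)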



But the ISS's orbit isn't Sun-synchronous, meaning it won't be over any given Earth latitude at the same local mean solar time, so when it is observable also depends on its orbital position relative to the position of the Sun. We require that it is dark, remember? The ISS isn't bright enough (doesn't reflect enough light with its truss and solar panels) to outshine the daytime sky. So we have another parameter, which is local time. As you probably anticipated by now, the length of the day isn't the same throughout the year, and moving further away from the equator only increases the difference between the short days of winter and the long days of summer. This is another range of values that the software you're attaching screenshots from has to account for.



So, when your software collects data from you regarding your geolocation and current date and time (it might do that automatically if you're using it on a device with GPS enabled), it will establish the timetable of when the ISS ground track is over or around your area, and whether that happens during the night. The ISS doesn't have to be exactly overhead, at zenith, so it will also calculate the angle from the zenith at which the ISS would still be visible from your location.



Understanding this, even without looking at your timetable, we can say that if you're at a latitude from which the ISS can be seen, you'd be able to observe it a few times in a row, separated by around 93 minutes, then wait for the ISS ground track to come back towards your longitude (16 orbits, or about one day and an hour), and catch it again during the night passes. Nights are long during December at northern latitudes, so you see a long row of pairs of such chances during each night. As long as the weather holds, of course.

ct.category theory - First: upper-star, then: lower-star, finally: lower-shriek

First of all: yes, there's certainly a connection. See http://ncatlab.org/nlab/show/polynomial+functor. If the base category is $Set$, the composite



$$Set/W \stackrel{f^\ast}{\to} Set/X \stackrel{g_\ast}{\to} Set/Y \stackrel{h_!}{\to} Set/Z$$



first takes a $W$-indexed set $S_w$ to an $X$-indexed set $T_x = S_{f(x)}$, then takes this to the $Y$-indexed set $U_y = \prod_{x: g(x) = y} T_x$, then takes this to the $Z$-indexed set $V_z = \sum_{y: h(y) = z} U_y$. Putting this together, the composite is a family of polynomials, each a sum of monomial terms



$$P(\ldots, S_w, \ldots) = \left(z \mapsto \sum_{y \in h^{-1}(z)} \prod_{x \in g^{-1}(y)} S_{f(x)}\right)$$



I'll give a quick example. Suppose we want to express the free monoid functor



$$F(S) = \sum_{n \geq 0} S^n$$



in this form. Then we take $W = 1$, $X = \mathbb{N} \times \mathbb{N}$, $Y = \mathbb{N}$, $Z = 1$. There's only one choice for $f$ and $h$, and $g$ is rigged so that the fiber over $n \in \mathbb{N}$ is an $n$-element set: $g(m, n) = m + n + 1$. One can easily check this works.



As for the other question: it would have been nice if you had "bored" us! Because I don't see how to reconstruct what you did. What I have to get the image is a zig-zag of length 4



$$Set^{[1]} \stackrel{F^\ast}{\to} Set^{C_1} \stackrel{G_\ast}{\to} Set^{C_2} \stackrel{H^\ast}{\to} Set^{C_3} \stackrel{J_!}{\to} Set^{[0]}$$



where $C_1$ is the generic cospan $a \to c \leftarrow b$, $C_2$ is the generic commutative square, $C_3$ is the generic span $a \leftarrow d \to b$, and then $G$ and $H$ are the evident inclusion functors, and $F$ takes each arrow of the generic cospan to the arrow of $[1]$. Then $F^\ast$ takes $p: A \to B$ to the cospan consisting of two copies of $p$; hitting this with $G_\ast$ takes this cospan to the pullback square (pulling back $p$ against itself); hitting this with $H^\ast$ restricts the pullback square to the span consisting of the pullback projections; finally, hitting this with $J_!$ takes this span to its colimit = pushout, which is the same as the coequalizer of the pullback projections (because they have a common right inverse). (Based on his comment, I'm guessing that some guy on the street was doing more or less the same thing.)



Could you tell us what you had in mind?

Monday, 4 March 2013

cosmological inflation - What was the age of the universe when the average density was one atmosphere?

The critical density of the universe is
$$\rho_c = \frac{3 H_0^2}{8 \pi G},$$
with $H_0 = 67.8\ \frac{\text{km}/\text{s}}{\text{Mpc}}$ the Hubble constant and $G = 6.673848 \cdot 10^{-11}\ \frac{\text{m}^3}{\text{kg}\ \text{s}^2}$ Newton's gravitational constant.
Hence, with one parsec $1\ \text{pc} = 3.0857 \times 10^{16}\ \text{m}$,
$$\rho_c = \frac{3 \cdot \left(67.8 \cdot 10^3 \cdot \frac{\text{m}/\text{s}}{3.0857 \times 10^{22}\ \text{m}}\right)^2}{8\pi \cdot 6.673848 \cdot 10^{-11}\ \frac{\text{m}^3}{\text{kg}\ \text{s}^2}} = 8.635 \cdot 10^{-27}\ \frac{\text{kg}}{\text{m}^3}.$$



Ordinary matter content is $4.9\%$, and the universe's total density is close to the critical density, hence the ordinary matter density is
$$0.049 \cdot 8.635 \cdot 10^{-27}\ \frac{\text{kg}}{\text{m}^3} = 4.23 \cdot 10^{-28}\ \frac{\text{kg}}{\text{m}^3}.$$
(The Hubble constant is not precisely known, so the calculated density depends a bit on its precise value.)



Density of water:
The density of water is about $10^3\ \text{kg}/\text{m}^3$.
That's a factor of about $2.4 \cdot 10^{30}$ denser than the average ordinary-matter density of the universe.
Space needs to have been smaller by a factor of $\sqrt[3]{2.4 \cdot 10^{30}} \approx 1.33 \cdot 10^{10}$ in each of the three spatial dimensions. That's the inverse of the scale factor, hence it corresponds to a redshift of $z \approx 1.33 \cdot 10^{10}$.



Using the Cosmology Calculator on this website with the cosmological parameters $H_0 = 67.11$ km/s/Mpc (slightly different from the above value) and $\Omega_{\Lambda} = 0.6825$ provided by the Planck project, and setting $\Omega_M = 1 - \Omega_{\Lambda} = 0.3175$, the age of the universe was roughly 0.1 seconds at the redshift $z \approx 1.33 \cdot 10^{10}$, hence when it was at the average density of water.



Density of air at sea level:
The density of air at sea level is $1.225\ \frac{\text{kg}}{\text{m}^3}$. The corresponding redshift is about $z = 1.42 \cdot 10^{9}$, yielding an age of the universe of roughly 13 seconds.
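
The redshift part of this calculation is easy to reproduce; here is a short Python check (my own sketch, using the same input values as above):

import math

H0 = 67.8e3 / 3.0857e22     # Hubble constant in 1/s (67.8 km/s/Mpc)
G = 6.673848e-11            # m^3 kg^-1 s^-2

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~8.6e-27 kg/m^3
rho_baryon = 0.049 * rho_crit              # ~4.2e-28 kg/m^3

def redshift_at_density(rho_target):
    # Baryon density scales as (1+z)^3, so solve rho_target = rho_baryon * (1+z)^3
    return (rho_target / rho_baryon) ** (1.0 / 3.0) - 1

print(redshift_at_density(1e3))     # water: z ~ 1.3e10
print(redshift_at_density(1.225))   # air at sea level: z ~ 1.4e9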

Sunday, 3 March 2013

What practical considerations are there for amateur observations of transiting exoplanets?

This is actually quite straightforward with digital CCDs (it used to be quite tricky with film cameras, as you'd have to carefully develop film that moved past the lens and assess the width of the trail).



Get yourself a good telescope - a 12" Dobsonian or above if you want to give yourself a good chance of picking out the fluctuations against the noise background. Then select a decent CCD. Five hundred pounds gets a reasonable one, but expect to pay a couple of thousand pounds for a cooled CCD, which will also help to reduce noise.



(Buying in US dollars? A reasonable CCD runs about $1000. A cooled CCD will cost you at least $1500.)



You'll want a good quality equatorial mount, with computer controlled servos to track the target smoothly over long periods of time.



Ideally you will also slave a second telescope and CCD, pointed along the same path but slightly off - this will help you cancel out cloud and other fluctuations from our own atmosphere.



Oh, and get as far away from a city as possible - up into the mountains can be a good plan :-)



Then arrange your viewing for a series of full nights. The more data points you can get, the better the noise reduction. Imagine the exoplanet is orbiting every 100 days; in order to get any useful data, you will need to track it over some multiple of 100 days. So assume you set up to track your target star for 2 years, and plan for 3 or 4 star shots per night to give you a range of data points.



These 600+ days of 4 data points per night give you a reasonable stack of data - the challenge now is to work out whether there are any cyclic variations. Various data analysis tools can do this for you; a toy example is sketched below. As a first step, if you find a cycle around 365 days, it probably isn't the target, so try to normalise for this (of course this will make it very difficult to discover exoplanets with a period of exactly 1 year).
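
As a toy illustration of that last step (not a substitute for proper photometric reduction software), here is a numpy-only phase-folding search on synthetic data; the transit depth, period and noise level are made-up values:

import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 600, 2400))           # ~600 nights, ~4 measurements per night
true_period, depth = 100.0, 0.01                 # assumed 100-day orbit, 1% transit depth
flux = 1.0 + 0.002 * rng.standard_normal(t.size)
flux[(t % true_period) < 0.5] -= depth           # crude box-shaped transit lasting ~0.5 day

def deepest_bin(period, nbins=50):
    """Fold the light curve at a trial period and return the depth of the faintest phase bin."""
    phase = (t % period) / period
    bins = np.digitize(phase, np.linspace(0, 1, nbins + 1)) - 1
    means = np.array([flux[bins == b].mean() for b in range(nbins) if np.any(bins == b)])
    return np.median(means) - means.min()

trial_periods = np.linspace(60, 200, 1400)
scores = np.array([deepest_bin(p) for p in trial_periods])
print(trial_periods[np.argmax(scores)])          # should land near the 100-day true period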