Sunday, 30 June 2013

Exoplanet naming conventions - Astronomy

I'm looking for any exoplanet nomenclature, official, semi-official, proposed, or just a good idea, beyond what can already be found on Wikipedia at



http://en.wikipedia.org/wiki/Exoplanet#Other_naming_systems



Or via the obvious Google searches.



This is a vaguely sci-fi related request. The lowercase Latin alphabet isn't going to do it once we get out there, and I like the proposal for Greco-Roman mythology, but there are simply too many planets now, and there will be more over time.



As a side issue, I'm already familiar with how Iain Banks's Culture addresses nomenclature, so I accept that, humanity being what it is, we can expect about a hundred or so "Vulcan"s orbiting various different stars, much as there are currently 37 "Greenville"s in the USA.



A pointer to a really great fictional idea or naming scheme aside from the schemes mentioned above might be useful.



Interesting technical (non-literary) naming schemes beyond the most obvious and trivial ordinal systems would also be welcome.

soft question - Which mathematicians have influenced you the most?

The graduate advisor at Queens College of the City University of New York, Nick Metas, was and continues to be my greatest influence.



I first had a conversation on the phone with Nick over 15 years ago, when I was a young chemistry major taking calculus and just becoming interested in mathematics. We spoke for over 3 hours, and we were friends from that moment on.



It was Nick who indoctrinated me into the ways of true rigor through his courses, countless conversations, and an equally vast number of stories. Nick is a true scholar, and whatever broad knowledge I have of the textbook literature and of research papers from the 1960s onward, I learned from Nick. The capacity for self-learning I picked up from him got me through the lean years at CUNY during my illnesses, when there wasn't much of a mathematics department there.



In relation to the reference to Gian-Carlo Rota above, I am Rota's mathematical grandson through Nick. Nick loved Rota, and his eyes light up when he speaks of his dissertation advisor and friend from his student days at MIT. I hope someday there's someone famous I can feel that way about. But no one has influenced me more than Nick.



Nick has been my friend and advisor for all things mathematical, and he celebrated his 74th birthday yesterday quietly in his usual office hour, with dozens of students asking him for advice or just listening to his wonderful stories and jokes. Regardless of what happens, it is Nick whose influence has shaped me the most, as a mathematician, a student, and a mentor.

qa.quantum algebra - What does "quantization is not a functor" really mean?

A prop P is a symmetric monoidal category whose objects are the non-negative integers, with tensor product defined as addition of integers (so morally, it's a single object V, the unit object, and various powers of V; so it's the morphisms that make it interesting). A representation of prop P in a symmetric tensor category C is a symmetric tensor functor from P to C.



A prop is a way of encoding, in a universal way, the morphisms defining a structure you are interested in. A representation of a prop P in a category C is then the same thing as a gadget modelled by P in the category C. For instance, see the prop Alg below; its representations in C are the same thing as algebras in C. One nice thing about props is that they can be given by generators and relations, and also that they can be expressed via Schur functors $\mathbb{S}_\lambda$, defined in any symmetric tensor category similarly to the definition for $GL_n$ (if someone wants I can elaborate, but a search for Schur functors will probably yield enough results).



Examples include the prop "Alg": it has a morphism $u:1\to V$, a morphism $\mu: V\otimes V\to V$, a relation $\mu\circ(\mu\otimes id)=\mu\circ(id\otimes \mu)$, and a relation $\mu\circ(u\otimes id)=\mu\circ(id\otimes u)=id$, etc. So you formally take the symmetric tensor category on objects the non-negative integers which admits maps like $\mu$ and $u$, and you quotient by those relations. One can similarly define Lie-Alg, Lie-Bialg, Bialg, Hopf, quasi-Hopf, ... you can go all day.



A key point is that morphisms between props induce pullback functors between their representations in any symmetric tensor category, but in the reverse order (meaning, as usual, $\rho:P\to S$ induces $\rho^*:S\text{-mod}\to P\text{-mod}$). For instance, there is a morphism of props from Lie-Alg to Alg, sending the bracket $[\,,]\mapsto \mu - \tau\circ\mu$. This induces the familiar forgetful functor from Alg to Lie-Alg.



In situations where there is a quantum structure that is a quantization of a classical structure, you get two props defined over $k[[\hbar]]$. For example, in the case of Lie-Bialg and Hopf alg, both of these make sense over $k[[\hbar]]$. The classical limit functor is induced by a prop map from Lie-Bialg to Hopf alg. It turns out to have a section from Hopf alg to Lie-Bialg, which is the sort of thing you can check by generators and relations. Of course it doesn't have a unique section; there are many choices. However, the fact that you make those choices ONCE and FOR ALL on this particular prop then implies that you have a quantization functor between these two structures in any symmetric tensor category. I think a moral is that asking for some construction in symmetric tensor categories to be functorial, which is in general a tricky business, can in certain instances be reduced to giving a single (not even necessarily canonical!) map of props.



In the case of Kontsevich quantization, the problem seems to be that quantizations are classified (up to isomorphism!) by $HH^2(M)$ for a Poisson manifold $M$, and by $HH^2(A,A)$ of the algebra in general. Obstructions to quantization live in $HH^3$. If, for instance, $HH^3(A,A)$ vanishes, you can quantize your Poisson algebra step by step. You basically start with your 2-cocycle, which gives you a first-order deformation; you then check whether your new multiplication is associative so far (meaning: write down the associativity identity, expand in powers of $\hbar$, and examine the first place where it isn't trivially zero); this yields a certain very explicit 3-cocycle, which you need to vanish. If that 3-cocycle vanishes in $HH^3$, then you get it as $d(w)$ for some 2-cochain $w$. This then gives you the next-higher-order deformation of the multiplication, and so on ad infinitum.



The problem here is that at each step you are making a choice of a 2-cochain $w$ with $dw$ equal to your previous obstruction. The result is unique up to some non-canonical isomorphism, but that isn't good enough for functoriality. In particular, there is no prop morphism between non-commutative algebras and Poisson algebras which would unify the approach for all examples.

tidal forces - Are there any known objects in a "dual" tidally locked orbit?

Pluto and its largest moon Charon are tidally locked to each other.




Charon and Pluto revolve about each other every 6.387 days. The two objects are both gravitationally locked to the other, so each keeps the same face towards the other. This is a mutual case of tidal locking . . .




Because of Charon's large size compared to Pluto, and because its barycenter lies above the primary's surface, some astronomers have referred to the Pluto/Charon system as a dwarf double planet.




The system is also unusual among planetary systems in that each is tidally locked to the other: Charon always presents the same face to Pluto, and Pluto always presents the same face to Charon: from any position on either body, the other is always at the same position in the sky, or always obscured.


sg.symplectic geometry - Why (and whether) is any smooth embedded torus in R^4 isotopic to an embedded Lagrangian torus?

The question is pretty self-explanatory; we are dealing with the standard symplectic structure on ℝ⁴.



Some background: I'm reading the thesis "Lagrangian Unknottedness of Tori in Certain Symplectic 4-manifolds" by Alexander Ivrii, which proves that all embedded Lagrangian tori in ℝ⁴ are smoothly isotopic (and, in fact, Lagrangian isotopic). It uses lots of pseudoholomorphic curves. Obviously, if the question of this post is answered, then together with the thesis it will imply that all embedded tori in ℝ⁴ are smoothly isotopic (in other words, there are no knotted tori in ℝ⁴).



I am told that this is, in fact, true, but that every known proof uses symplectic topology and Lagrangian tori. However, I have no idea how to approach the question in the title, whether it's easy or hard, or whether it involves any pseudoholomorphic curves.

ag.algebraic geometry - A hands-on description of a "completion" of the free commutative monoid on countably many generators

Preliminaries



The first part of your question doesn't use the bialgebra structure. That is, you have the space of functions on countably many points, which I'll denote $A = \mathbb{C}_1\times \mathbb{C}_2\times\cdots\times\mathbb{C}_n\times\cdots$, equipped with $+$ and $\times$ pointwise. You would like to classify all algebra maps $\varphi: A\to \mathbb{C}$.



To start, note that idempotents must go to idempotents, of which $\mathbb{C}$ has only two: $0$ and $1$. Moreover, $1\times1\times\cdots \mapsto 1$ (unless the whole map is trivial).



Classification of points



Consider the characteristic functions $\lambda_n$, taking value $1$ at the point labeled $n$ and $0$ elsewhere. As $\lambda_n^2 = \lambda_n$, we must have $\varphi(\lambda_n) = 0$ or $1$. There are three cases:



  • there are two indices $i, j$ such that $\varphi(\lambda_i) = \varphi(\lambda_j) = 1$. This is impossible: $0 = \varphi(\lambda_i\lambda_j) = \varphi(\lambda_i)\varphi(\lambda_j) = 1$.

  • there exists exactly one index $i$ for which $\varphi(\lambda_i) = 1$. This implies $\varphi(f) = \varphi((1-\lambda_i)f + \lambda_if) = f(i)$.

  • all the functions $\lambda_i$, and therefore all finitely supported elements of $A$, lie in the kernel of $\varphi$.

The maps of the latter type are indeed the "extra" points. Note, however, that they are "wild" and can easily be killed by some extra finiteness assumption, for example the following "limit of zeroes" property: if $\varphi$ restricted to all finitely supported elements is $0$, then $\varphi$ is $0$.



Wild maps



You can construct examples of these wild maps using the axiom of choice. To do that, map every element of the form $a\times a\times\cdots$, and all of its finite modifications, to $a$; denote this set of elements by $A_0$. Next, select an arbitrary $T_1\in A$ which is transcendental over $A_0$ and map it to an arbitrary $t_1\in \mathbb{C}$. This fixes all elements that lie in the algebraic closure $A_1$ of $A_0(T_1)$. Do that again for some $T_2$, and repeat until you have nothing left. At each step you're producing a consistent map $A_n \to \mathbb{C}$, so you get the final map $A\to \mathbb{C}$ as the limit of those.



Conversely, any wild map can be constructed by the above process, assuming all the necessary set-theoretic machinery. So the answer is: the wild points are classified by maps from this terribly non-constructive sequence of $T_i$ (of the same cardinality as $A$, that is, continuum) to $\mathbb{C}$. Those are again of continuum cardinality.



Monoid structure



One is now reminded that $A$ comes with a natural basis enumerated by sequences $(n_1, n_2, \dots, n_k, \dots) \in \mathbb{Z}^+\oplus\mathbb{Z}^+\oplus\cdots\oplus\mathbb{Z}^+\oplus\cdots$ (which is isomorphic to $\mathbb{Z}_{>0}^\times$). Therefore, it should be possible to add any two points (an operation denoted $\oplus$). The following properties are clear:



  • regular points add componentwise: $n_k = n_k' + n_k''$;

  • adding a wild point to either a regular or a wild point results in a wild point.

Now, although you could write down some wild points explicitly, nearly all of them are too hard to describe. The best approximation to the resulting picture is probably this: imagine a set of wild points $W$; the whole space is then $W \times \mathbb{Z}^+\times \mathbb{Z}^+\times\cdots$. The remaining question, therefore, is what monoid $W$ is. For that, you need to settle these questions:



  • is it true that you cannot have $w_1 \oplus w_2 \oplus \cdots \oplus w_n = r$, where $r$ is a regular point;

  • whether you can subtract wild points;

  • whether and how wild points are divisible by naturals.

Operations on wild points



I think now is certainly the time to post another question :)

st.statistics - Is there a text on estimation theory online?

I was referred to this text:
Hogg & Craig, Introduction to Mathematical Statistics, Prentice-Hall.
After browsing through it a bit, I found it not so suitable and often garbled.



UPDATE
And here is one which fits my needs better:
Kay, S. M., Fundamentals of Statistical Signal Processing: Estimation Theory.

fa.functional analysis - Asymptotic non-distortion of the separable Hilbert space

By the work of E. Odell and Th. Schlumprecht, we know that the
separable Hilbert space $\ell_2$ is arbitrarily distortable. But
I don't know whether an "asymptotic" version of their result is true.
To state my question, let me introduce the following terminology:



Let $(X, \|\cdot\|)$ be a separable Banach space with a normalized
Schauder basis $(e_n)$, and let $C \geq 1$. Let us say that $X$ is
asymptotically non-distortable with constant $C$ (and with respect to
the basis $(e_n)$ of $X$) if for every equivalent norm $|\cdot|$ on $X$
there exists a semi-normalized block sequence $(v_n)$ of $(e_n)$ such that
for every $k$, every $k \leq n_1 < \dots < n_k$ and every pair $x$ and $y$
of vectors in the span of $\{ v_{n_1}, \dots, v_{n_k} \}$ with
$\|x\| = \|y\| = 1$ we have $|x| / |y| \leq C$.



Question: does there exist $C\geq 1$ such that the separable Hilbert
space $\ell_2$ is asymptotically non-distortable with constant $C$ with
respect to its standard unit vector basis? If this is true, can we take
$C$ to be $1+\epsilon$ for every $\epsilon > 0$?



Of course, a similar question can be asked for a general Banach space
with a Schauder basis. I think that every asymptotic $\ell_1$ space
is asymptotically non-distortable for some $C\geq 1$, and for Tsirelson's
space we can take $C$ to be $2+\epsilon$ for every $\epsilon > 0$.
Let me recall that there exist arbitrarily distortable asymptotic
$\ell_1$ spaces.

speed - How fast is a comet moving when it crosses Earth's orbit?

Speeds vary quite a bit, but most comets crossing Earth's orbit move at between 10 and 70 km/s.



If a comet is a periodic comet, it must have an elliptic orbit around the Sun. That gives an upper limit on its speed: the escape speed from the Solar System at the orbit of the Earth, which is around 42 km/s.



But this escape speed is in the reference frame of the Sun. The Earth is moving in this reference frame at around 30 km/s, on a nearly circular orbit.



Between the escape speed and the speed of a circular orbit at the same radius there is always a factor of $\sqrt{2}$; this follows directly from the vis-viva equation.



In principle it would be possible to find extrasolar comets (a speed larger than around 70 km/s would be a clear signature of remote origin), but so far none have been observed.
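The numbers above follow directly from the vis-viva relation; here is a quick check (the standard values for the Sun's gravitational parameter and the astronomical unit are assumed):

```python
import math

GM_SUN = 1.32712440018e20   # heliocentric gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

v_circ = math.sqrt(GM_SUN / AU)        # Earth's near-circular orbital speed
v_esc = math.sqrt(2 * GM_SUN / AU)     # escape speed at 1 AU

print(round(v_circ / 1000, 1))         # → 29.8  (km/s)
print(round(v_esc / 1000, 1))          # → 42.1  (km/s)
print(round((v_esc + v_circ) / 1000))  # → 72    (km/s, head-on encounter limit)
```

The ratio `v_esc / v_circ` is exactly $\sqrt{2}$, and the ~70 km/s figure quoted above is just the sum of the two speeds for a head-on encounter.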

Friday, 28 June 2013

ag.algebraic geometry - Serre's FAC in English

Has somebody translated J.-P. Serre's "Faisceaux algébriques cohérents" into English? At least part of it?



In a fit of enthusiasm, I started translating it and started TeXing. But after section 8, I got tired and stopped.



However, if somebody else has already taken the trouble, I would be most grateful. I do not know a word of French (except maybe faisceau), and I forgot whatever I learned in the process of translation very quickly.



This is made community wiki, as I do not want to get into reputation issues. Please feel free to close this if you think this question is inappropriate for MO (I have added my own vote for closing, in case this helps). I would be happy to receive answers in comments.

simplicial stuff - Ambiguous definition of "nerve of an open covering" on wikipedia?

Let $(U_i)_{i\in I}$ be an open covering of a topological space $X$.



At http://en.wikipedia.org/wiki/Nerve_of_an_open_covering,
the nerve of the open covering is defined as follows:




the nerve $N$ is the set of finite subsets of $I$ defined as follows:



  • the empty set belongs to $N$;

  • a finite set $J\subset I$ belongs to $N$ if and only if the intersection of the $U_i$ whose indices lie in $J$ is non-empty.



On the other hand, http://en.wikipedia.org/wiki/Nerve_(category_theory) states:




If $X$ is a topological space with open cover $U_i$, the nerve of the cover is obtained from the above definitions by replacing the cover with the category obtained by regarding the cover as a partially ordered set with relation that of set inclusion.




Here, "the above definitions" refers to the usual construction of the nerve of a category: A vertex for each object, and a $k$-simplex for each $k$-tuple of composable morphisms.



My question is: Does this categorical construction really yield the previously defined nerve of the open covering?



For instance, cover the interval by two intersecting intervals, neither of them containing the other. Then it seems to me that the first construction yields two vertices connected by an edge, while the second construction yields two bare vertices.



What am I missing?



If the second definition is indeed wrong, what is the right way to obtain the nerve of an open covering as a special case of the nerve of a category?
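The first (covering) construction in the interval example can be checked mechanically; this is a small illustrative sketch, with the interval endpoints being my own arbitrary choice:

```python
from itertools import combinations

def nerve(cover):
    """Nerve of a cover by open intervals (a, b): all finite index sets
    whose members have a common point. The empty set is included, as in
    the first definition quoted above."""
    simplices = {frozenset()}
    names = list(cover)
    for k in range(1, len(names) + 1):
        for J in combinations(names, k):
            lo = max(cover[j][0] for j in J)
            hi = min(cover[j][1] for j in J)
            if lo < hi:  # the open intervals indexed by J intersect
                simplices.add(frozenset(J))
    return simplices

# Two intersecting intervals covering (0, 1), neither containing the other:
N = nerve({"U1": (0.0, 0.6), "U2": (0.4, 1.0)})
# N contains the vertices {U1}, {U2} and the edge {U1, U2}
```

This confirms the first construction gives an edge between the two vertices, which is the geometrically expected answer (the nerve of this cover should be contractible, like the interval).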

planet - How do proto-planetary nebulae gain momentum?

The angular momentum comes from the chaotic/turbulent collapse of the parent molecular cloud. The collapse is turbulent because the initial conditions in the parent cloud are not spherically symmetric. This leaves the proto-stellar clumps with non-zero angular momentum even if the parent cloud started with zero angular momentum. The sum of the angular momenta of all the material of the original cloud after fragmentation is still that of the cloud before fragmentation.



Some other/external triggering events for molecular cloud collapse can impart non-zero angular momentum to the cloud as a whole, but this is not necessary for the individual fragments to end up with angular momentum.

Thursday, 27 June 2013

computational geometry - What is the most general class of metric spaces for which the closest pair of points in a finite subset can be found in time O(n^(1+eps))?

I assume you are aware of the classic paper by Jon Bentley,
"Multidimensional divide-and-conquer"
[Commun. ACM 23(4):214-229 (1980)],
in which he showed how to find the closest pair of points in $\mathbb{R}^3$
in the Euclidean metric in $O(n \log n)$ time.
His algorithm works in arbitrary dimensions in $O(n \log^{d-1} n)$.
I realize I am not answering your question about metric spaces, but it might be worth revisiting
his algorithm to see how heavily it leans on the norm.



Rabin's 1976 randomized algorithm achieves $O(n)$ expected time.
An updated detailed analysis is in the paper
"A Reliable Randomized Algorithm for the Closest-Pair Problem"
by Martin Dietzfelbinger, Torben Hagerup, Jyrki Katajainen, and Martti Penttonen
[Journal of Algorithms 25(1): 19-51 (1997)].
Again I am not addressing your focus on other metric spaces, but these efficient algorithms
for Euclidean distance would be a place to start.
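For reference, the planar divide-and-conquer scheme these papers build on can be sketched as follows; this is my own illustrative implementation, not code from either paper, and it assumes at least two input points:

```python
import math

def closest_pair(points):
    """O(n log n) divide-and-conquer closest pair in the plane.
    Returns (distance, p, q); assumes len(points) >= 2."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def solve(px):
        n = len(px)
        if n <= 3:  # brute-force the base case
            return min((dist(a, b), a, b)
                       for i, a in enumerate(px) for b in px[i + 1:])
        mid = n // 2
        midx = px[mid][0]
        best = min(solve(px[:mid]), solve(px[mid:]))
        d = best[0]
        # candidates within d of the dividing line, scanned in y-order;
        # a packing argument shows only a bounded window of neighbors matters
        strip = sorted((p for p in px if abs(p[0] - midx) < d),
                       key=lambda p: p[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:
                if b[1] - a[1] >= d:
                    break
                cand = (dist(a, b), a, b)
                if cand < best:
                    best, d = cand, cand[0]
        return best

    return solve(sorted(points))  # pre-sort by x once

d, p, q = closest_pair([(0, 0), (5, 5), (1, 1), (9, 0), (0.5, 0.4)])
```

Note how the merge step relies on the Euclidean norm (the strip and the bounded neighbor window), which is exactly the dependence worth examining before generalizing to other metrics.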

the sun - How stable are Lissajous orbits?

Now that the Gaia Space Telescope is on its way to the Sun-Earth L2 Lagrangian point (SEL2), I have started wondering about the stability of Gaia's orbit there. The Planck Telescope is already there, as was the Wilkinson Microwave Anisotropy Probe (WMAP) and other probes, and from Wikipedia I learned that:




In practice, any orbit around Lagrangian points L1, L2, or L3 is
dynamically unstable, meaning small departures from equilibrium grow
exponentially over time.




Gaia has some kind of Orbital Maneuvering System (to borrow a Space Shuttle term) and some propellant on board, as does Planck. However, I wonder how deterministic these orbits are, and whether both Planck and Gaia have automatic corrections and collision detection in their flight computers; L2 is "only" 1.5 million km (or about 5 light-seconds) away, so surely there's time for manual correction.



Does anyone know a source which tells how different Gaia's and Planck's orbits are, whether there are intersections between their orbital planes, or even how likely the need for an unplanned orbital correction is? I know Lissajous figures from maths classes, and I know how much the projected trail can differ depending on the precision of the data types used in the calculations (e.g. float vs. double). How do ESA/NASA handle this, now that SEL2 seems set to become a crowded place?
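This is of course not how ESA propagates orbits, but the float-vs-double point can be illustrated with a toy oscillator integration, rounding every intermediate result to single precision in one of the two runs (the oscillator, step size, and step count are arbitrary choices of mine):

```python
import struct

def f32(x):
    # round a Python float (double precision) to the nearest IEEE-754 single
    return struct.unpack('f', struct.pack('f', x))[0]

def step(x, v, dt, rnd=lambda z: z):
    # semi-implicit Euler for x'' = -x; `rnd` optionally truncates precision
    v = rnd(v - x * dt)
    x = rnd(x + v * dt)
    return x, v

x64, v64 = 1.0, 0.0
x32, v32 = 1.0, 0.0
for _ in range(100_000):
    x64, v64 = step(x64, v64, 0.001)
    x32, v32 = step(x32, v32, 0.001, f32)

drift = abs(x64 - x32)  # the two trajectories visibly separate over time
```

A simplification to be aware of: each arithmetic expression is still evaluated in double precision and only the result is rounded, so this slightly understates true single-precision error. The qualitative point stands: the lower-precision trajectory steadily drifts away from the higher-precision one.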

rt.representation theory - Relation between group representations and elements of group cohomology groups

Having already seen group cohomology, I was just introduced to the formula $U \otimes \mathrm{Ind}\,W = \mathrm{Ind}(\mathrm{Res}(U) \otimes W)$ from representation theory. This seems oddly like the formula $\mathrm{Cor}(u) \cup v = \mathrm{Cor}(u \cup \mathrm{Res}(v))$, which can be found as Proposition 1.39 in Chapter 2 of Milne's CFT notes. Can one be proven from the other? In one case, $U, W$ are actual modules, whereas in the other case, $u, v$ are elements of modules. Maybe this means that certain $G$-modules might somehow classify representations, and the cup product would represent the tensor product of representations?

Wednesday, 26 June 2013

terminology - What to call the elements of a tensor product.

This issue is similar to what someone faces when dealing with a polynomial expression
$$
c_n\alpha^n + c_{n-1}\alpha^{n-1} + \cdots + c_1\alpha + c_0
$$
where $\alpha$ actually satisfies an equation of degree smaller than $n$. Logically speaking, such expressions can be written in multiple ways (consider a quartic polynomial expression in $\sqrt{3}$), but nobody has a problem speaking about the $i$th term in the expression.



Just do the same thing when you write down an elementary tensor $v_1 \otimes v_2 \otimes \cdots \otimes v_k$: call $v_i$ the $i$th component (or $i$th term, or perhaps even the $i$th factor). Now comes the issue of who your audience is (which you didn't indicate). If your audience is experts, then it will be clear to them that whatever you're doing with $v_i$ eventually leads to some well-defined result in terms of the tensor itself, so there's nothing more to say.



If your audience is students, to whom the tensor product is still somewhat new, then be sure to remind them that mathematically an elementary tensor does not have well-defined components, since an elementary tensor could be written as an elementary tensor in multiple ways. You might then mention the example of polynomial expressions as above which could be written in multiple ways, as an analogy.

fields - What is it called if a vector space doesn't have an additive inverse?

So you have, for any two members $A$ and $B$ of the algebraic structure and any nonnegative real values $a$, $b$:



two operations, $*$ and $+$, such that



$a*A + b*A = (a+b)*A$ is in the structure,



$A + B = B + A$ is in the structure,



$0*A + B = B$,



but there is no guarantee that an $X$ such that



$X + A = B$



is in the structure.




As an example, the set of 2-dimensional Cartesian vectors that are in the first quadrant (i.e., x>=0 and y>=0) has the properties that I want. You can add them, scale them, but if you try to subtract them, you might leave the first quadrant.
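The first-quadrant example can be sketched directly; this is a toy illustration with helper names of my own invention:

```python
# Vectors in the closed first quadrant: closed under + and under scaling by
# nonnegative reals, but with no additive inverses (the structure being
# asked about: a cone, i.e. a module over the semiring of nonnegative reals).

def in_cone(v):
    return v[0] >= 0 and v[1] >= 0

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    assert a >= 0, "only nonnegative scalars keep us in the cone"
    return (a * v[0], a * v[1])

u, v = (2.0, 1.0), (0.5, 3.0)
assert in_cone(add(u, v)) and in_cone(scale(1.5, u))

# But solving X + u = v would require X = (-1.5, 2.0), which leaves the
# quadrant, so subtraction is not always possible:
X = (v[0] - u[0], v[1] - u[1])
assert not in_cone(X)
```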



Thanks very much!

at.algebraic topology - Attaching maps for Grassmann manifolds

The Grassmannian $G_n(\mathbb{R}^k)$ of $n$-planes in $\mathbb{R}^k$ has a CW-complex structure coming from the Schubert cell decomposition.



What is known about the attaching maps in this CW-complex structure?



I understand that a lot of work has been done to try to understand the answer to this question using things like Schubert calculus, Young diagrams, Steenrod operations, etc. I'd like to see some kind of collection of known results about the attaching maps and the specific techniques used to obtain those results.



I'm also interested in the case of the complex Grassmannians.

Monday, 24 June 2013

solar system - How did Meeus calculate equinox and solstice dates?

In Astronomical Algorithms (2nd ed., ch. 27, 2009 corrected printing), Jean Meeus gives expressions to calculate the date and time (dynamical time, equivalent to Terrestrial Time) of equinoxes and solstices from the year -1000 to the year +3000. The expressions are accurate to 51 seconds or better for the years 1951-2050. First, what Meeus calls the "instant of the 'mean' equinox or solstice" is calculated using a fourth-degree polynomial; there are 8 such expressions: one for each solstice or equinox, for each of the year ranges -1000 to +1000 and +1000 to +3000. Then two corrections are applied; the corrections are calculated the same way no matter which time period or which equinox or solstice is being corrected. The first step is to calculate:



$$T = \frac{\text{mean JD of event} - 2451545.0}{36525}$$



$$W = 35999.373°\,T - 2.47°$$



$$\Delta\lambda = 1 + 0.334 \cos W + 0.007 \cos 2W$$



Next, an additional correction is computed involving 24 periodic terms with various periods.



Can anyone explain, in general terms, how Meeus derived these expressions? I'm especially interested in understanding what the "mean" value represents.
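The correction step quoted above is easy to put into code. The sample Julian Date below (roughly the June 2000 solstice) is only a plausible illustrative input, and the 24-term periodic sum is not reproduced here:

```python
import math

def correction_factor(mean_jde):
    """Compute T, W and the factor Δλ from the 'mean' Julian Ephemeris Date
    of an equinox/solstice, following the three formulas quoted above."""
    T = (mean_jde - 2451545.0) / 36525.0      # Julian centuries since J2000.0
    W = math.radians(35999.373 * T - 2.47)    # W is specified in degrees
    dlam = 1 + 0.334 * math.cos(W) + 0.007 * math.cos(2 * W)
    return T, W, dlam

T, W, dlam = correction_factor(2451716.57)    # sample mean JD, June 2000
# Δλ is necessarily bounded between 1 - 0.334 - 0.007 and 1 + 0.334 + 0.007
```

The argument of $W$ is close to the Earth's mean anomaly in degrees, so $\Delta\lambda$ looks like a correction for the varying apparent speed of the Sun along the ecliptic over the eccentric orbit; that is my reading, not a statement from the book.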

pr.probability - Can you explain a step in an expectation maximization algorithm in a Nature article?

These numbers are the normalized likelihoods that the results given in the 10-toss vector
are obtained from the current distribution of coin A (or, respectively, coin B).



I'll work out the first two rows for illustration:



The guessed Bernoulli parameter for type A is 0.6 and for type B is 0.5.
According to the binomial distribution formula,
the unnormalized likelihood of obtaining 5H 5T is,
from A:



L_A = C(10,5)(0.6)^5(0.4)^5



where C(10,5) is the binomial coefficient 10!/(5!5!)



Similarly from B we obtain:



L_B = C(10,5)(0.5)^5(0.5)^5



The normalized likelihoods are obtained as



For A: L_A/(L_A+L_B) = 0.4491



For B: L_B/(L_A+L_B) = 0.5509



For the second case, 9H 1T:



L_A = C(10,9)(0.6)^9(0.4)^1



L_B = C(10,9)(0.5)^9(0.5)^1



The normalized likelihoods:



For A: L_A/(L_A+L_B) = 0.8050



For B: L_B/(L_A+L_B) = 0.1950
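The computation above is easy to reproduce; here is a minimal sketch (the function name is mine, and the shared binomial coefficient cancels on normalization anyway):

```python
from math import comb

def normalized_likelihoods(heads, tosses, p_a=0.6, p_b=0.5):
    # Binomial likelihood of the observed heads count under each coin's
    # current parameter guess, then normalized to sum to 1 (the E-step
    # "responsibilities" in the EM iteration).
    tails = tosses - heads
    l_a = comb(tosses, heads) * p_a**heads * (1 - p_a)**tails
    l_b = comb(tosses, heads) * p_b**heads * (1 - p_b)**tails
    return l_a / (l_a + l_b), l_b / (l_a + l_b)

print(normalized_likelihoods(5, 10))  # → approximately (0.4491, 0.5509)
print(normalized_likelihoods(9, 10))  # → approximately (0.8050, 0.1950)
```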

Saturday, 22 June 2013

rt.representation theory - Do Jones-Wenzl idempotents lift to anything interesting in the Hecke algebra?

Background



Inside the Temperley-Lieb algebra $TL_n$ (with loop value $\delta=-[2]$ and standard generators $e_1,\ldots,e_{n-1}$), the Jones-Wenzl idempotent is the unique non-zero element $f^{(n)}$ satisfying
$$ f^{(n)}f^{(n)} = f^{(n)} \quad \textrm{and} \quad e_i\,f^{(n)} = 0 = f^{(n)}e_i \quad \textrm{for each } i.$$
Consider the Iwahori-Hecke algebra $\mathcal{H}_n$, $n\ge3$, normalized so that $(T_i-q)(T_i+q^{-1})=0$, where $q$ is generic. Let $\mathcal{I}$ be the two-sided cellular ideal generated by the canonical basis element
$$C_{121} = T_1T_2T_1-qT_1T_2-qT_2T_1+q^2T_1+q^2T_2-q^3.$$
The assignment $\mathcal{H}_n \rightarrow TL_n$ given by $T_i \mapsto e_i + q$ is a surjective $\mathbb{C}(q)$-algebra homomorphism with kernel $\mathcal{I}$.



We can lift the generators $e_i$ of $TL_n$ to the Kazhdan-Lusztig elements $C_i=T_i-q \in \mathcal{H}_n$. In fact, we have $C_{121} = C_1C_2C_1 - C_1$, hence the relation below. Rescaling a bit, $E=-\frac{1}{[3]!}C_{121}$ is an idempotent, corresponding to the partition $(1,1,1)$. Actually, all of the primitive idempotents in $\mathcal{H}_n$ that correspond to Young diagrams with more than two rows live in the ideal $\mathcal{I}$.



Now, any preimage of $f^{(n)}$ in the Hecke algebra (call it $F^{(n)}$) satisfies
$$F^{(n)}F^{(n)} \equiv F^{(n)} \quad \textrm{and} \quad C_iF^{(n)} \equiv 0 \equiv F^{(n)}C_i \quad (\operatorname{mod} \mathcal{I})$$



Question




Can we choose $F^{(n)}$ to be an idempotent in $\mathcal{H}_n$?




When $n=2$, the map is an isomorphism and we have no choice:
$$F^{(2)} = \frac{1}{[2]}(T_1+q^{-1}),$$
which projects onto the $q$-eigenspace of $T_1$. In other words, it is the idempotent corresponding to the partition $(2)$.
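As a sanity check on the $n=2$ formula (not part of the argument above), one can verify $F^{(2)}F^{(2)} = F^{(2)}$ at a generic rational value of $q$, multiplying in the basis $\{1, T_1\}$ via the quadratic relation $T_1^2 = (q - q^{-1})T_1 + 1$:

```python
from fractions import Fraction

q = Fraction(3, 2)   # any generic rational value of q will do
qi = 1 / q

# represent elements a + b*T_1 as pairs (a, b); reduce with the
# quadratic relation T_1^2 = (q - q^{-1})*T_1 + 1
def mul(x, y):
    a, b = x
    c, d = y
    # (a + bT)(c + dT) = ac + (ad + bc)T + bd*T^2
    return (a * c + b * d, a * d + b * c + b * d * (q - qi))

two = q + qi                  # the quantum integer [2]
F = (qi / two, 1 / two)       # F^{(2)} = (T_1 + q^{-1}) / [2]
assert mul(F, F) == F         # exact idempotency over the rationals
```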

Friday, 21 June 2013

orbit - How to convert a gravitational force to speed and direction

You must apply Newton's law
$$ m\frac{\mathrm{d}\boldsymbol{v}}{\mathrm{d}t} = \boldsymbol{F},$$
which relates the force $\boldsymbol{F}$ to the acceleration (i.e., the change of velocity). Note that positions, velocities, and forces are all vector quantities. Also note that as the objects move (change position), their mutual forces change (in both direction and strength).



There is an enormous amount of literature on the resulting orbits of celestial bodies, going all the way back to Kepler and Newton. I suggest you look into elementary books or start with some Wikipedia articles.
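As a minimal sketch of how the force translates into changes of speed and direction (a toy semi-implicit Euler integration in SI units, with the central mass and radius chosen arbitrarily for a near-circular orbit):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accel(pos, other_pos, other_mass):
    # acceleration of a body due to the gravity of the other body
    dx = other_pos[0] - pos[0]
    dy = other_pos[1] - pos[1]
    r = math.hypot(dx, dy)
    a = G * other_mass / r**2
    return (a * dx / r, a * dy / r)  # direction: toward the other body

M = 5.97e24                          # Earth-like central mass, kg
r0 = 7.0e6                           # initial radius, m
x, y = r0, 0.0
vx, vy = 0.0, math.sqrt(G * M / r0)  # circular-orbit speed, from F = mv^2/r

dt = 1.0
for _ in range(2000):
    ax, ay = accel((x, y), (0.0, 0.0), M)
    vx += ax * dt                    # the force changes the velocity...
    vy += ay * dt
    x += vx * dt                     # ...and the velocity changes the position
    y += vy * dt
# for a circular orbit, the radius stays close to r0 throughout
```

Updating the velocity before the position (semi-implicit Euler) keeps the orbit from spiraling outward, which plain explicit Euler does; real ephemeris work uses far more careful integrators.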

mathematical modeling - Which way is the right way to calculate auto correlation function

I found the following paper, which deals specifically with artifacts when autocorrelation should be determined from a set of several finite/short time courses instead of one long time course:



"Some Effects of Finite Sample Size and Persistence on Meteorological Statistics. Part I: Autocorrelations" Kevin E. Trenberth



http://www.cgd.ucar.edu/staff/trenbert/trenberth.papers/i1520-0493-112-12-2369.pdf



It is pretty straightforward about the artifacts arising from short sampling times, and also about how to get rid of them. The paper does not have much higher math in it (which I like; others maybe not), but it is widely cited. I found it easier to see how to correct the artifacts from this publication than from some other, "proper math" papers.



In essence, the problems are brought about by the fact that the time series can be too short to contain the full range of fluctuations as well as the full persistence of these fluctuations. It leads to an underestimation of the autocorrelation function.



For a set of $N$ time courses $x_n(i)$ ($n=1,\dots,N$, $i=1,\dots,I$) that supposedly stem from the same process, you first calculate the whole-sample mean,
$\overline{x} = \frac{1}{NI}\sum_{n=1}^N \left( \sum_{i=1}^I x_n(i) \right)$.



Next, you calculate the non-normalized autocorrelation function for every individual time course $n$ and lag number $k$ ($0\leq k\leq I$): $\tilde{a}_n(k) = \langle (x_n(j)-\overline{x})(x_n(j+k)-\overline{x}) \rangle_{j=1,\dots,I-k}$, with deviations taken about the whole-sample mean.



The normalized autocorrelation of each individual sample $x_n$ at lag $k$ is now $a_n(k) = \tilde{a}_n(k)/\tilde{a}_n(0)$.



Finally, the corrected autocorrelation function of the process that was repeatedly sampled in the different time courses $x_n$ is the average of all the individual autocorrelation functions, $a(k) = \langle a_n(k) \rangle_{n=1,\dots,N}$.
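The recipe can be put into code directly; this is a small sketch with invented function names and toy data:

```python
def corrected_autocorrelation(courses, max_lag):
    """Average autocorrelation over several short time courses of equal
    length, using the whole-sample (grand) mean, following the recipe
    described above."""
    N = len(courses)
    I = len(courses[0])
    grand_mean = sum(sum(c) for c in courses) / (N * I)

    def acov(c, k):
        # non-normalized autocovariance about the grand mean at lag k
        return sum((c[j] - grand_mean) * (c[j + k] - grand_mean)
                   for j in range(I - k)) / (I - k)

    # normalize each course by its own lag-0 value, then average over courses
    return [sum(acov(c, k) / acov(c, 0) for c in courses) / N
            for k in range(max_lag + 1)]

# two toy "repeated measurements" of the same slowly varying process
a = corrected_autocorrelation([[0, 1, 2, 3, 2, 1, 0, 1],
                               [1, 2, 3, 2, 1, 0, 1, 2]], 3)
# a[0] == 1 by construction; neighboring samples correlate positively
```

Using the grand mean rather than each course's own mean is the crux: subtracting per-course means from short series is what biases the autocorrelation low in the first place.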



It might not help the person who asked the original question, but perhaps it will help others searching for the same thing.

star - Do solar systems have to evolve in a galaxy?

Evidence for stars evolving alone outside galaxies is very hard to get. And for a good reason: those stars are very hard to detect.



Galaxies are quite easy to observe: one of the reasons is that the surface brightness of a galaxy does not change with distance (a good explanation here: http://mysite.verizon.net/vze55p46/id18.html, 5th paragraph). (Note: this is no longer true at cosmological distances, because the expansion plays a role, but the statement holds for the nearby universe on which we focus now.)



However, observing individual stars is very hard, as they do not have a resolved surface brightness: the light we receive falls off as 1/radius². Do stars outside galaxies exist? Yes: we have evidence for intergalactic stars. See here for example: http://hubblesite.org/newscenter/archive/releases/1997/02/text/, which I think was the first such discovery (can anyone back me up?). Or see this more recent paper about "rogue stars" ejected from our Galaxy: http://lanl.arxiv.org/abs/1202.2152. These findings are not surprising, as we expect that stars are sometimes ejected from galaxies by gravitational forces. Can these stars carry their planetary systems along with them? On this question I am not an expert, so I won't give an answer.



That was for stars created inside galaxies. What about star formation outside galaxies? This is a recent subject of discussion but we have growing evidence that this could be possible. Here is a paper about the discovery of intergalactic HII regions, which are regions of star formation outside galaxies: http://arxiv.org/abs/astro-ph/0310674.



We have therefore shown that:
1/ there are stars outside galaxies;
2/ there is evidence for star formation outside galaxies.



In the first case, it may be possible that these stars carry their planetary systems with them. In the second case, it sounds plausible that the forming stars will build up their torus of material, which might give birth to planetary systems. There is, however, no evidence for this yet (to my mind).



I hope my answer clarifies your mind!

Thursday, 20 June 2013

set theory - Characterizations of non-wellfounded models?

There is a large body of work studying ill-founded models of set theory. The goal is to provide a robust model theory for models of set theory, usually focussing on the countable models. Much of this theory grows out of the study of nonstandard models of arithmetic,
and many tools and theorems from models of arithmetic generalize to the study of models of ZFC.



Let me give a few examples. If M is a model of ZF, one can form the standard system of M by looking at the trace on the standard ω of all the reals of M. It is easy to see that Ssy(M) is a Boolean algebra, closed under Turing reducibility and if T is an infinite binary tree coded in Ssy(M), then there is a path through T coded in Ssy(M). Any set of reals with those three properties is called a Scott set, in honor of Dana Scott, who proved the following amazing characterization:



Theorem (Scott). If ZFC is consistent, then every countable Scott set arises as the standard system of a model of ZFC.



Scott's theorem is usually stated for models of PA, but the proof for ZFC is identical. It remains a big open question whether Scott's theorem holds for all uncountable Scott sets. The answer is known for Scott sets of size ω1, and so under CH the problem is solved, but it remains open when CH fails.



A key definition is that a model M of ZFC is computably saturated if it realizes every finitely consistent computable type over M. All such models are ω-nonstandard. It turns out that M is computably saturated if and only if it is (isomorphic to) a model that is an element of an ω-nonstandard model of ZFC. Furthermore, these models have interesting closure properties.



Theorem. Any two countable computably saturated models of ZFC with the same standard system and same theory are isomorphic.



Theorem. Every countable computably saturated model M of ZFC is isomorphic to a rank initial segment Vα of itself.



Much of the analysis of models of PA, such as that in the book by Jim Schmerl (UConn) and Roman Kossak (CUNY) extends to models of ZFC. Ali Enayat has also done a lot of interesting work along these lines.



Here is another interesting theorem having to do with nonstandard ZFC models. Let ZFC* be any finite fragment of ZFC. If there is a (very small) large cardinal, then one can use full ZFC in this theorem (e.g. it suffices that there is some uncountable θ with Lθ satisfying ZFC). The theorem is interesting in the case that there are nonconstructible reals.



Theorem. Every real x is an element of a model of ZFC*+V=L. Furthermore, one can find such a model whose ordinals are well-founded at least to α, for any desired countable ordinal α.



Proof. First, the statement of the theorem is definitely true in L, since every real in L is in some large Lθ. Second, the complexity of the statement is Σ^1_2 in x and a real coding α. Thus, by Shoenfield's Absoluteness theorem, it is true in V. QED



Thus, even when x is non-constructible, it can still exist in a model of V=L! This is quite remarkable. (This theorem was shown to me by Adrian Mathias, but I'm not sure to whom it is originally due.)

ac.commutative algebra - Trace map attached to a finite homomorphism of noetherian rings

This is really a response to Karl's beautiful example; I'm posting it as an "answer" only because there isn't enough room to leave it as a comment.



The condition on conductor ideals is one that I had come across by thinking about the dual picture.
Namely, let $f:Y\rightarrow X$ be a finite map of 1-dimensional proper and reduced schemes over an algebraically closed field $k$. Then $Y$ and $X$ are Cohen-Macaulay by Serre's criterion, so the machinery of Grothendieck duality applies. In particular, the sheaves
$f_*O_Y$ and $f_*\omega_Y$ are dual via the duality functor $\mathcal{H}om(\cdot,\omega_X)$, as are
$O_X$ and $\omega_X$. Here, $\omega_X$ and $\omega_Y$ are the relative dualizing sheaves of
$X$ and $Y$, respectively. Thus, the existence of a trace morphism $f_*O_Y\rightarrow O_X$
is equivalent by duality to the existence of a pullback map on dualizing sheaves $\omega_X\rightarrow f_*\omega_Y$.



In the reduced case which we are in, one has Rosenlicht's explicit description of the dualizing sheaf: for any open $V$ in $X$, the $O_X(V)$-module $\omega_X(V)$ is exactly the set of
meromorphic differentials $\eta$ on the normalization $\pi:X'\rightarrow X$ with the property that $$\sum_{x'\in \pi^{-1}(x)} \mathrm{res}_{x'}(s\eta)=0$$
for all $x\in V(k)$ and all $s\in O_{X,x}$.



It is not difficult to prove that if $C$ is the conductor ideal of $X'\rightarrow X$
(which is a coherent ideal sheaf on $X'$ supported at preimages of non-smooth points in $X$),
then one has inclusions
$$\pi_*\Omega^1_{X'} \subseteq \omega_X \subseteq \pi_*\Omega^1_{X'}(C).$$
Since $X'$ and $Y'$ are smooth, so one has a pullback map on $\Omega^1$'s, our question
about a pullback map on dualizing sheaves boils down to the following concrete question:




When does the pullback map on meromorphic differentials $\Omega^1_{k(X')}\rightarrow \pi_*\Omega^1_{k(Y')}$ carry the subsheaf $\omega_X$ into $\pi_*\omega_Y$?


By looking at the above inclusions, I was led to conjecture the necessity of conductor ideal containment as in my original post. As Karl's example shows, this containment is not sufficient.
Here is Karl's example re-worked on the dual side:



Set $B:=k[x,y]/(xy)$ and $A:=k[u,v]/(uv)$ and let $f:A\rightarrow B$ be the $k$-algebra map
taking $u$ to $x^2$ and $v$ to $y$. Writing $B'$ and $A'$ for the normalizations, we have
$B'$ and $A'$ as in Karl's example, and the conductor ideals are $(x,y)$ and $(u,v)$.
Now the pullback map on meromorphic differentials on $A'$ is just
$$(f(u)du,g(v)dv)\mapsto (2xf(x^2)dx,g(y)dy).$$
The condition of being a section of $\omega_A$ is exactly
$$\mathrm{res}_0(f(u)du)+\mathrm{res}_0(g(v)dv)=0,$$
and similarly for being a section of $\omega_B$. But now we notice that
$$\mathrm{res}_0(2xf(x^2)dx)+\mathrm{res}_0(g(y)dy) = 2\,\mathrm{res}_0(f(u)du) + \mathrm{res}_0(g(v)dv) = \mathrm{res}_0(f(u)du)$$
if $(f(u)du,g(v)dv)$ is a section of $\omega_A$. Thus, as soon as $f(u)du$ is not
holomorphic (i.e. has nonzero residue), the pullback of the section
$(f(u)du,g(v)dv)$, as a meromorphic differential on $B'$, will NOT lie in the subsheaf $\omega_B$.



Clearly what goes wrong is that the ramification indices of the map $f:A'\rightarrow B'$
over the two preimages of the nonsmooth point are NOT equal. With this in mind, I propose the following addendum to my original number 4):




In the notation of 4) above and of Karl's post, assume that $f'(C_A)=C_B^e$
for some positive integer $e$. Then the trace map $B'\rightarrow A'$ carries $B$
into $A$.


Certainly this rules out Karl's example. I think another way of stating the condition is that the map $f':\mathrm{Spec}(B')\rightarrow \mathrm{Spec}(A')$ should be "equi-ramified" over the nonsmooth locus of $\mathrm{Spec}(A)$, i.e. that the ramification indices of $f'$ over all $x'\in \mathrm{Spec}(A')$ which map to the same nonsmooth point in $\mathrm{Spec}(A)$ are all equal.



Is this the right condition?

Tuesday, 18 June 2013

order theory - Semilattices in atomless boolean algebras

The answer to the original question is that no, in fact we can never do this.



Theorem. No nontrivial Boolean algebra has a cofinal
subset of B-{1} that is a join-semilattice. Indeed, B cannot have a cofinal subset of B-{1} that is even upward directed.



Proof. Suppose that B is a nontrivial Boolean algebra and S is a cofinal subset of B-{1}. Let b be any element of B that is neither 0 nor 1. Thus, -b is also not 1, so both b and -b must have upper bounds in S. If S is a join-semilattice, or even only upward directed, then there will be a single element of S above both b and -b. But the only such upper bound in B is 1, which is not in S. QED



This is true whether or not B is atomic. In the atomic case, it is particularly easy to see, since the co-atoms have no upper bound in B-{1}.



There is no need in the question to insist that the embedding of S into B is a semilattice embedding, since even just an order-preserving map would give an upward directed image, which the theorem rules out.



The dual version of the question, turning the lattice upside down, is the corresponding fact that no nontrivial Boolean algebra has a filter that is also dense.



(Note: I didn't know what sense of "bounded" you meant in
the first part of the question, so I ignored that...Does this matter?)




The situation with the revised version of the question is somewhat better, since as the questioner points out, there are now some examples of the phenomenon. But nevertheless, there are also counterexamples, as I explain, so the answer is still no.



The reason there are counterexamples is that the new conditions on S do not rule out the possibility that S is upward directed. Thus, the same obstacle from above still occurs. For example, any linearly ordered S is a meet-semilattice and a join-semilattice, so it can never be cofinal in a Boolean algebra. More generally, if you take any S you have in mind, and put a copy of the natural numbers on top of it (above all elements), then this will still satisfy your conditions, but since it is also upward directed, it cannot embed cofinally in a Boolean algebra.

What causes jets from newly born stars?

Did you get this from Wikipedia? If you did, I sympathize; all the entries related to this give no explanation for the phenomenon. If you didn't go there . . . Well, it won't answer your question.



But I think I've found the answer somewhere else. I'll go over some background stuff for anyone out there who doesn't know much about this, to make it a bit clearer (although you should be able to skip all this). Protostars are small pre-pre-main-sequence objects. At first, they aren't more than small clumps of gas and dust in a stellar nursery (a special nebula, perhaps). Soon, though, more matter begins to come towards a clump, and gradually the clump builds up enough matter that it begins a slight gravitational collapse into a definite object. Temperatures build, and eventually the object becomes a pre-main-sequence star.



These pre-main-sequence stars begin to accrete matter (here's where you should stop dozing off) into a circumstellar disk. At the moment, it doesn't matter if the disk will become a protoplanetary disk or something completely different; for now, we can treat it as a circumstellar accretion disk. Matter "falls" into the disk and towards the star; the star absorbs most of it. However, something else comes into play, and that's the star's magnetic field.



A lot of objects in the universe have magnetic fields, and a young pre-main-sequence star is no exception. Its field may be stronger than that of the Earth - so strong, in fact, that some of the matter in the circumstellar accretion disk follows slightly different paths than it normally would - paths along the field lines of the star's magnetic field. While there isn't enough evidence (scarcely any at all, really) to support or, more importantly, disprove this idea, it is thought that the magnetic field lines lead the matter towards a point on the star, perhaps at its poles, and then compress it together into an astrophysical jet (or bipolar outflow) that shoots out into space. Such jets are often associated with other cool phenomena, such as Herbig-Haro objects.



So for everyone who considers this post a "Too long; didn't read" answer, here's a summary:



The truth is that nobody really knows. The current explanations for this phenomenon are that they are caused by matter from accretion disks being concentrated by magnetic field lines onto a single (well, double, one on either end of an axis) point, which then becomes an astrophysical jet.



I hope this helps.




Note: This paper, specifically the second page, was extremely helpful. Other pages that might interest you (or provide more information for those who are curious about this and related topics) are:



  • Protostar

  • Herbig Ae/Be star

  • Herbig-Haro object

  • Paper #1

  • Paper #2

  • Pre-main-sequence star

  • Circumstellar disk

  • Astrophysical jet

  • Bipolar outflow

  • Softpedia.com

  • Sci-news.com

  • IOP

at.algebraic topology - Whitehead Products without Base Points?

Let $(X, x_0)$ be a pointed space. Then we can define the homotopy groups $\pi_i(X, x_0)$ for $i \geq 1$. They are abelian groups for $i \geq 2$. It is well-known that the fundamental group $\pi_1(X, x_0)$ acts on each of the higher groups $\pi_i(X, x_0)$, and that this action generalizes to the Whitehead Products which are maps



$$ \pi_p(X, x_0) \times \pi_q(X, x_0) \to \pi_{p+q-1}(X, x_0).$$



The details are given in the wikipedia article I linked to above. Together the Whitehead products turn the graded group $\pi_*(X, x_0)$ (for $* > 0$) into a graded (quasi-) Lie algebra over $\mathbb{Z}$, where the grading is shifted so that $\pi_i(X, x_0)$ is in degree $(i-1)$. Well, it is a little funny since the bottom group is not necessarily abelian.




This is all well and good, but what if we don't want to pick base points? Is there a similar algebraic gadget in that situation?




If we don't pick base points, then it seems natural to consider the fundamental groupoid $\Pi_1(X)$. Then the different homotopy groups of $X$ at different base points can be assembled into local systems on $X$. That is, for each $i \geq 2$ we have a functor,



$$\pi_i: \Pi_1 X \to AB$$



where $AB$ is the category of abelian groups. This already incorporates the action of $\pi_1$ on the higher homotopy groups but does it in a way which doesn't depend on the choice of base point.




Question: Can we enhance these local systems with a structure which generalizes the Whitehead product, and if so what precisely is this extra structure?


Monday, 17 June 2013

gn.general topology - Lebesgue measure of boundary of Caccioppoli set

The answer is no. Take countably many disjoint closed balls $B_i$ contained in the square $Q=[0,1]\times [0,1]$ and such that:

(i) the sum of the areas of the $B_i$ is less than 1;

(ii) the sum of the perimeters of the $B_i$ is finite;

(iii) $\bigcup B_i$ is dense in $Q$.

Since the series $\sum \chi_{B_i}$ converges in BV norm, the set $E=Q\setminus \bigcup B_i$ has finite perimeter. It also has positive measure and empty interior. Any representative $F$ of the set $E$ also has empty interior and therefore $\partial F$ is not Lebesgue null.




By the way, any Lebesgue measurable set E has a representative F with the property



(*) $0<|F\cap B(x,r)|<|B(x,r)|$ for all $x\in\partial F$ and all $r>0$.



The proof is straightforward: add all points $x$ for which $|E\cap B(x,r)|=|B(x,r)|$ for some $r$, and throw out all points $x$ such that $|E\cap B(x,r)|=0$ for some $r$. (See Prop. 3.1 in "Minimal surfaces and functions of bounded variation" by E. Giusti.) By virtue of (*) the set $F$ has the smallest (w.r.t. inclusion) topological boundary among all representatives of $E$, so if this representative doesn't help you, nothing does.

orbit - How did Kepler "guess" his third law from data?

It is amazing that Kepler determined his three laws by looking at data, without a calculator and using only pen and paper. It is conceivable how he proved his laws described the data after he had already conjectured them, but what I do not understand is how he guessed them in the first place.



I will focus in particular on Kepler's third law, which states that the square of the orbital period of a planet is proportional to the cube of the semi-major axis of the orbit.



I assume that Kepler was working with data about the planets only, plus our own moon, and the sun. I make this assumption because I don't think Kepler had data about other moons, comets, or asteroids, which had not been observed by telescope yet. If this is true, knowing that Neptune, Uranus, and Pluto were not yet discovered when Kepler was alive, this means Kepler had less than 9 data points to work with.



My friend claims that it is totally conceivable that Kepler guessed this relationship (although he provides no method by which Kepler might have done it), and also that Kepler's observations are "not that hard". As a challenge, I gave my friend a data table with one column labeled $x$, the other $y$, and 9 coordinates $(x,y)$ which fit the relationship $x^4=y^3$. I said "please find the relationship between $x$ and $y$", and as you might expect he failed to do so.



Please explain to me how in the world Kepler guessed this relationship from so few data points. And if my assumption that Kepler had only a small number of data points at his disposal is wrong, I still think it's quite difficult to guess this relationship without a calculator.
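For what it's worth, with modern values the relationship does jump out of a log-log fit; the sketch below (planet values from standard modern tables, not Kepler's own data) recovers the slope $3/2$ in $\log P = \tfrac{3}{2}\log a$:

```python
import numpy as np

# Semi-major axis a (AU) and orbital period P (years) for the six planets
# Kepler knew, using modern values for illustration:
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])    # Mercury..Saturn
P = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Fit log10(P) = slope * log10(a) + intercept; Kepler's third law
# (P^2 proportional to a^3) predicts slope = 1.5 and intercept = 0.
slope, intercept = np.polyfit(np.log10(a), np.log10(P), 1)
```

Of course, spotting this by eye from the raw numbers, without logarithms tabulated for you, is exactly the hard part; Kepler was in fact an early adopter of Napier's logarithms.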

Saturday, 15 June 2013

set theory - Two questions about the boolean algebra $P(kappa)/Cub^*$

I have two questions based on exercises in Kunen's set theory. Let $\kappa = \mathrm{cf}(\lambda) > \omega$. Why is there a c.u.b. $C \subseteq \lambda$ of order type $\kappa$? I thought we just choose $C$ as the image of an increasing unbounded function $\kappa \to \lambda$, but I doubt that this has to be closed.



Also, why can we use this $C$ to get an isomorphism of boolean algebras $P(\kappa)/Cub^*(\kappa) \cong P(\lambda)/Cub^*(\lambda)$? If we just pull back with $\kappa \to \lambda$, I don't see why this will be well-defined. Note that $Cub^*$ is the ideal of non-stationary subsets.



The second problem is the following: Let $\kappa$ be an uncountable regular cardinal. I want to prove that there is a decreasing sequence of stationary sets $S_\alpha$, $\alpha < \kappa$, whose diagonal intersection is $\{0\}$. This is an exercise in Kunen's set theory, and there is a hint that one should use the preceding exercise, which says that the boolean algebra $B=P(\kappa)/Cub^*(\kappa)$ has infima indexed over $\kappa$, which correspond to the diagonal intersection in $P(\kappa)$.



Here's what I've done so far: Construct a decreasing sequence in $B$: Let $x_0=1$. If $x_\alpha$ is already defined, define $x_{\alpha+1} = x_\alpha$ if $x_\alpha$ is minimal and otherwise choose some $x_{\alpha+1} < x_\alpha$. If $\alpha$ is a limit and $x_\gamma$ is defined for all $\gamma < \alpha$, let $x_\alpha$ be the infimum of these $x_\gamma$.



Now if $x_\alpha = [S_\alpha]$, then $S_\alpha$ is stationary iff $x_\alpha \neq 0$; is this the case? For $\alpha < \beta < \kappa$, we have $x_\beta \leq x_\alpha$, i.e. there is a c.u.b. $C_{\alpha,\beta}$ such that $S_\beta \cap C_{\alpha,\beta} \subseteq S_\alpha$. Now perhaps there is some double-index diagonal intersection $C$ of these $C_{\alpha,\beta}$ (?) which is c.u.b. again and which we may intersect with every $S_\alpha$, so that we may assume $S_\beta \subseteq S_\alpha$, as desired.



Finally we have to ensure that the infimum of the $x_\alpha$ is $0$. I wonder if this is true at all with this naive construction.

star - Is Nomad data heliocentric or geocentric?

I'm wondering if the data in star catalogues are heliocentric equatorial coordinates or geocentric equatorial coordinates. Also given the distance between stars compared to the distance between the sun and the earth, would this be a big deal to mix up the two?



http://en.wikipedia.org/wiki/Celestial_coordinate_system
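As a rough answer to the second part: moving the coordinate origin between the Sun and the Earth shifts a star's apparent direction by at most about $1\,\mathrm{AU}/d$ radians - this is just the star's parallax. A quick sketch (the 1.3 pc distance for the nearest star is an assumed round value):

```python
import math

AU_PER_PARSEC = 206264.806  # 1 parsec = 206264.8 AU, by definition

def max_shift_arcsec(distance_pc):
    """Largest apparent angular shift from moving the origin by 1 AU."""
    shift_rad = math.atan2(1.0, distance_pc * AU_PER_PARSEC)
    return math.degrees(shift_rad) * 3600.0   # radians -> arcseconds

nearest = max_shift_arcsec(1.3)    # nearest star: under 1 arcsecond
typical = max_shift_arcsec(100.0)  # a 100 pc catalogue star: ~10 mas
```

So mixing up the two conventions matters at the sub-arcsecond level at most: negligible for pointing a small telescope, but not for precision astrometry.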

atmosphere - Are there many faint meteors that are too faint to see with the naked eye?

The answer is yes. The mass (and hence brightness) of meteors follows some sort of power-law relationship, such that there are many more small particles than big lumps of rock. This means that there are indeed many more faint meteors than bright ones, though I am struggling to find any detailed study that goes below the magnitude range you can see with the naked eye. This paper by Kresakova (1966) seems thorough and representative. It suggests that the number of meteors goes up by a factor of 3 for each unit increase in astronomical magnitude (i.e. decreasing brightness).



The paper referred to by Barry Carter in his comments is Cook et al. (1980), which finds that the log of the cumulative number of meteors is proportional to about half the astronomical magnitude, i.e.
$$\frac{d \log \phi}{dm} \simeq 0.5.$$
This means that the number of meteors increases by a factor of $10^{0.5}$ for each unit increase in magnitude - i.e. also about 3.
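The arithmetic connecting the two quoted slopes, plus the implied gain from seeing a few magnitudes fainter (the 6-magnitude figure is my assumption for illustration, not from either paper):

```python
# Cook et al. (1980): d(log10 N)/dm ~ 0.5, i.e. each extra magnitude of
# faintness multiplies the cumulative count by 10**0.5 ~ 3.16 -- consistent
# with Kresakova's factor of ~3 per magnitude.
per_magnitude = 10 ** 0.5

# If an aid let you detect, say, 6 magnitudes fainter than the naked eye
# (an assumed figure), the cumulative count would grow by:
gain = per_magnitude ** 6   # = 10**3, i.e. about a thousand times as many
```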



It is a separate issue as to whether you see more with night vision goggles on. I guess you would because these would have light amplification properties that would enable you to see fainter objects - that's the whole point of them.

Friday, 14 June 2013

solar system - Hamiltonian of general three body problem

The condition $$\sum_{i=1}^{3}m_iq_i'=0$$
of this paper generalizes to
$$\sum_{i=1}^{n}m_iq_i'=0,$$
meaning the barycenter should be at the origin of the frame.



Equation (1) generalizes to
$$H=\frac{1}{2}\sum_{i=1}^n\frac{|p_i'|^2}{m_i}-\sum_{1\leq i<j\leq n}\frac{m_im_j}{|q_i'-q_j'|},$$
where the second sum runs over each unordered pair of bodies once.
The first part is the total kinetic energy of the system relative to the barycenter, written in terms of momenta: $$E=\frac{1}{2}mv^2=\frac{1}{2}m\left(\frac{p}{m}\right)^2=\frac{1}{2}\frac{p^2}{m}.$$
The second part is the sum of all self-potential energy between each pair of (spherical) objects, with Newton's gravitational constant $G$ set to 1.



See also n-body problem on Wikipedia.



The 3-dimensional 3-body (or n-body) problem is like the planar version. Just take 3-dimensional vectors for $p_i'$ and $q_i'$.
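A direct transcription of this Hamiltonian in Python (a minimal sketch with $G=1$ and each unordered pair counted once in the potential term; the names are mine):

```python
import numpy as np

def hamiltonian(m, q, p):
    """H = kinetic - potential for n gravitating point masses, with G = 1.

    m: shape (n,) masses; q, p: shape (n, 3) barycentric positions/momenta.
    """
    m = np.asarray(m, dtype=float)
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    kinetic = 0.5 * np.sum(np.sum(p**2, axis=1) / m)   # sum |p_i|^2 / (2 m_i)
    potential = 0.0
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):                      # each pair once
            potential += m[i] * m[j] / np.linalg.norm(q[i] - q[j])
    return kinetic - potential

# Two unit masses at unit separation, barycenter at the origin,
# with equal and opposite momenta (so the total momentum vanishes):
H = hamiltonian([1.0, 1.0],
                [[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]],
                [[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])
```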

abstract algebra - Sign of infinite permutations?

This is not an answer per se [Edit: OK, maybe it is! I was a little fuzzy on exactly what was being asked for when I wrote this, and in the past Martin has expressed unhappiness with responses which he feels have not answered his questions.] but it should be useful for those who are thinking about the problem (c.f. Kevin Buzzard's answer) to know the following classic result.



Theorem (Schreier-Ulam): The only nontrivial proper normal subgroups of $S_{\infty}$ are $\mathfrak{s}_{\infty} = \bigcup_{n \geq 1} S_n$ and $\mathfrak{a}_{\infty} = \bigcup_{n \geq 1} A_n$, i.e. the "little symmetric group" of all permutations which move only finitely many elements and its index two alternating subgroup.




Reference: J. Schreier and S. Ulam,
Über die Permutationsgruppe der natürlichen Zahlenfolge. Stud. Math. 4, 134-141 (1933).




Addendum: Certainly this theorem implies that any homomorphism from $S_{\infty}$ into a group $G$ which restricts to the sign homomorphism on $\mathfrak{s}_{\infty}$ must have kernel precisely equal to $\mathfrak{a}_{\infty}$. Whether this answers the question depends, I suppose, on how much you care about what the induced monomorphism $S_{\infty}/\mathfrak{a}_{\infty} \hookrightarrow G$ looks like.

gravity - Would Pluto keep an orbit without its moon?

Yes, a planetoid such as Pluto will be able to orbit no matter how small its mass, so long as its angular momentum is within a range determined by its distance from the sun. In the case of Pluto and its primary moon, Charon, however, things are even more interesting.



If Charon were to magically disappear (ejection would be another story), the point tracing out the line of Pluto's orbit at any one time would be very near the center of Pluto. There are other moons, but their gravitational tug is minimal. The orbital trace of the Pluto/Charon system, however, is not even inside Pluto.



Most planets have satellites that are very much less massive than they are, and thus the center of their systemic rotation is fairly close to the primary's center. The masses of Pluto and Charon, however, are significantly closer in size.




The Pluto–Charon system is noteworthy for being one of the Solar System's few binary systems, defined as those whose barycenter lies above the primary's surface . . . [108] This and the large size of Charon relative to Pluto has led some astronomers to call it a dwarf double planet.[109] . . . This also means that the rotation period of each is equal to the time it takes the entire system to rotate around its common center of gravity.[66]




Therefore, for practical purposes, with respect to the orbit of Pluto, it's far simpler to determine the trajectory of the Pluto/Charon barycenter -- which is completely above the surface of Pluto.



[Figure: An oblique view of the Pluto–Charon system showing that Pluto orbits a point outside itself. Pluto's orbit is shown in red and Charon's orbit is shown in green.]
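The "barycenter above the surface" claim is a one-line computation; the figures below are assumed round literature values, so treat this as an estimate:

```python
# Assumed round values from the literature:
m_pluto = 1.303e22     # kg
m_charon = 1.586e21    # kg
separation = 19570.0   # km, Pluto-Charon center-to-center distance
r_pluto = 1188.0       # km, Pluto's mean radius

# Distance from Pluto's center to the Pluto-Charon barycenter:
barycenter_km = separation * m_charon / (m_pluto + m_charon)

# The barycenter clears Pluto's surface by roughly 900 km:
outside = barycenter_km > r_pluto
```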

Thursday, 13 June 2013

ho.history overview - Mathematical Tables in Babbage's Library

See: M.R. Williams, "The Scientific Library of Charles Babbage," IEEE Annals of the History of Computing, vol. 3, no. 3, pp. 235-240, July-Sept. 1981, doi:10.1109/MAHC.1981.10028



Abstract: "In the early nineteenth century, Charles Babbage compiled a large scientific library that included many rare works. This paper describes the history, contents, and present location of the library. The contents are classified under twenty-one headings."

redshift - How does gravity affect the wavelength of light?

Suppose, hypothetically, that my rocket-powered flashlight and I are falling straight toward the center of a black hole. The flashlight is a few kilometers behind me in our travels toward the center of the black hole, but since it is rocket powered, it manages to maintain its exact distance to me for a while.



The point is: the distance between me and my flashlight is constant as long as I am observing it.



The photons coming from the flashlight would obviously not be rocket powered - and they would be affected by the black hole's gravitation.



Would the light I see from the flashlight be shifted towards red or blue, even though the distance between me and my dear flashlight is maintained?



If so, would switching the positions of me and my flashlight change the color I'd observe?



If we turn off the rocket on the flashlight, I assume it would be redshifted regardless of which were closer to the singularity, and the magnitude of redshift would appear to accelerate?

Wednesday, 12 June 2013

temperature - What is the difference between gas and dust in astronomy?

In astronomy, there is no formal definition of the threshold between gas and dust. Gas can be monoatomic, diatomic, or molecular (or made of photons, in principle). Molecules can be very large, and in principle, dust particles are just very large molecules. I've seen various authors use various definitions, ranging from $\sim100$ to $\sim1000$ atoms.



This is not to say that there isn't a distinct difference between molecules and dust. They have very different properties, but the transition between them is just not perfectly well-defined.



Gas, molecules, and dust can all be hot or cold, but if it gets too hot, larger particles are destroyed in collisions. So while a molecular cloud is typically very cold and consists of both gas and dust, dust tends to be destroyed (though not completely) in the $\mathrm{H\,II}$ regions around hot stars through collisions with other grains, sputtering due to collisions with ions, sublimation or evaporation, or even explosions due to ultraviolet radiation (see e.g. Greenberg 1976).



To answer your final question, I haven't heard the term "gas" used for dust particles, but metals$^\dagger$ and molecules can both be referred to as gas. For instance, $\mathrm{Mg\,II}$ gas is routinely used to detect distant galaxies, and molecular clouds contain $\mathrm{H}_2$ and $\mathrm{CO}$ gas.
In the interstellar medium, roughly half of the metals are in the gas phase, while the other half is in dust.



$^\dagger$"Metals" in the astronomical sense, i.e. all elements other than hydrogen and helium.

observatory - What is the current routine of modern astronomy?

What you say is not quite true: the search for exoplanets is clearly intensive, but it is far from the only thing astronomers are looking at. Most of the time, in two words, the situation is: resolution & wavelength. Whatever the field (galaxies, the interstellar medium, stars and so on), you want more resolution, to resolve smaller scales (most stars are still points even with our best telescopes, and we are still far from resolving individual stars in galaxies!) and to have more information, to better understand the underlying physics. You want more wavelength coverage, because spectroscopy, for example, gives you much more physical information than a single-wavelength observation. And combining both is sometimes challenging: high-resolution observations in the infrared are not that easy, and they can be crucial for some fields (if you ever want to see a star forming, you had better observe it in the infrared, since this baby is embedded in a gas cloud that shields its radiation very efficiently).



That being said, the routine tasks of an astronomer would be



  1. extract information from the current data. This involves a lot of coding, with Python, IDL, or more specific astronomy-oriented environments such as IRAF or MIDAS. Data reduction is an important part of the job, because it is in general challenging to extract data from the raw signal you get.

  2. write papers about these data and the inferred information

  3. read a lot of papers to stay tuned to the latest discoveries of other teams

  4. write proposals to ask for more observation time/better observations/bigger telescopes

  5. drink a lot of coffee

The first three points probably take an almost equal amount of time for any astronomer; point 4 takes even more time for older astronomers; point 5 is also crucial for all the good things that come out of discussions over a good ol' bowl of caffeine.



Complements:



To answer your comment and to give you an overview of current research, I can think of:



  • Herschel data in the infrared. People try to better understand the interstellar medium and the star formation processes in our galaxy, the formation of early galaxies, and the chemical composition and evolution of the Universe with these data.

  • Planck data at longer wavelengths. These data are useful for understanding the first ages of the Universe (searching for anisotropy in the CMB), but also for having another view of the galaxy and the interstellar medium at these wavelengths.

  • Very Large Telescope data. There are plenty of different kinds of data from these telescopes, mostly in the visible and infrared ranges, and mostly spectroscopy. Almost everything is studied with these data, from galaxy evolution to stars in nearby galaxies.

  • ALMA data in the millimeter/submillimeter ranges. The same kinds of objects are studied with ALMA as with Herschel: early galaxies, the interstellar medium, and molecular clouds. How do galaxies form and evolve? How do stars form? In which environments? What are the dominant processes in star formation?

  • HESS data, in the gamma-ray range. Gamma rays offer a window on the non-thermal Universe, i.e. all the extreme events occurring in the Universe. They can give precious information about gamma-ray bursts, supernovæ, AGN (active galactic nuclei), etc.

That's for the big projects (with a strong European bias, sorry folks, I know better what's done on this side of the ocean). You can add to that all the missions to study exoplanets (like Kepler), missions to study our solar system's planets (Cassini, Huygens, Messenger, Juno, all the Mars missions, etc.), plus all the other facilities around the world that study anything and everything, from stellar dynamics to planetary composition. The main problem is always to understand how a given structure (from the large-scale structures in the Universe to small-scale structures in the galaxy), object (from galaxies to satellites), or phenomenon occurs, forms, appears - to understand which physical processes dominate.



Astronomy is still craving data: the more data you have, the better your statistics will be, and hopefully the better your understanding will be too.

atmosphere - Shall we say now that Pluto is "larger than Mercury" ?

The implication of the question is that this extra 1000 miles should be added to Pluto's radius. The answer is no. For all of the solid planets, it's the solid surface (or solid-plus-liquid surface, in the case of the Earth) that counts, not the outer reaches of the atmosphere. The surface is a clear-cut, non-arbitrary boundary; an atmosphere can extend a long way out. A non-arbitrary boundary is always going to be preferred over an arbitrary one.



That's not possible in the case of the Sun and the giant planets, so a somewhat arbitrary boundary is needed for those bodies. That boundary explicitly excludes the upper reaches of the atmosphere, which extend out for many thousands of kilometers. For the giant planets, some use a pressure of one tenth of an Earth atmosphere to define the arbitrary "surface"; others, one atmosphere; yet others, ten atmospheres. That factor of 100 variation in pressure amounts to only about a hundred kilometers of altitude, which is not much considering how big the gas giants are.
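As a back-of-the-envelope check on the "about a hundred kilometers" figure: in a roughly isothermal atmosphere the pressure falls off as $e^{-h/H}$ with scale height $H$, so the 0.1-to-10-atmosphere range spans $H \ln 100$ in altitude. A minimal sketch (the ~20 km scale height is an assumed, Jupiter-like value, not a figure from the answer):

```python
import math

# In an isothermal atmosphere, pressure p(h) = p0 * exp(-h / H), where H is
# the scale height. The altitude spanned between the 10-atm and 0.1-atm
# levels is therefore H * ln(10 / 0.1) = H * ln(100).
H_km = 20  # assumed Jupiter-like scale height, in km (illustrative only)
delta_h = H_km * math.log(100)
print(round(delta_h))  # -> 92, i.e. roughly a hundred kilometers
```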

Tuesday, 11 June 2013

soft question - Work of ICM 2010 Fields medalists (and other prizewinners)

I found the "work profiles" by Julie Rehmeyer on the ICM website to be good for a very high level view. But I like the idea of getting many different perspectives.



I can only speak to Dan Spielman. His most widely known work is in two areas



(1) Smoothed Analysis of Algorithms: The simplex algorithm for linear programming is fast in practice, but slow in theory. This gap is bothersome to theorists. "Smoothed analysis" provides a new way of analyzing the running time of algorithms, by looking at how fast an algorithm runs on random perturbations of worst-case inputs. Spielman and Shang-Hua Teng proved that the simplex algorithm runs "fast in theory" in their framework. This created a degree of excitement and inspired much further work. My impression is that while smoothed analysis is a very nice model, the analysis is not necessarily easy to do, and hence worst-case analysis still predominates. But it is a useful tool to have for anomalous cases like the simplex algorithm.



(2) Fast Error-Correcting Codes: Low-Density Parity-Check (LDPC) codes have been known since the 1960s, but were rarely used because they were considered too computationally inefficient. In the mid-'90s new algorithms were discovered by Spielman and others which made these codes look more attractive. In particular, Spielman invented codes based on expander graphs and proved that they could be encoded and decoded particularly fast. In many cases these codes come close to achieving the theoretical bound for information transmission. This is a large and very active area of research in electrical engineering, and many of the advances came from that community. These codes are now considered competitive with the best and are widely used in practice.
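Spielman's expander codes are too involved for a snippet, but the parity-check and syndrome machinery that all such codes (LDPC codes included) build on can be illustrated with the tiny classical Hamming(7,4) code. This is a stand-in chosen for brevity, not an LDPC or expander code:

```python
# Toy parity-check decoding sketch with the Hamming(7,4) code. A word c is
# a codeword exactly when H @ c = 0 (mod 2), the same check LDPC decoders use.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(word):
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def correct_single_error(word):
    s = syndrome(word)
    if any(s):
        # For this H, the columns are the binary numbers 1..7, so the
        # syndrome read as binary gives the 1-based position of a flipped bit.
        pos = s[0] + 2 * s[1] + 4 * s[2]
        word = word[:]
        word[pos - 1] ^= 1
    return word

received = [1, 1, 1, 1, 0, 1, 1]  # the codeword 1111111 with bit 5 flipped
print(correct_single_error(received))  # -> [1, 1, 1, 1, 1, 1, 1]
```

Real LDPC decoders use very sparse parity-check matrices and iterative message passing rather than this single-error lookup, but the underlying check is the same.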

big bang theory - How can the universe be infinite?

I think the source of confusion between the two concepts - the Big Bang singularity and an infinite universe - is the misconception that the universe began as a finite expanse originally. This misconception easily arises from analogies using present-day logic and numbers that were not applicable in the early universe. For example, I've heard it said that shortly after the Big Bang, the entire observable universe was the size of a grapefruit, but that explanation neglects to mention that grapefruits would have been much larger then.



The problem is that space is where we can measure how large something is, but space expands, so something that is a certain distance away currently was a lot closer a long time ago, even if neither object has moved in the normal sense. As an analogy to help illustrate the effect:



You and I are standing on a preposterously large deflated balloon. You set down a meter stick, make a mark on the balloon at each end, and we each stand on one mark; we are now a meter apart. Then I turn on a pump and start inflating the balloon. As the balloon inflates, the surface stretches out and you and I appear to get farther from each other, even though we're not 'moving' (e.g. walking away from each other). Now we have conflicting sets of information to consider: according to the marks on the balloon surface we're still one meter apart, but according to the meter stick in your hand (which is not expanding) the distance is greater than that.
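The bookkeeping in the analogy can be made explicit: the painted marks are comoving coordinates, and the meter-stick reading is the proper distance, which scales with the scale factor $a(t)$. The numbers below are made up purely for illustration:

```python
# Comoving separation (the painted marks) never changes; the proper
# (meter-stick) distance is the comoving separation times the scale factor.
def proper_distance(comoving_sep, a):
    return comoving_sep * a

comoving_sep = 1.0  # the marks stay "1 balloon-unit" apart forever
for a in (1.0, 2.0, 5.0):
    print(a, proper_distance(comoving_sep, a))
# The marks still read 1 unit at every step; only the metric has grown.
```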



Note that while I called the balloon "preposterously large," it could have been infinitely large and still behave the same way. I point this out because I've seen in comments on other answers that you don't see how space could be both infinite and expanding - that if it's expanding, then it must have been previously finite. That is incorrect: in fact, because infinity is the quality of unboundedness, something that is infinitely large can always get bigger, because by definition there is no upper bound on its size.



Note also that if you recorded the earlier analogy in reverse, it would appear that space was shrinking such that a several-meters distance between us reduced over time to one meter. If you continue shrinking the universe in such a manner, it eventually becomes the case that there is zero distance between us. And if you apply that to a scenario where there are people infinitely distributed across the balloon, all of them would come closer together as the balloon deflated, until there was zero distance between any two people... in theory, at least, since real human beings have size. Energy and space don't have size, however, so at the point of the Big Bang, space was still infinite (since an infinite/unbounded space cannot shrink to become finite/bounded) but the distance between any two points in space was zero.



So if you could go back in time to the Big Bang you'd see an infinite ocean of energy, since all the energy was "shoulder-to-shoulder" (infinitely dense) but it rapidly expands (and therefore cools) to the point that basic particles can form, then later matter and molecules. Of course since your size would depend on the metric of space, it wouldn't necessarily look like space was expanding, but simply like the energy and matter were cooling down. In fact we still see this as an effect of spatial expansion in the redshift of light from distant sources: the light "cools down" or loses energy along the way because it is stretched out on its journey through space.
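The "stretching" of light can be put into numbers: wavelengths scale with the same factor $(1+z)$ as space itself, and photon energy drops by that factor. The emitted wavelength and redshift below are chosen for illustration, not taken from the answer:

```python
# Light emitted at redshift z arrives with its wavelength stretched by (1+z),
# and its photon energy (E = hc / wavelength) reduced by the same factor.
def observed_wavelength(emitted_nm, z):
    return emitted_nm * (1 + z)

def energy_ratio(z):
    return 1.0 / (1 + z)

# Green light (500 nm) emitted at z = 4 arrives in the infrared:
print(observed_wavelength(500, 4))  # -> 2500 (nm), i.e. 2.5 micrometres
print(energy_ratio(4))              # -> 0.2: the photon kept 20% of its energy
```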

soft question - Your favorite surprising connections in Mathematics

My favorite connection in mathematics (and an interesting application to physics) is a simple corollary from Hodge's decomposition theorem, which states:



On a (compact and smooth) Riemannian manifold $M$ with its Hodge-de Rham Laplace operator $\Delta$, the space of $p$-forms $\Omega^p$ can be written as the orthogonal sum (relative to the $L^2$ product) $$\Omega^p = \Delta \Omega^p \oplus \mathcal{H}^p = d\, \Omega^{p-1} \oplus \delta\, \Omega^{p+1} \oplus \mathcal{H}^p,$$ where $\mathcal{H}^p$ are the harmonic $p$-forms, and $\delta$ is the adjoint of the exterior derivative $d$ (i.e. $\delta = \text{(some sign)} \star d \star$, where $\star$ is the Hodge star operator).
(The theorem follows from the fact that $\Delta$ is a self-adjoint, elliptic differential operator of second order, and so it is Fredholm of index $0$.)



From this it is now easy to prove that every nontrivial de Rham cohomology class $[\omega] \in H^p$ has a unique harmonic representative $\gamma \in \mathcal{H}^p$ with $[\omega] = [\gamma]$. Please note the equivalence $$\Delta \gamma = 0 \Leftrightarrow d \gamma = 0 \wedge \delta \gamma = 0.$$
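For completeness, this equivalence follows from a one-line computation with the $L^2$ inner product on a compact manifold:

$$\langle \Delta \gamma, \gamma \rangle = \langle (d\delta + \delta d)\gamma, \gamma \rangle = \|\delta\gamma\|^2 + \|d\gamma\|^2,$$

so $\Delta\gamma = 0$ forces $d\gamma = 0$ and $\delta\gamma = 0$ simultaneously, while the converse is immediate from $\Delta = d\delta + \delta d$.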



Besides the fact that this statement yields easy proofs of Poincaré duality and more, it motivates an interesting viewpoint on electrodynamics:



Please be aware that from now on we consider the Lorentzian manifold $M = \mathbb{R}^4$ equipped with the Minkowski metric (so $M$ is neither compact nor Riemannian!). We are going to interpret $\mathbb{R}^4 = \mathbb{R} \times \mathbb{R}^3$ as a foliation into spacelike slices, with the first coordinate as a time function $t$. So every point $(t,p)$ is a position $p$ in space $\mathbb{R}^3$ at the time $t \in \mathbb{R}$. Consider the lifeline $L \simeq \mathbb{R}$ of an electron in spacetime. Because the electron occupies a position which can't be occupied by anything else, we can remove $L$ from the spacetime $M$.



Though the theorem of Hodge does not hold for Lorentzian manifolds in general, it holds for $M \setminus L \simeq \mathbb{R}^4 \setminus \mathbb{R}$. The only non-vanishing cohomology space is $H^2$, with dimension $1$ (this statement has nothing to do with the metric on this space, it's pure topology: we just cut out the lifeline of the electron!). And there is a harmonic generator $F \in \Omega^2$ of $H^2$ that solves $$\Delta F = 0 \Leftrightarrow dF = 0 \wedge \delta F = 0.$$ But we can write every $2$-form $F$ as a unique decomposition $$F = E + B \wedge dt.$$ If we interpret $E$ as the classical electric field and $B$ as the magnetic field, then $d F = 0$ is equivalent to the first two Maxwell equations and $\delta F = 0$ to the last two.
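Spelled out in classical vector notation (with one common choice of signs and orientations; conventions vary), the two halves are exactly the source-free Maxwell equations:

$$dF = 0 \quad\Longleftrightarrow\quad \nabla \cdot \vec{B} = 0 \ \text{ and } \ \nabla \times \vec{E} + \partial_t \vec{B} = 0,$$

$$\delta F = 0 \quad\Longleftrightarrow\quad \nabla \cdot \vec{E} = 0 \ \text{ and } \ \nabla \times \vec{B} - \partial_t \vec{E} = 0,$$

with sources appearing only on the removed lifeline $L$.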



So cutting out the lifeline of an electron gives you automagically the electro-magnetic field of the electron as a generator of the non-vanishing cohomology class.

Monday, 10 June 2013

fundamental astronomy - Red shifting galaxies

Assuming an open model of the universe and a long enough timeline, galaxies which appear "normal" today will become more redshifted over the course of observation. I put "normal" in quotes because there are no galaxies which are static relative to ours. Any galaxy in motion will have some "shift" to it, and the distant galaxies are all moving away from us.



It is possible that eventually the light from them could move into the infrared and even the microwave in extreme cases.

homological algebra - Is the first filtration Hausdorff?

Maybe this is too technical and elementary, but I cannot make up my mind, nor find a reference.



The situation is the following: let $X$ be a double cochain (right half-plane) complex of abelian groups and let



$$
(\mathbf{Tot}^{\prod} X)^n = \prod_{p+q = n} X^{pq}
$$



denote its total-product complex. The first filtration on $X$,



$$
{}_I F^s(X) =
\begin{cases}
X^{pq} & \text{if } p \geq s, \\
0 & \text{otherwise}
\end{cases}
$$



gives you the filtration on $\mathbf{Tot}^{\prod} X$:



$$
(F^s \mathbf{Tot}^{\prod} X)^n = \prod_{p+q=n,\ p\geq s} X^{pq}
$$



and you have, as with any filtered differential complex $(A, F, d)$, an induced filtration in cohomology:



$$
F^p HA = \operatorname{im} (H F^p A \longrightarrow HA).
$$



My question is the following: is the filtration induced by ${}_I F$ on $H(\mathbf{Tot}^{\prod} X)$ Hausdorff? That is,



$$
\bigcap_p F^p H(\mathbf{Tot}^{\prod} X) = 0\,?
$$



I couldn't find an answer in the literature. Weibel's book says that in this situation the spectral sequence arising from the first filtration is "convergent". Unfortunately, for Weibel this only means that you have an isomorphism $E_0 HA = E_\infty A$. Cartan and Eilenberg's "Homological Algebra" doesn't work with the total-product complex, but with the total-sum one:



$$
(\mathbf{Tot}^{\bigoplus} X)^n = \bigoplus_{p+q = n} X^{pq}
$$



For this one, I think, the answer is "yes": if I had some cohomology class



$$
[x] \in \bigcap_p F^p H^n(\mathbf{Tot}^{\bigoplus} X) \quad \Longleftrightarrow \quad [x] \in F^p H^n(\mathbf{Tot}^{\bigoplus} X) \ \text{for all } p
$$



then I could find representatives for $[x]$ like



$$
(0, \stackrel{p-1}{\dots}, 0, x^{p,n-p}, x^{p+1, n-p-1}, \dots)
$$



for all $p \geq 0$. Since there are only finitely many $x^{pq} \neq 0$ in each element of $(\mathbf{Tot}^{\bigoplus} X)^n$, in a finite number of steps I can be sure to find a representative for $[x]$ which is zero, so $[x]=0$.



But this reasoning doesn't work with $\mathbf{Tot}^{\prod} X$: you can only state with certainty that for every $p$ there are some $x_p \in F^p$ and $b_p$ such that



$$
x - x_p = d b_p \quad \Longleftrightarrow \quad x - d b_p \in F^p.
$$



These equations have a nice interpretation: if you topologize $\mathbf{Tot}^{\prod} X$ by taking as basic open sets $x + F^p$, for all $x \in \mathbf{Tot}^{\prod} X$ and all $p$, they read:



$$
(d b_p) \longrightarrow x.
$$



That is, $x$ is a limit of coboundaries. But, unless the filtration is finite, this doesn't imply that $x$ itself is a coboundary, does it?



So I could ask my question this way: in this situation, is the set of coboundaries closed?

gr.group theory - Characterising extendable automorphisms

The papers in the accepted answer by @SixWingedSeraph all refer to a somewhat more specific problem, that of extending an automorphism of a normal subgroup to a larger group. Although the questioner says something about a group extension (which would usually imply normality of $H$), he did not specify that $H$ should be normal.



Extending automorphisms of normal subgroups
The Wells paper "Automorphisms of group extensions" is the first paper I know of on this topic. In the situation where $N$ is a normal subgroup of $G$, with $\alpha : G/N \rightarrow \operatorname{Out}(N)$ the action of the quotient on $N$, Wells finds an exact sequence
$$1 \rightarrow Z^1_\alpha(G/N, Z(N)) \rightarrow \operatorname{Aut}(G; N) \rightarrow \operatorname{Compat}(G/N; N) \rightarrow H^2_\alpha(G/N, Z(N)).$$
Here $\operatorname{Aut}(G; N)$ is the group of all automorphisms of $G$ fixing $N$, and $\operatorname{Compat}(G/N; N)$ is the set of all pairs of automorphisms of the quotient $G/N$ and of the subgroup $N$ satisfying a certain compatibility condition. $Z^1$ and $H^2$ are the first cocycle space and the second group cohomology, respectively. It should be noted that the last map is not a group homomorphism.



I haven't managed to get my hands on the Robinson paper "Applications of cohomology to the theory of groups", but I understand it has some explanation and applications of Wells' result, as well as a new proof. The Jin paper "Automorphisms of groups" restates the result of Wells in somewhat different language, and applies it to the case where one wants to find an automorphism of $N$ which acts trivially on $G/N$.



I'll remark that one interesting case is where $N$ is a direct product of isomorphic finite simple groups (e.g., a nonabelian chief factor). Here $Z(N)$ is trivial, which makes the exact sequence above collapse rather nicely. But in this case one can obtain similar results by using the fact that if $N$ is center-free, then any homomorphism $\alpha : Q \rightarrow \operatorname{Out}(N)$ uniquely determines an extension of $Q$ by $N$. This is however only mildly more elementary, as proving this fact also requires group cohomology techniques -- see chapter IV.6 of Ken Brown's book "Cohomology of Groups".



Extending automorphisms of arbitrary subgroups
Schupp shows in "A characterization of inner automorphisms" that an automorphism $\varphi$ of a group $H$ extends to an automorphism in every group $G$ with $H$ embedded in $G$ if and only if $\varphi$ is inner (i.e., obtained via conjugation by some element of $H$). Pettet showed that the same holds if we restrict $G$ to be finite, in "On inner automorphisms of finite groups" and "Characterizing inner automorphisms of groups".
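The easy direction (an inner automorphism always extends, by conjugating in the bigger group) can be checked concretely. Here is a small sketch with $S_3$ embedded in $S_4$ as the permutations fixing a point; the group and the element $g$ are arbitrary choices for illustration:

```python
from itertools import permutations

# Permutations of {0,1,2,3} as tuples; composition is (p ∘ q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S4 = list(permutations(range(4)))
# Embed S3 in S4 as the permutations fixing the point 3.
S3 = [p for p in S4 if p[3] == 3]

g = (1, 2, 0, 3)  # an element of the embedded S3
conj = lambda x: compose(compose(g, x), inverse(g))  # x -> g x g^{-1}

# conj restricted to S3 is an inner automorphism of S3, and the very same
# formula, applied to all of S4, extends it to an automorphism of S4.
assert all(conj(x) in S3 for x in S3)                  # restricts to S3
assert sorted(map(conj, S4)) == sorted(S4)             # bijection on S4
assert all(conj(compose(x, y)) == compose(conj(x), conj(y))
           for x in S4 for y in S4)                    # homomorphism
print("conjugation by g extends the inner automorphism of S3 to S4")
```

The hard direction of the Schupp/Pettet results, that only inner automorphisms extend to every overgroup, is of course where the actual content lies.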



So I guess the overall answer to the original question is "sometimes not". If you have a specific situation which is still of interest, you might try looking at the second Pettet paper to see if it helps you show the automorphism extends or does not extend. It looks like it would be a hard problem to find a general characterization.

monoidal categories - Is there a relative version of Tannakian reconstruction?

Akhil,



Let $\mathcal{C}$ be a tensor category, and let $(A,\mu) \in \mathcal{C}\text{-Alg}$ be an algebra in $\mathcal{C}$. So $\mu : A \otimes A \to A$ is a morphism in $\mathcal{C}$, and $\otimes$ here means the $\otimes$ in $\mathcal{C}$ (of course there isn't another one around at this point, but I mean to emphasize it's not just the $\otimes$ of Vect).



In this context, it makes sense to talk about $A$-modules in $\mathcal{C}$, whose definition you can guess. These form a $k$-linear abelian category $D$ with a forgetful functor to $\mathcal{C}$ which forgets the $A$-action.



Now if $\mathcal{C}$ has a fiber functor $F$, then $\mathcal{C}$ is realized as the category of modules over the Hopf algebra $H = \operatorname{End}(F)$, as you said (well, $\operatorname{End}(F)^{op}$ I think, but never mind). The algebra $A$ can be pushed forward by $F$ to an ordinary algebra $F(A)$ in Vect. However, $D$ is not the category of $F(A)$-modules; a well-known proposition tells us that $D$ is the category of $A \rtimes H$-modules, the semi-direct product you asked about.



Notice that no part of the discussion so far required any symmetric structure on $\mathcal{C}$, and also $A$ is only an algebra. To define a bialgebra in $\mathcal{C}$, however, one needs $\mathcal{C}$ to be braided, because the compatibility between $\Delta$ and $\mu$ uses the braiding. In your case the braiding is just the symmetry. I never worked this out in detail, but I imagine that if $A$ is actually a bialgebra in $\mathcal{C}$, then $D$ is endowed with a monoidal structure, and $D$ is the category of $A \rtimes H$-modules, where $A \rtimes H$ is a bialgebra. Likewise, if $A$ is Hopf in $\mathcal{C}$, then $D$ is tensor and $A \rtimes H$ is a Hopf algebra, and $D \cong A \rtimes H$-mod.




Thus ends the part where I'm pretty sure I'm not saying anything too incorrect. Below I will try to answer your actual question. I would not trust it though until somebody smarter agrees with it.




So now your question becomes (let's revert to considering algebras at the top rather than Hopf algebras, since it should be clear how to extend): given a functor $F : D \to \mathcal{C}$, when is $F$ the forgetful functor corresponding to some algebra $A \in \mathcal{C}$? I think for this it will be enough to assume (in addition, of course, to assuming that $F$ is faithful and exact) that $D$ has a projective generator $M$ (although maybe this is guaranteed by a lesser assumption?). This is definitely necessary in order to realize $D$ as some category of modules over a ring, as you desire, and I imagine that you then let $A = \underline{\operatorname{Hom}}(M,M)$ (meaning $\mathcal{C}$-internal homs, which are distinct from $\operatorname{Hom}_{\mathcal{C}}(M,M)$!), which will be an algebra in $\mathcal{C}$, and you can plug into the above.



You should definitely read Ostrik's http://arxiv.org/abs/math/0111139 and other papers by Etingof, Nikshych, and Ostrik about fusion and finite tensor categories.

Sunday, 9 June 2013

star - What is the most oblate astronomical object known?

Saturn is the most oblate planet in the solar system. If the equatorial diameter is $a$ and the polar diameter is $b$ then its oblateness, $(a-b)/a = 0.1$.



We do not know the oblateness of more than one or two exoplanets, and even those values are somewhat uncertain, but they are thought to be lower than Saturn's. For example: http://adsabs.harvard.edu/abs/2010ApJ...709.1219C



The oblateness of a star depends on the ratio of the centrifugal acceleration at its equator to its gravitational acceleration. So the most oblate stars ought to be the ones with big radii and fast rotation rates (i.e. types of stars that would have low surface gravities if non-rotating).



Giant stars come in two basic types. The first is the red giants, which are evolved intermediate- and low-mass stars. These tend to be slowly rotating, though there is a class called FK Com variables which are fast-rotating red giants.



The other class is the blue giants, which are high mass main sequence, or slightly evolved, stars.



For maximum oblateness you want the star with the highest value of
$$ f = \frac{R \omega^2}{GM/R^2} = \frac{R^3 \omega^2}{GM},$$
where $M$ is the stellar mass and $\omega$ its angular velocity. Often we don't know $\omega$, but we can estimate the rotation velocity at the equator, $v = R\omega$. Hence
$$ f = \frac{R^3 v^2}{GMR^2} = \frac{Rv^2}{GM}.$$



A possible record holder is the O-type star VFTS 102 in the Large Magellanic Cloud.
This probably has a mass of $25M_{\odot}$, a radius of about $10R_{\odot}$, and a measured rotation speed of $\geq 600$ km/s. Thus its value of $f$ is about 0.75, compared with a value of $f=0.15$ for Saturn.



EDIT: Prompted by Michael B: what about "millisecond" pulsars? The shortest periods are about 1.4 ms, the radii about 10 km, and the masses likely about $1.4M_{\odot}$. Do the sums and you find $f \simeq 0.1$. So (at least in Newtonian mechanics) they should be no more oblate than Saturn.



A further contender might be the dwarf planet Haumea, which orbits at 40-50 au from the Sun. It is thought to be large enough to have achieved its shape via hydrostatic equilibrium between gravity, internal pressure and rotation. Its mass is $4\times10^{21}$ kg, its (mean) radius is 700 km, and its rotation period is 3.91 hours. These numbers give $f=0.25$. Using this, and matching its light curve, it has been modelled as a triaxial ellipsoid (rather than an oblate spheroid) with axis ratios 2 : 1.5 : 1 (http://adsabs.harvard.edu/abs/2006ApJ...639.1238R).
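The three estimates of $f$ above can be reproduced directly from $f = Rv^2/GM = R^3\omega^2/GM$; the input numbers below are the rough values quoted in the answer:

```python
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_sun, R_sun = 1.989e30, 6.957e8   # kg, m

def f_from_v(R, M, v):
    """Oblateness parameter when the equatorial rotation speed v is known."""
    return R * v**2 / (G * M)

def f_from_period(R, M, P):
    """Oblateness parameter when the rotation period P is known."""
    w = 2 * math.pi / P
    return R**3 * w**2 / (G * M)

# VFTS 102: 25 M_sun, ~10 R_sun, v >= 600 km/s
print(round(f_from_v(10 * R_sun, 25 * M_sun, 6e5), 2))       # ~0.75
# Millisecond pulsar: 1.4 M_sun, 10 km, P = 1.4 ms
print(round(f_from_period(1e4, 1.4 * M_sun, 1.4e-3), 2))     # ~0.11
# Haumea: 4e21 kg, 700 km, P = 3.91 h
print(round(f_from_period(7e5, 4e21, 3.91 * 3600), 2))       # ~0.26
```

All three agree with the quoted values to the precision of the rough inputs.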

Saturday, 8 June 2013

lo.logic - Can we recognize when a category is equivalent to the category of models of a first order theory?

The categories of models with elementary embeddings are accessible categories. (The cardinal κ is related to the size of the language via Löwenheim-Skolem; the κ-presentable, a.k.a. κ-compact, objects are the models of size less than κ.) Michael Makkai and Bob Paré originally described this idea in Accessible categories: the foundations of categorical model theory (Contemporary Mathematics 104, AMS, 1989). Still more can be found in later works such as Adámek and Rosický, Locally presentable and accessible categories (LMS Lecture Notes 189, CUP, 1994).



More generally, abstract elementary classes can also be viewed as accessible categories. Thus accessible categories include categories of models of infinitary theories, theories with generalized quantifiers, etc. In fact, accessible categories can always be attached to such structures, but I don't know the exact characterization of the categories that arise from models of theories of first-order logic. The Yoneda embedding can sometimes be used to attach first-order models to accessible categories, such as when the accessible category is strongly categorical (Rosický, Accessible categories, saturation and categoricity, JSL 62, 1997). On the other hand, you can reformulate a lot of model theoretic concepts in general accessible categories. There are more than a few kinks along the way and not all of it has been done, but the more I learn the more I find that this is actually a very interesting and powerful way to approach model theory.




Let me try to explain the situation in greater detail. I guess the correspondences are best explained in terms of sketches. (This nLab page needs expansion; Adámek and Rosický give a nice account of sketches; another account can be found in Barr and Wells.) A sketch asserts the existence of certain limits and colimits, or just limits in the case of a limit sketch; taken together, these assertions can be formulated as a sentence in L∞,∞ (sketchy details below). Like such sentences, every sketch S has a category Mod(S) of models. Sketches and accessible categories go hand in hand.



  • If S is a sketch, then Mod(S) is an accessible category, and every accessible category is equivalent to the category of models of a sketch.


  • If S is a limit sketch, then Mod(S) is a locally presentable category, and every locally presentable category is equivalent to the category of models of a limit sketch.


When translated into L∞,∞, a limit sketch becomes a theory with axioms of the form



$\forall\bar{x}\,(\phi(\bar{x})\to\exists!\bar{y}\,\psi(\bar{x},\bar{y})),$



where $\phi$ and $\psi$ are conjunctions of atomic formulas (and the variable lists $\bar{x}$ and $\bar{y}$ can be infinite). When the category is locally finitely presentable, these axioms can be stated in Lω,ω. Theories with axioms of this type are essentially characterized by the fact that Mod(T) has finite limits.



  • If T is a theory in Lω,ω such that Mod(T) is closed under finite limits (computed in Mod(∅)), then Mod(T) is a locally finitely presentable category (and hence finitely accessible).


  • Every locally finitely presentable category is equivalent to a category Mod(T) where T is a limit theory in Lω,ω (i.e. with axioms as described above).


It is natural to conjecture that this equivalence continues to hold when ω is replaced by ∞. What Adámek and Rosický have shown in A remark on accessible and axiomatizable theories (Comment. Math. Univ. Carolin. 37, 1996) is that for a complete category, being equivalent to a (complete) category of models of a sentence in L∞,∞ and being accessible are equivalent, provided that Vopěnka's Principle holds. In fact, this equivalence is itself equivalent to Vopěnka's Principle. (It is apparently unknown whether accessible can be strengthened to locally presentable.)



Now, if T is a sentence in L∞,∞, then the category Elem(T) (models of T under elementary embeddings) is always an accessible category. The category Mod(T) is unfortunately not necessarily accessible. When translated into L∞,∞, sketches become sentences of a special form. A formula in L∞,∞ is positive existential if it has the form



$\bigvee_{i \in I} \exists\bar{y}_i\, \phi_i(\bar{x},\bar{y}_i)$



where each $\phi_i$ is a conjunction of atomic formulas. A basic sentence in L∞,∞ is a conjunction of sentences of the form



$\forall\bar{x}\,(\phi(\bar{x})\to\psi(\bar{x}))$



where $\phi$ and $\psi$ are positive existential formulas.



  • A category is accessible if and only if it is equivalent to a category Mod(T) where T is a basic sentence in L∞,∞.

It would be great if one could simply replace accessible by finitely accessible and sentence in L∞,∞ by theory in Lω,ω, as in the locally presentable case above. Unfortunately, this is simply not true. The category of models of the basic sentence $\forall x\,\exists y\,(x \mathrel{E} y)$ in the language of graphs is accessible but not finitely accessible. A counterexample in the other direction is the category of models of $\bigvee_{n<\omega} f^{n+1}(a) = f^n(a)$, which is finitely accessible but not axiomatizable in Lω,ω.

Friday, 7 June 2013

ag.algebraic geometry - Elkies' supersingularity theorem in higher dimension

If I understand your question correctly, the "easy" version of the question you ask is unknown in dimension $\ge 3$, and is basically easy for surfaces of interest.



These questions are certainly unknown in dimension $\ge 3$, or else the Sato-Tate conjecture for weight $>3$ would have been known $12$ months ago, rather than $4$ months ago.



I take it you are only asking that the action of Frobenius is neither almost always trivial ($T(p) = 0$) nor acting as the identity ($T(p) = 1$), which is much weaker than asking that there exist any supersingular primes.
Assume otherwise. Fix a prime $\ell$, and let $V$ be the $\ell$-adic étale cohomology
$H^2(X)$ with the usual action of $\mathrm{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$. Let $F_p(T)$ denote the characteristic polynomial of Frobenius; it has coefficients in $\mathbf{Z}$. Say one wants to show that there exist infinitely many primes $p$ such that $T(p) \ne 0$. You are imposing the condition that $F_p(T) \equiv T^n \bmod p$ for all but finitely many $p$, where $n = \dim(V)$.



Choose a prime $L > 2n$. There will be a positive density of primes such that $F_p(T) \equiv (T-1)^n \bmod L$.
If $p \equiv 1$ modulo $L$, these conditions together imply that the trace of $F_p(T)$, which is a priori an integer, is $np \bmod pL$. By the Weil conjectures, the roots of $F_p(T)$ have absolute value $p$, and thus, since $L > 2n$, they must all equal $p$. By Chebotarev density, it follows that a finite index subgroup of the Galois group acts (on the semisimplification of $V$) via the cyclotomic character on $V$. By looking at Hodge-Tate weights, this implies that $h^{2,0} = h^{0,2} = 0$, and so (for example) one has a contradiction for K3 surfaces.



If one wants to rule out that $T(p) = 1$ for (almost all) $p$, one is imposing the condition that $F_p(T) \equiv (T-1)^n \bmod p$. In the same way one obtains an open subgroup of the Galois group on which the trace of Frobenius is always $n$, and then one deduces that $h^{0,2} > 0$ and $h^{2,0} = 0$, which can never happen.



EDIT: Forgot to mention that it was Ogus who proved in the '70s that abelian surfaces have infinitely many primes of ordinary reduction, presumably by a very similar argument. Well, except for the comparison theorem of Faltings used above...