Wednesday, 31 July 2013

what is the tensor product of two anti-commutative DG-algebras?

The correct setting for differential graded vector spaces is as follows. Recall first the category of $\mathbb Z$-graded vector spaces. As a category this consists of functors from the set $\mathbb Z$ (thought of as a category with no morphisms) to $\operatorname{Vect}$, i.e. objects are sequences of vector spaces, and morphisms are sequences of linear maps. (There are variations: one can insist that the vector spaces be trivial except for finitely many of them, for example, or that the non-trivial vector spaces be finite-dimensional.) As a monoidal category, the tensor structure adds degree. Only when you introduce the braiding does the category become interesting: we will take the "Koszul" braiding, so that classically odd-degree terms "anticommute". This braiding is symmetric.



Within the symmetric monoidal category $\mathbb Z\text{-Vect}$ of $\mathbb Z$-graded vector spaces there is a special Lie algebra, which is the unique (necessarily commutative) Lie algebra structure on the $\mathbb Z$-graded vector space with one dimension in degree $1$ and all other degrees trivial. (The only bracket is the $0$ one because, by construction, the bracket must add degree, and so must land in the degree-two part, which is zero-dimensional.) Suggestively calling this Lie algebra $\mathfrak{d\!g}$, a differential graded vector space is nothing more nor less than a $\mathfrak{d\!g}$-module (in $\mathbb Z\text{-Vect}$).



Let $\mathfrak{d\!g}\text{-mod}$ denote the category of representations of $\mathfrak{d\!g}$. It is a symmetric monoidal category on account of being the representation theory of a Lie algebra: the symmetric monoidal structure is inherited from $\mathbb Z\text{-Vect}$, so in particular there is the Koszul rule. A differential graded algebra is an algebra object in this category, and it is "anticommutative" in the classical sense if it is commutative in the categorical sense: the symmetric structure (the Koszul rule) determines for any two $\mathfrak{d\!g}$-modules $A,B$ a canonical isomorphism $\text{flip}_{A,B}: A\otimes B \to B\otimes A$, and an algebra $m_A: A\otimes A \to A$ is commutative if $m_A = m_A \circ \text{flip}_{A,A}$.



Given two algebras $(A,m_A),(B,m_B)$ in any symmetric monoidal category, their tensor product is the algebra structure on $A\otimes B$ given by: $$m_{A\otimes B} = (m_A \otimes m_B) \circ (\text{id}_A \otimes \text{flip}_{B,A} \otimes \text{id}_B) : A \otimes B \otimes A \otimes B \to A\otimes B$$ If $A,B$ are both commutative, so is $A\otimes B$.
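Unpacking the Koszul braiding on homogeneous elements (a standard elementwise restatement, not spelled out above), this multiplication is the familiar sign rule
$$(a\otimes b)\cdot(a'\otimes b') \;=\; (-1)^{|b|\,|a'|}\,(a a')\otimes(b b'),$$
for homogeneous $a,a'$ in $A$ and $b,b'$ in $B$ of degrees $|a|,|a'|,|b|,|b'|$.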



This categorical mumbo-jumbo exactly recovers the multiplication that you are looking for. I hope also that it illustrates that it is very naturally part of a larger story, and does not come out of the blue.

Date and time system on moons

I really love this question. Were we to establish a colony on our moon, we would probably want a calendar/time system which actually corresponds to physical patterns and cycles on the moon, but also something which is easy to track and keep in sync with Earth, since the two civilizations would certainly be in close contact.



In a way, we already do something like this here on Earth with time zones, though time zones are just offsets within the same second/minute/hour/day system, so people throughout the world are in different times, but they stay in sync. But beyond that, our entire calendar system (we currently use the Gregorian Calendar) is mostly based on our own planet's position relative to the sun. It just wouldn't make sense on the moon. Earth days on the moon would just be arbitrary 24-hour blocks of time; they wouldn't correlate at all to whether it's light or dark outside.



So, if we really wanted to keep some logical synchronization between earth and moon calendars, we would probably want to adjust here on earth to a lunar-based calendar. Most older civilizations did this, using a full lunar cycle to indicate a "month." But here's the (cool) thing: the moon is a little lopsided and, because of the way gravity works, the exact same side is always facing us. We never see the back of the moon. So, it's rotating at exactly the same speed as it's revolving around us, meaning a "day" on the moon lasts one full lunar cycle: just over 29 earth days. So even the idea of a day, how long a day is, and what you could accomplish in a day is so far from Earth's that it's hard to even compare the two.



Maybe one idea is to have weeks or months here on Earth which would line up with days on the moon. Earth would still have its 24-hour days and the Moon would have its 709-hour days, but at least we could predict Moon time using our own calendars. One proposal is the Hermetic Lunar Week Calendar, which continually adjusts the length of a week to keep the months in sync with the moon, so that a full month is always one lunar cycle (or lunar day). It would be weird, but in a future where humans are legitimately split between Earth and Moon, it would probably make sense for both to keep a calendar like that, which takes into account the length of a day on both bodies.
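As a rough illustration of where the "709-hour day" comes from and how Earth time would map onto lunar days, here is a minimal sketch (my own, not part of the original post); the 29.530589-day mean synodic month is an assumed constant:

```python
# A minimal sketch: the Moon's "day" (one light/dark cycle) is one synodic month.
SYNODIC_MONTH_DAYS = 29.530589              # assumed mean synodic month, in Earth days
LUNAR_DAY_HOURS = SYNODIC_MONTH_DAYS * 24   # ~708.7 hours, i.e. the "709-hour day"

def earth_hours_to_lunar_days(hours):
    """How many lunar light/dark cycles elapse in the given number of Earth hours."""
    return hours / LUNAR_DAY_HOURS

print(round(LUNAR_DAY_HOURS, 1))                       # 708.7
print(round(earth_hours_to_lunar_days(24 * 365), 1))   # ~12.4 lunar days per Earth year
```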

Tuesday, 30 July 2013

cosmological horizon - How can there be anything "beyond" the CMB?


Yet we also have a third axiom:



  3. There are some parts of the universe we can never observe because they are receding away from us at a superluminal speed.



This is simply false, so of course it gets you in trouble if you insist on taking it as axiomatic. The recession velocity becomes superluminal at redshift $z\approx 1.4$, while there are observed galaxies and objects at $z\sim 8$, e.g., the dwarf galaxy z8_GND_5296 and the gamma-ray burst GRB 090423. Additionally, there are at least some candidates for objects much farther than even that, possibly $z\sim 12$.



That means that, other than the CMB itself, the most distant and most ancient object we might have observed hails from when the universe was around $370\,\mathrm{Myr}$ old, which is around a thousand times older than the recombination epoch, when the universe first became transparent and hence when the cosmic background radiation was emitted. In terms of redshift, the CMB has $z\gtrsim 1100$ or so.




But this surely also implies that we can see beyond the CMB if we see anything which has a red shift indicating an expansion speed very close to $c$.




Something at redshift $z\gtrsim 1100$ has a recession velocity of $v\gtrsim 3.2c$.
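To see where numbers like these come from, here is a rough sketch (mine, not the original answerer's) computing the proper recession velocity $v(z) = H_0 D_C(z)$ in a flat matter-plus-Lambda model. The cosmological parameters are assumed Planck-like values, and radiation is neglected, so the $z\sim 1100$ figure is only approximate:

```python
import numpy as np
from scipy.integrate import quad

c_km_s = 299_792.458
H0 = 67.7              # km/s/Mpc (assumed)
Om, OL = 0.31, 0.69    # assumed flat matter + Lambda model, radiation neglected

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + OL)

def recession_velocity(z):
    # comoving distance D_C = c * integral_0^z dz'/H(z'), then v = H0 * D_C
    D_C, _ = quad(lambda zp: c_km_s / H(zp), 0.0, z)
    return H0 * D_C

for z in (1.4, 8.0, 1100.0):
    print(z, recession_velocity(z) / c_km_s)   # recession velocity in units of c
```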




This suggests axiom 2 is incorrect - so what should it be instead?




Since it is (3) that says incorrect things, it's best to throw that one out instead.

lie groups - How to construct/characterize "Thermal" sections ?

There were errors in the way I framed the question last time. So doing a major revision this time.




Consider $SU(2)$ as a homogeneous space $SU(2)\times SU(2)/SU(2)$ and sections of this principal bundle $x \mapsto (g_l(x),g_r(x))$ which are compatible with the projection map $(g_l(x),g_r(x)) \mapsto g_lg_r^{-1} = x$. Hence the diagonal action on any section by a map from $SU(2)$ to itself (a "gauge function") is compatible with this projection.



Now consider two elements $A$ and $B$ of $SU(2)$ which act on the base $SU(2)$ as $x \mapsto AxB^{-1}$. With respect to this action, a section of the bundle will be called "thermal" (there are physics motivations) if



$$\sigma(AxB^{-1}) = (A,B).\sigma(x)$$



So the condition of being a thermal section seems to be a gauge-invariant constraint if one restricts to gauges which have the symmetry $h(x)=h(AxB^{-1})$. The gauge map acts as $\sigma(x) \mapsto \sigma(x).(h(x),h(x))$.
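To spell out why the constraint is invariant under such restricted gauge functions (a one-line check, not in the original text): if $\sigma$ is thermal and $h(AxB^{-1})=h(x)$, then the transformed section $\sigma'(x)=\sigma(x).(h(x),h(x))$ satisfies
$$\sigma'(AxB^{-1}) = \sigma(AxB^{-1}).(h(AxB^{-1}),h(AxB^{-1})) = (A,B).\sigma(x).(h(x),h(x)) = (A,B).\sigma'(x),$$
so $\sigma'$ is again thermal.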



(All the $.$'s are the standard group multiplication in $SU(2)\times SU(2)$.)



And by the first criterion of what is a valid section, all sections can be gauge transformed into one another, since any section $x \mapsto (g_l(x),g_r(x))$ is gauge equivalent to the "canonical section" $(I,x^{-1})$ by the gauge function $g_l(x):SU(2)\to SU(2)$, since $g_l(x)g_r(x)^{-1}=x$.



Is there a known mathematical concept equivalent to this?
Like any mechanism by which, given an $A$ and $B$ and a homogeneous space $G/H$, one would be able to manufacture "thermal" sections for it?



For the specific homogeneous space given, it also happens that the push-forward of the standard basis of the tangent space at the identity of $SU(2)$ ("Pauli matrices") by the thermal section gives a vielbein for the standard metric on $SU(2)$!



How generic is this?

ag.algebraic geometry - Is there an example of a variety over the complex numbers with no embedding into a smooth variety?

A really neat, well-known example is as follows:



Choose a conic $C_1$ and a tangent line $C_2$ in $\mathbb{P}^2$ and associate to a point $P$ on $C_1$ the point of intersection $Q$ of $C_2$ and the tangent line to $C_1$ at $P$. This gives a birational isomorphism from $C_1$ to $C_2$. Identify the curves by this map to get the quotient variety $\phi:\mathbb{P}^2\rightarrow X$ with $C:=\phi(C_1)$. Now if there were an embedding of $X$ in a smooth scheme, then there would surely exist an effective line bundle on $X$, say $L$, whose pullback to $\mathbb{P}^2$ would obviously be effective. Let us see why this is a contradiction. Let $L'$ be the pullback of $L$ to $\mathbb{P}^2$. Note that the degrees of $L'|_{C_1}$ and $L'|_{C_2}$ both coincide with the degree of $L|_C$ and are therefore equal. But $L'\cong\mathcal{O}(k)$, and therefore the degrees in question are $2k$ and $k$ respectively for $C_1$ and $C_2$. Therefore $k=0$ and $L'\cong\mathcal{O}$, which is non-effective! A contradiction!



In view of VA's comment, I give a complete proof here for constructing $X$ as a scheme.



In our special case it is a trivial pushout construction: here I am thinking of $Y$ as $C_1\amalg C_2$, $Y'=C$ (the quotient by the birational isomorphism above), and $Z=\mathbb{P}^2$, but the argument is more general provided any finite set of closed points in $Z$ is contained in an affine open set. $X$ will denote the quotient.



Claim: Suppose $j:Y\rightarrow Z$ is a closed subscheme of a scheme $Z$, and $g:Y\rightarrow Y'$ is a finite surjective morphism which induces monomorphisms on coordinate rings. Then there is a unique commutative diagram (drawn as a square below): $Y\xrightarrow{j} Z$, $Y\xrightarrow{g} Y'$, $Y'\rightarrow X$, $Z\xrightarrow{h} X$, where $X$ is a scheme, $h$ is finite and induces monomorphisms on coordinate rings, and $Y'\rightarrow X$ is a closed immersion.
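For readability, the square described in the claim can be drawn as follows (the four arrows are exactly the maps listed above):
$$\begin{array}{ccc}
Y & \xrightarrow{\;j\;} & Z \\
{\scriptstyle g}\downarrow & & \downarrow{\scriptstyle h} \\
Y' & \longrightarrow & X
\end{array}$$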



Proof: First assume that $Z$ is affine, in which case $Y,Y'$ are both affine too. Let $A,A/I,B$ be their respective coordinate rings. Then $B\subset A/I$ in a natural way. Let's use $j$ again to denote the natural map $A\rightarrow A/I$. Put $A'=j^{-1}(B)$ and $Spec(A')=X$. The claim is clear for this $X$. Also, if $Z$ is replaced by an open subset $U$ such that $g^{-1}g(U\cap Y)=U\cap Y$, then $X$ would be replaced by $U'=h(U)$, which is an open subset.



Now this guarantees the existence of $X$ once it has been shown that $Z$ can be covered by affine open subsets $U$ such that $g^{-1}g(U\cap Y)=U\cap Y$. But this is obvious in our example. For our example it is also clear from the construction of $X$ that it is actually reduced and irreducible. QED.



I hope this is satisfactory.

Monday, 29 July 2013

Why would a star’s position in the sky change relative to another star right next to it?

The answer is gravity.



Stars have a lot of mass, and therefore their gravity is strong.



If one star gets close to another star that has slightly more mass, the first star gets pulled in. As a result, its orbit would change.



Edit



This does not affect all stars, because of the distances between them. Usually, when it does affect stars, it is in binary systems.



Thanks to Joan.bdm for this.

ag.algebraic geometry - When is a product of elliptic curves isogenous to the Jacobian of a hyperelliptic curve?

As damiano makes implicit in his answer, a curve of genus two admits a map of degree $N$ to an elliptic curve if and only if there is an isogeny of degree $N^2$ between its Jacobian and a product of two elliptic curves. The case $N=2$ also admits the following simple description: a genus two curve (over an algebraically closed field) is a double cover of an elliptic curve iff it can be written as $y^2 = f(x^2)$, where $f$ is a squarefree cubic with $f(0) \neq 0$.
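To make the $N=2$ description concrete (a standard computation, not spelled out in the original answer): if $C: y^2=f(x^2)$ with $f$ a squarefree cubic and $f(0)\neq 0$, the quotients of $C$ by the two involutions $(x,y)\mapsto(-x,y)$ and $(x,y)\mapsto(-x,-y)$ are given by
$$(x,y)\longmapsto (u,v)=(x^2,\,y)\in E_1:\ v^2=f(u), \qquad (x,y)\longmapsto (u,v)=(x^{-2},\,yx^{-3})\in E_2:\ v^2=u^3 f(1/u),$$
each of degree $2$, exhibiting the two elliptic curves sitting inside the Jacobian.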



One can say slightly more. Given a map $f : C \to E_1$ of degree $N$, we get also a map $f_\ast : \mathrm{Jac}\; C \to E_1^\vee$. If we assume that $f$ does not factor through an isogeny, then the kernel is connected, so we get a second elliptic curve $E_2$ lying as a subgroup on the Jacobian. So the Jacobian has two elliptic subgroups, and it turns out that they intersect exactly in their $N$-torsion points. We get an induced isomorphism between the $N$-torsion subgroups which turns out to invert the Weil pairing, and this is the data needed to reconstruct $C$. Namely, given two elliptic curves and an isomorphism of their $N$-torsion subgroups inverting the Weil pairing, one may consider the graph of this isomorphism in the product of the elliptic curves. The condition on the Weil pairing ensures that the graph is a maximally isotropic subgroup, so the quotient will have a principal polarization, and (when the quotient is not again a product of two elliptic curves) this is exactly the Jacobian of $C$ with its principal polarization.



This is pretty classical stuff which is well explained in several articles by Ernst Kani and Gerhard Frey. Start with "Curves of genus 2 covering elliptic curves and an arithmetical application." It can also be fruitful to think about this in a general context of Prym varieties -- when we have a degree N map from a genus two curve to an elliptic curve, the second elliptic curve which appears is of course exactly the Prym variety, which in this case is principally polarized.



I know in particular that Frey and Kani worked out some rather precise conditions on when a principally polarized abelian surface arising as a quotient of a product of two elliptic curves as above actually is the Jacobian of a curve. I am not really sure in which article it can be found, though. It was something like: pairs of elliptic curves with an isomorphism of their $N$-torsion are parametrised by $Y(N) \times Y(N) / SL(2,\mathbb{Z}/N)$, and the locus of such pairs which do not give rise to the Jacobian of a curve is the union of certain Hecke correspondences on $Y(N)$. The description of exactly which Hecke correspondences was pretty complicated, but when $N$ is prime it was at least workable. The case of $N=2$ is easy though: then the "bad" locus is just the image of the diagonal in $Y(2) \times Y(2)$.

Sunday, 28 July 2013

ag.algebraic geometry - Indexing the Line Bundles of a Flag Manifold

Following on from this question, how are the line bundles of a complex flag variety indexed? Are they still labeled by the integers? If so, why? A representation-theoretic explanation in terms of the homogeneous space description of the variety $U(n)/U(k_1) \times \cdots \times U(k_m)$ would be the most useful.



Also, is there an 'obvious' reason why the line bundles should be associated to a representation of $U(k_1) \times \cdots \times U(k_m)$, as opposed to a representation coming from another quotient description of the variety? For example, $\mathbb{CP}^{n-1}$ can be described as $SU(n)/U(n-1)$ or as $S^{2n-1}/U(1)$, but the tangent bundle can only be described as associated to a representation of $U(n-1)$. In general, for a principal $G$-bundle $P$, what vector bundles over $P$ can be described as associated to a representation of $G$? All of them?

Saturday, 27 July 2013

gravity - Determining effect of small variable force on planetary perihelion precession

You may want to use perturbation theory. This only gives you an approximate answer, but allows for analytic treatment. Your force is considered a small perturbation to the Keplerian elliptic orbit and the resulting equations of motion are expanded in powers of $K$. For linear perturbation theory, only terms linear in $K$ are retained. This simply leads to integrating the perturbation along the unperturbed original orbit. Writing your force as a vector, the perturbing acceleration is
$$
\boldsymbol{a} = K \frac{GM}{r^2c^2}v_r\boldsymbol{v}_t
$$
with $v_r=\boldsymbol{v}{\cdot}\hat{\boldsymbol{r}}$ the radial velocity
($\boldsymbol{v}\equiv\dot{\boldsymbol{r}}$) and
$\boldsymbol{v}_t=(\boldsymbol{v}-\hat{\boldsymbol{r}}(\boldsymbol{v}{\cdot}\hat{\boldsymbol{r}}))$ the rotational component of velocity (the full velocity minus the radial velocity). Here, the dot above denotes a time derivative and a hat the unit vector.



Now, it depends on what you mean by 'effect'. Let's work out the changes of the orbital semi-major axis $a$, eccentricity $e$, and direction of periapse.




To summarise the results below: the semi-major axis and eccentricity are unchanged, but the direction of periapse rotates in the plane of the orbit at rate
$$
\omega=\Omega \frac{v_c^2}{c^2}
\frac{K}{1-e^2},
$$
where $\Omega$ is the orbital frequency and $v_c=\Omega a$ with $a$ the semi-major axis. Note that (for $K=3$) this agrees with the general relativity (GR) precession rate at order $v_c^2/c^2$ (given by Einstein 1915 but not mentioned in the original question).




change of semimajor axis



From the relation $a=-GM/2E$ (with $E=\frac{1}{2}\boldsymbol{v}^2-GMr^{-1}$ the orbital energy) we have for the change of $a$ due to an external (non-Keplerian) acceleration
$$
\dot{a}=\frac{2a^2}{GM}\boldsymbol{v}{\cdot}\boldsymbol{a}.
$$
Inserting $\boldsymbol{a}$ (note that $\boldsymbol{v}{\cdot}\boldsymbol{v}_t=h^2/r^2$ with angular momentum vector $\boldsymbol{h}\equiv\boldsymbol{r}\wedge\boldsymbol{v}$), we get
$$
\dot{a}=\frac{2a^2Kh^2}{c^2}\frac{v_r}{r^4}.
$$
Since the orbit average $\langle v_r f(r)\rangle=0$ for any function $f$ (see below), $\langle\dot{a}\rangle=0$.



change of eccentricity



From $\boldsymbol{h}^2=(1-e^2)GMa$, we find
$$
e\dot{e}=-\frac{\boldsymbol{h}{\cdot}\dot{\boldsymbol{h}}}{GMa}+\frac{h^2\dot{a}}{2GMa^2}.
$$
We already know that $\langle\dot{a}\rangle=0$, so we only need to consider the first term. Thus,
$$
e\dot{e}=-\frac{(\boldsymbol{r}\wedge\boldsymbol{v}){\cdot}(\boldsymbol{r}\wedge\boldsymbol{a})}{GMa}
=-\frac{r^2\;\boldsymbol{v}{\cdot}\boldsymbol{a}}{GMa}
=-\frac{Kh^2}{ac^2}\frac{v_r}{r^2},
$$
where I have used the identity
$(\boldsymbol{a}\wedge\boldsymbol{b}){\cdot}(\boldsymbol{c}\wedge\boldsymbol{d})
=\boldsymbol{a}{\cdot}\boldsymbol{c}\;\boldsymbol{b}{\cdot}\boldsymbol{d}-
\boldsymbol{a}{\cdot}\boldsymbol{d}\;\boldsymbol{b}{\cdot}\boldsymbol{c}$ and the fact $\boldsymbol{r}{\cdot}\boldsymbol{a}_p=0$.
Again $\langle v_r/r^2\rangle=0$ and hence $\langle\dot{e}\rangle=0$.



change of the direction of periapse



The eccentricity vector
$
\boldsymbol{e}\equiv\boldsymbol{v}\wedge\boldsymbol{h}/GM - \hat{\boldsymbol{r}}
$
points (from the centre of gravity) in the direction of periapse, has magnitude $e$, and is conserved under the Keplerian motion (validate all that as an exercise!). From this definition we find its instantaneous change due to external acceleration
$$
\dot{\boldsymbol{e}}=
\frac{\boldsymbol{a}\wedge(\boldsymbol{r}\wedge\boldsymbol{v})
+\boldsymbol{v}\wedge(\boldsymbol{r}\wedge\boldsymbol{a})}{GM}
=\frac{2(\boldsymbol{v}{\cdot}\boldsymbol{a})\boldsymbol{r}
-(\boldsymbol{r}{\cdot}\boldsymbol{v})\boldsymbol{a}}{GM}
=\frac{2K}{c^2}\frac{h^2v_r\boldsymbol{r}}{r^4}
-\frac{K}{c^2}\frac{v_r^2\boldsymbol{v}_t}{r}
$$
where I have used the identity
$\boldsymbol{a}\wedge(\boldsymbol{b}\wedge\boldsymbol{c})=(\boldsymbol{a}{\cdot}\boldsymbol{c})\boldsymbol{b}-(\boldsymbol{a}{\cdot}\boldsymbol{b})\boldsymbol{c}$
and the fact $\boldsymbol{r}{\cdot}\boldsymbol{a}=0$. The orbit averages of these expressions are considered in the appendix below. If we finally put everything together, we get
$
\dot{\boldsymbol{e}}=\boldsymbol{\omega}\wedge\boldsymbol{e}
$
with [corrected again]
$$
\boldsymbol{\omega}=\Omega K \frac{v_c^2}{c^2}
(1-e^2)^{-1}\,\hat{\boldsymbol{h}}.
$$
This is a rotation of periapse in the plane of the orbit with angular frequency $\omega=|\boldsymbol{\omega}|$. In particular $\langle e\dot{e}\rangle=\langle\boldsymbol{e}{\cdot}\dot{\boldsymbol{e}}\rangle=0$ in agreement with our previous finding.



Don't forget that, due to our use of first-order perturbation theory, these results are only strictly true in the limit $K(v_c/c)^2\to0$. At second order in perturbation theory, however, both $a$ and/or $e$ may change. In your numerical experiments, you should find that the orbit-averaged changes of $a$ and $e$ are either zero or scale more steeply than linearly with the perturbation amplitude $K$.
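For such an experiment, a minimal numerical sketch (mine, not part of the original answer) could look like the following. It integrates the perturbed two-body problem in the orbital plane and compares the measured precession of the eccentricity vector with the first-order rate $\omega=\Omega K (v_c/c)^2/(1-e^2)$ derived above; the units ($GM=1$) and the artificially small value of $c$ are assumptions chosen to make the effect visible.

```python
# A sketch: numerically verify the first-order precession rate for the
# perturbing force a = K * GM/(r^2 c^2) * v_r * v_t.
import numpy as np
from scipy.integrate import solve_ivp

GM, c, K = 1.0, 30.0, 3.0        # units with GM = 1; c chosen so (v_c/c)^2 is small
a0, e0 = 1.0, 0.3                # initial semi-major axis and eccentricity

def rhs(t, y):
    r, v = y[:2], y[2:]
    rn = np.linalg.norm(r)
    rhat = r / rn
    vr = v @ rhat                          # radial velocity (scalar)
    vt = v - vr * rhat                     # rotational component of velocity (vector)
    acc = -GM * rhat / rn**2 + K * GM / (rn**2 * c**2) * vr * vt
    return np.concatenate([v, acc])

def ecc_vector(y):
    """2D eccentricity vector e = (v x h)/GM - r_hat, with h = r x v out of plane."""
    r, v = y[:2], y[2:]
    h = r[0] * v[1] - r[1] * v[0]
    return np.array([v[1] * h, -v[0] * h]) / GM - r / np.linalg.norm(r)

# start at perihelion of the unperturbed ellipse, moving counter-clockwise
y0 = np.array([a0 * (1 - e0), 0.0, 0.0, np.sqrt(GM / a0 * (1 + e0) / (1 - e0))])
T = 2 * np.pi * np.sqrt(a0**3 / GM)
sol = solve_ivp(rhs, (0.0, 50 * T), y0, rtol=1e-10, atol=1e-12)

e_i, e_f = ecc_vector(sol.y[:, 0]), ecc_vector(sol.y[:, -1])
rate_numeric = (np.arctan2(e_f[1], e_f[0]) - np.arctan2(e_i[1], e_i[0])) / sol.t[-1]

Omega = np.sqrt(GM / a0**3)
v_c = Omega * a0
rate_predicted = Omega * K * (v_c / c)**2 / (1 - e0**2)
print(rate_numeric, rate_predicted)      # should agree up to O((v_c/c)^2) corrections
```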



disclaimer No guarantee that the algebra is correct. Check it!




Appendix: orbit averages



Orbit averages of $v_rf(r)$ with an arbitrary (but integrable) function $f(r)$ can be
directly calculated for any type of periodic orbit. Let $F(r)$ be the antiderivative of $f(r)$, i.e. $F'\!=f$; then the orbit average is:
$$
\langle v_r f(r)\rangle = \frac{1}{T}\int_0^T v_r(t)\,f\!\left(r(t)\right) \mathrm{d}t
= \frac{1}{T} \left[F\left(r(t)\right)\right]_0^T = 0
$$
with $T$ the orbital period.



For the orbit averages required in $\langle\dot{\boldsymbol{e}}\rangle$, we must dig a bit deeper. For a Keplerian elliptic orbit
$$
\boldsymbol{r}=a\left((\cos\eta-e)\hat{\boldsymbol{e}}+\sqrt{1-e^2}\sin\eta\,\hat{\boldsymbol{k}}\right)\qquad\text{and}\qquad
r=a(1-e\cos\eta)
$$
with eccentricity vector $\boldsymbol{e}$ and $\hat{\boldsymbol{k}}\equiv\hat{\boldsymbol{h}}\wedge\hat{\boldsymbol{e}}$ a vector perpendicular to $\boldsymbol{e}$ and $\boldsymbol{h}$. Here, $\eta$ is the eccentric anomaly, which is related to the mean anomaly $\ell$ via
$
\ell=\eta-e\sin\eta,
$
such that $\mathrm{d}\ell=(1-e\cos\eta)\mathrm{d}\eta$ and an orbit average becomes
$$
\langle\cdot\rangle = (2\pi)^{-1}\int_0^{2\pi}\cdot\;\mathrm{d}\ell = (2\pi)^{-1}\int_0^{2\pi}\cdot\;(1-e\cos\eta)\mathrm{d}\eta.
$$
Taking the time derivative (note that $\dot{\ell}=\Omega=\sqrt{GM/a^3}$ is the orbital frequency) of $\boldsymbol{r}$, we find for the instantaneous (unperturbed) orbital velocity
$$
\boldsymbol{v}=v_c\frac{\sqrt{1-e^2}\cos\eta\,\hat{\boldsymbol{k}}-\sin\eta\,\hat{\boldsymbol{e}}}{1-e\cos\eta}
$$
where I have introduced $v_c\equiv\Omega a=\sqrt{GM/a}$, the speed of the circular orbit with semimajor axis $a$. From this, we find the radial velocity $v_r=\hat{\boldsymbol{r}}{\cdot}\boldsymbol{v}=v_c e\sin\eta(1-e\cos\eta)^{-1}$
and the rotational velocity
$$
\boldsymbol{v}_t = v_c\frac{\sqrt{1-e^2}(\cos\eta-e)\,\hat{\boldsymbol{k}}-(1-e^2)\sin\eta\,\hat{\boldsymbol{e}}}{(1-e\cos\eta)^2}.
$$



With these, we have [corrected again]
$$
\left\langle \frac{h^2v_r\boldsymbol{r}}{r^4}\right\rangle =
\Omega v_c^2\,\hat{\boldsymbol{k}}\,
\frac{e(1-e^2)^{3/2}}{2\pi}\int_0^{2\pi}\frac{\sin^2\!\eta}{(1-e\cos\eta)^4}\mathrm{d}\eta
=\frac{\Omega v_c^2e}{2(1-e^2)}\hat{\boldsymbol{k}}
\\
\left\langle \frac{v_r^2\boldsymbol{v}_t}{r}\right\rangle = \Omega v_c^2\,
\hat{\boldsymbol{k}}\,
\frac{e^2(1-e^2)^{1/2}}{2\pi}\int_0^{2\pi}\frac{\sin^2\!\eta(\cos\eta-e)}{(1-e\cos\eta)^4}\mathrm{d}\eta=0,
$$
in particular, the components in direction $\hat{\boldsymbol{e}}$ average to zero. Thus [corrected again]
$$\left\langle 2\frac{h^2v_r\boldsymbol{r}}{r^4}-\frac{v_r^2\boldsymbol{v}_t}{r}\right\rangle
=\frac{\Omega v_c^2e\,\hat{\boldsymbol{k}}}{(1-e^2)}
$$

Friday, 26 July 2013

exposure - Humans surviving in space

One would not, under normal conditions, die within seconds. This is a common trope in science fiction, but there has been quite a bit of real research into the subject.



Let's address each point in turn.



  • The extreme cold

This isn't really a huge problem. Space isn't necessarily cold. The extraordinarily low density of matter in space means that the movement of energy through conduction is extremely limited. If one were to step into 'space' one wouldn't feel cold, or even cool. For engineers, often the greater problem is not how to deal with cold, but how to deal with heat, as even though one can't easily release heat through conduction, one can certainly gain it through radiation or internal chemical processes.



A person, however, is likely to experience cooling due to the process of liquids in the body evaporating and escaping, taking some of their heat with them. It's unlikely that this would prove fatal though, or at least it's unlikely to be the first thing to prove fatal.



  • The Intense Radiation

Radiation is a problem in space, but it's not going to kill you so quickly.




During a 360-day round trip, an astronaut would receive a dose of about 662 millisieverts (mSv), according to RAD measurements. National space agencies limit exposure to about 1000 mSv or less during an astronaut's entire career; NASA's limit corresponds to a 3% risk of exposure-induced death from cancer. (Kerr, 1031)




Currently, orbital missions get some protection from radiation because they are within the Earth's magnetic field, but for those travelling beyond it there isn't a lot of protection. Part of the reason is that materials that provide good protection are also very heavy, which is costly given the economics of reaching escape velocity. As such, one wouldn't necessarily be exposed to that much more radiation outside of a shuttle or their suit in space than they would otherwise.



  • The Vacuum

This is where things can get messy, but there has been some research on the subject. Obviously one can't breathe in a vacuum; in fact the lungs actually begin to dump oxygen, essentially working in reverse, due to the lack of air pressure. Our victim is likely to lose consciousness within 10-15 seconds. If they happen to be holding their breath when they become exposed, the pressure difference could cause some real damage to their lungs.



People often theorize that blood would boil in space. Geoffrey Landis disagrees with this on his blog, in which he speaks on this subject. I highly recommend reading through it as I would consider him both thorough and authoritative on the subject.



I couldn't find the official report, but there are frequent references to a 1965 (Earth-based) depressurization incident, in which a technician's suit depressurized within a chamber. He remarked afterwards that the last thing he recalled before blacking out was feeling his saliva begin to boil on his tongue.



While your blood arguably won't boil, you would experience the boiling of other liquids in your body, water particularly. This can result in rapid swelling of the body. While not pretty, this isn't likely to kill you. However, nitrogen can also become gaseous due to this, potentially resulting in the bends, a condition more closely associated with pressure differences in diving. This condition could prove fatal, but AFAIK there hasn't been a lot of research into it in reference to vacuum exposure (the condition tends to become dangerous during the return to normal pressures).



All of this can be survived, as long as pressure is restored relatively quickly. Within a minute is probably enough to save a person's life with little lasting damage. After 2-3 minutes, it's likely too late. The difficulty is that, while the unfortunate victim may be able to survive 1-2 minutes of exposure, they really only get 10-15 seconds to do something about the problem. Of those who have survived vacuum depressurization incidents, their survival is mostly credited to the quick thinking of those around them.




To respond to your more specific questions:



  1. Exposure for half of what would generally be fatal

This could vary a lot, depending for example on whether they were holding their breath. However, generally it's believed that a person could recover from this with little to no permanent damage. Probably the biggest risks are lung damage due to pressure differences, the bends and other related complications, and brain damage due to oxygen deprivation.



  2. Exposure to space that wouldn't result in imminent or eventual death

A little exposure without other complications will generally be survivable. We don't have a ton of data on the subject, so it's possible that unpredictable complications could arise, and frankly it's not exactly advisable, especially considering the person will lose consciousness quite quickly.



  3. Partial exposure

I can't really rely on any specific source for this, but presumably exposure of an extremity would be perfectly survivable. I'm not quite sure how this would work as it would be very difficult to partially expose someone to a vacuum. The human body isn't exactly leak proof. In theory, you would begin to exhibit symptoms in the exposed body part, but you would remain conscious (assuming your respiratory system isn't exposed) and generally be in control of what's going on. Eventually other complications would likely set in and need to be managed carefully.



  4. Exposed to space from inside a room with a single open door

As noted, the radiation and temperature are not terribly dangerous, but with no way to repressurize the room this can quickly turn fatal. The Soviet Soyuz 11 mission ended catastrophically when the capsule carrying the cosmonauts depressurized during preparations for reentry, killing all three crew members on board. A valve malfunctioned during the reentry, causing the depressurization. The craft was recovered after reentry and had not taken any significant damage. It is believed that Patsayev made attempts to fix the valve in order to save the crew, but unfortunately there was simply too little time to accomplish this. Initial reports suggested that the crew had died of asphyxiation, however autopsies showed that the cause was hemorrhaging in the brain, reportedly caused by oxygen and nitrogen in the blood boiling. Not sure how this works out with Landis's claim that blood does not boil.



Radiation Will Make Astronauts' Trip to Mars Even Riskier
Richard A. Kerr
Science 31 May 2013: 340 (6136), 1031. [DOI:10.1126/science.340.6136.1031]

Wednesday, 24 July 2013

ag.algebraic geometry - Easy special cases of the decomposition theorem?

Well, it depends on what you mean by "easy". A special case, which I find very instructive, is a theorem of Deligne from the late 1960's.




Theorem. $\mathbb{R} f_*\mathbb{Q}\cong \bigoplus_i R^if_*\mathbb{Q}[-i]$, when $f:X\to Y$ is a smooth projective morphism of varieties over $\mathbb{C}$. (This holds more generally with $\mathbb{Q}_\ell$-coefficients.)



Corollary. The Leray spectral sequence degenerates.




The result was deduced from the hard Lefschetz theorem.
An outline of a proof (of the corollary) can be found in Griffiths and Harris.
It is tricky but essentially elementary.



A much less elementary, but more conceptual, argument uses
weights. Say $Y$ is smooth and projective; then
$E_2^{pq}=H^p(Y, R^qf_*\mathbb{Q})$ should be pure of weight $p+q$ (in the sense of Hodge
theory or $\ell$-adic cohomology). Since
$$d_2: E_2^{pq}\to E_2^{p+2,q-1}$$
maps a structure of one weight to another it must vanish. Similarly for higher differentials.



If $f$ is proper but not smooth, the decomposition theorem shows that $\mathbb{R} f_*\mathbb{Q}$ decomposes into a sum of translates of intersection cohomology complexes.
This follows from more sophisticated purity arguments (either in the $\ell$-adic setting as in BBD, or the Hodge-theoretic setting of Saito's work).
There is also a newer proof due to de Cataldo and Migliorini which seems a bit more geometric.



I have been working through some of this stuff slowly. So I may have more to say in a few months time. Rather than updating this post, it may be more efficient for the people
interested to check
here periodically.

Monday, 22 July 2013

exoplanet - Sedna, VP113 and the likelihood of the PX/Tyche/Thelistos hypotheses

Crosslisted question from http://physics.stackexchange.com/questions/131876/sedna-vp113-and-the-likelihood-of-the-px-tyche-thelistos-hypotheses



The recent discoveries in exoplanetary science (especially the findings of planets orbiting very far away from their parent star or stars) raise questions about how much we know about the true number of planets (by the IAU definition) in our solar system.



Question: does the discovery of VP113 and Sedna increase the odds of some kind of planetary object at the edge of the known solar system (the Oort cloud)? How could we find it if it is very cold and dark? I mean, how many Sedna-like or VP113-like objects would we need in order to guess its position and location?



Reading about this topic, I don't know how to quantify the odds of finding



PX (Planet X): An Earth-sized body perturbing the orbits of Sedna, VP113 and other similar objects.



Thelistos: A super-Earth (about 5-10 Earth masses, up to an O(1) factor).



Tyche: A gas giant (Neptune-like, Saturn-like, etc.). Now disfavoured by WISE data, though I think it cannot be excluded if it is extremely cold and "dark".



Nemesis: A "dark star"/brown dwarf object.

Sunday, 21 July 2013

ag.algebraic geometry - How to compute the Picard-Lefschetz monodromy matrix of a non-semistable fiber?

Let $f:X\to B$ be a family of curves of genus $g$ over a smooth curve $B$. Let $F_0$ be a singular fiber.



If $F_0$ is a semistable fiber, the monodromy matrix can be gotten by the classical Picard-Lefschetz formula.
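For reference (sign conventions vary between sources, so take this as a reminder rather than a quotation from Namikawa-Ueno), the classical formula gives the monodromy $T$ on $H_1$ of a nearby smooth fiber as
$$T(\gamma) \;=\; \gamma \;-\; \sum_i \langle \gamma, \delta_i\rangle\,\delta_i,$$
where the $\delta_i$ are the vanishing cycles of the nodes of $F_0$ and $\langle\cdot,\cdot\rangle$ is the intersection pairing.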



If $F_0$ is non-semistable, I don't know how to compute its monodromy matrix. For example, in Namikawa and Ueno's paper[1], they can compute the Picard-Lefschetz monodromy matrix for each type of singular fiber of genus 2. It's not clear to me how they did that.



[1] Namikawa, Y. and Ueno, K., The complete classification of fibres in pencils of curves of genus two, Manuscripta math., Vol. 9 (1973), 143-186.

ag.algebraic geometry - Precise definition of a scheme (Key question: How to define an open subfunctor without resorting to classical scheme theory)

Speculation and background



Let $\mathcal{C}:=CRing^{op}_{Zariski}$, the affine Zariski site. Consider the category of sheaves, $Sh(\mathcal{C})$.



According to nLab, schemes are those sheaves that "have a cover by Zariski-open immersions of affine schemes in the category of presheaves over Aff."



In SGA 4.1.ii.5 Grothendieck defines a further topology on $Sh(\mathcal{C})$ using "familles couvrantes", which are families of morphisms $\{U_i \to X\}$ such that the induced map $\coprod U_i \to X$ is an epimorphism. Further, he gives another definition. A family of morphisms $\{U_i \to X\}$ is called "bicouvrante" if it is a "famille couvrante" and the map $\coprod U_i \to \coprod U_i \times_X \coprod U_i$ is an epimorphism. [Note: This is given for a general category of sheaves on a site, not sheaves on our affine Zariski site.]



Speculation: I assume that the nLab definition means that we have a (bi)covering family of open immersions of representables, but as it stands, we do not have a sufficiently good definition of an open immersion, or equivalently, open subfunctor.



It seems like the notion of a bicovering family is very important, because this is precisely the condition we require on algebraic spaces (if we replace our covering morphisms with etale surjective morphisms in a smart way and require that our cover be comprised of representables).



Questions



What does "open immersion" mean precisely in categorical langauge? How do we define a scheme precisely in our language of sheaves and grothendieck topologies? Preferably, this answer should not depend on our base site. The notion of an open immersion should be a notion that we have in any category of sheaves on any site.



Eisenbud and Harris fail to answer this question for the following reason: they rely on classical scheme theory for their definition of an open subfunctor (same thing as an open immersion). If we wish to construct our theory of schemes with no logical prerequisites, this is circular.



Once we have this definition, do we require our covering family of open immersions to be a "covering family" or a "bicovering family"?



Further, how can we exhibit, in precise functor of points language, the definition of an algebraic space?



This last question should be a natural consequence of the previous questions provided they are answered in sufficient generality.

distances - What qualifies as a local star?

This is too vague a term to be useful, because the Sun does not have any kind of sphere of influence (beyond the solar system) and nor do the nearby stars affect us with their light or gravity (Oort cloud perturbations aside).



Instead it would normally refer to the stellar population in the solar vicinity. This is a mixed bag of objects with few shared characteristics. For example, objects within 10 light years of the Sun now will not be so in 10 million years' time.



A working definition might be those stars that are close enough to the Sun that we think the census is complete to that distance. Unfortunately, incompleteness remains in all such samples at low masses.



The RECONS programme has been working hard for the last decade to complete the census of objects that are within 10pc (33 light years).



The definition of a distant star is even more vague - all stars are distant. The Sun is 5 orders of magnitude closer than the next stars, so if anything, that is the only natural division between local and distant!

Saturday, 20 July 2013

coordinate - Why objects are uniquely defined by their right ascension and declination?

I've just started studying Astronomy.



I understood how I would measure Right Ascension and Declination: for the first one I'd trace an hour circle to the Celestial Equator, and then compute the angle from this point to the Vernal Equinox; while the second one is the angle subtended by the object and the intersection of the hour circle with the Celestial Equator.



So, during the night, stars will change their Right Ascension value? Does this mean this coordinate system is not fixed with respect to the stars?



If so, why do people store the RA/Dec to find objects?
For instance, I know the RA/Dec of the NGC 5585 galaxy ( http://en.wikipedia.org/wiki/NGC_5585 ), but how can I use this information if the RA changes during the night?

the sun - Does the sun have a protective shield

You seriously cannot expect the sun to have a layer that would contain some of its harmful radiation. The Sun consists of plasma and is not solid. Learn more about its structure here. A magnetic field is the kind of layer you might think of, apart from all the gases and energy it radiates, but that is harmful rather than protective for us. The Sun is very unlike Earth, which has a solid crust. Any layer around the Sun wouldn't actually protect us, as it would have to contain a huge amount of energy, and that would make the Sun more unstable.



Earth just happens to be at the right distance from the sun, conducive to harboring a suitable atmosphere with just the right combination of elements and compounds (especially oxygen for breathing, ozone for protection and nitrogen for maintaining atmospheric balance). This is why life has been possible here.
Earth also happens to have a magnetic field strong enough to protect us from the solar wind, as described in detail here. It simply diverts the harmful energy away from us. Containment of a huge amount of energy is the major concern here, and nothing is doing that.



So life has been possible only because of the way Earth formed; of course, all the energy comes from the sun. But the sun just provides energy and doesn't save us from itself.

ag.algebraic geometry - Resolution of Singularities, Nature of

Hironaka's theorem guarantees the existence of a resolution of singularities in characteristic 0. If I am not wrong, it also guarantees (or at least some other result does) that if the singular locus is a single point, one can get the "Exceptional Fiber" to be a simple normal crossing divisor. Very likely, if the singular locus is of higher dimension, then too one can get the "Exceptional Fiber" to be a simple normal crossing divisor.



However, if the nature of the singularity varies along the singular locus, (perhaps) one cannot expect the dimensions of the fibers at each point to be constant in the given resolution.



What should be the most general result known in this direction? Can one expect, for example, a stratification such that the inverse image of each stratum is "like simple normal crossing" (e.g. smooth irreducible components, as well as all k-fold intersections being smooth)?

Friday, 19 July 2013

cv.complex variables - An integral arising in statistics(2)

The integral I am interested in is:
$$t(x)=\int_{-K}^{K}\frac{\exp(ixy)}{1+y^{2q}}dy$$



$K<\infty$, $q$ a natural number.



For $q=1$ one can use contour integration.
So for $K>1$ we have:



$$\pi/2-\int_{Arc}\frac{\exp(ixy)}{1+y^{2}}dy$$
where the arc has radius $K$.



Is it correct that for $K<1$ this integral is:
$$-\int_{Arc}\frac{\exp(ixy)}{1+y^{2}}dy\ ?$$



What about $K=1$?

stellar evolution - Conflicting information about the star Delta Pavonis?

To answer a question, we do need the source of information (wikipedia just isn't good enough).



According to the SIMBAD database, delta Pavonis is a G8 subgiant, with a temperature of 5512 K and a $\log g$ of 4.23 (in cgs units) (Gray et al. 2006), which looks correct for a subgiant classification. The metallicity is given as $+0.13$ in the same source.



There are a large number of other spectroscopic determinations of the star's parameters listed in the SIMBAD measurements. All seem to agree on the subgiant nature, but for example Bensby et al. (2014) makes the star a little warmer (5635 K) and more metal rich (+0.37).



So what appears to be at odds with this is the age/mass combination that you quote.



The wikipedia entry and information you quote appears to be taking information from Takeda et al. (2007). They estimate the age and mass of the star from the temperature and gravity using a Bayesian fitting method and the YREC stellar evolution models.



If this is really important to you, then you need to read the Takeda et al. paper. They show that the posterior probability distribution of age and mass can have multiple peaks. In the case of delta Pavonis it appears that wikipedia has got things wrong (what a surprise). The age that is quoted is actually for a secondary peak in the posterior probability distribution. The secondary peak in the mass distribution is at $M= 1.101M_{\odot}$. I think that age-mass pair makes much more sense.



They do not provide detailed information/discussion on individual cases. Like you, I cannot see how they arrive at 1 solar mass for a star of any age, unless they have picked up a pre-main-sequence solution in the HR diagram, for which the age corresponding to 1 solar mass would be very small. However, they claim that their models start at the ZAMS. The fact that they list no primary peak in the age posterior pdf suggests to me that the pdf looks like a declining function from zero (the ZAMS), with a secondary peak at the 7 Gyr quoted by wikipedia. This then suggests that this secondary solution is also not a brilliant fit and that the fitting process would really have liked to make this younger than the ZAMS, i.e. a PMS star (which it isn't). In turn, this may suggest that the gravity used by Takeda et al. might be too low.



Takeda et al. used spectroscopically derived parameters of temperature 5590 K, $\log g= 4.31$ and $[M/H]=+0.26$ from Valenti & Fischer (2005). These authors also did their own isochrone fits to these parameters, finding $M=1.045 \pm 0.058 M_{\odot}$ and an age of 5.3-6.8 Gyr. They say that the mass at the best-fitting isochronal age is $1.07M_{\odot}$.



I think part of what is puzzling you is that you have to remember that having a metal content that is twice the Sun's does affect the evolutionary tracks considerably. It is not that the evolutionary timescales are massively different, but the effective temperature of the star is changed considerably for a given mass and age.

star - Does the earth's atmosphere act as a spherical lens and refract light from space?


If so by how much does it "spoil" the view of stars and galaxies etc.




There are several very different issues related to your question. Let's tackle them one by one.




Atmospheric refraction



Yes, the Earth's atmosphere refracts light. One notable effect is that objects near the horizon appear higher than they should.






Therefore, the Sun (or any other object) rises earlier than it should, and sets later.



http://en.wikipedia.org/wiki/Atmospheric_refraction



It affects astrometry, because it messes up the coordinates of objects near the horizon. Other than that, it does not cause great harm to astronomy.




And then there are several effects harmful to astronomy:



1. Seeing



What we call "seeing" in astronomy is just air turbulence. Because air moves around due to heat, it distorts the image continuously. This is how the Moon looks in a telescope - notice the shifting around due to seeing (animated GIF, reload if it's not moving, or open in new tab):



[animated image: the Moon seen through a telescope, shimmering due to atmospheric seeing]



If we lived in a vacuum, that image would be static, and perfectly clear. But turbulence causes the shifting. Also, it causes a blur - the image is less sharp than it could be. It's also what causes the stars to "twinkle".



It's basically the same phenomenon that you see in summer along the hot walls of a house under sunlight - the same shimmering effect of the image seen through the hot rising air along the walls.



For amateur astronomers, seeing affects high-resolution targets such as planets, the Moon, double stars. It does not affect targets that are perceived to be "low-resolution" in an amateur telescope: galaxies, nebulae, star clusters.



More reading:



http://en.wikipedia.org/wiki/Astronomical_seeing



Since seeing depends on weather, it can be predicted to some extent:



http://cleardarksky.com/csk/



2. Light pollution



Atmosphere also scatters and reflects light back to Earth. All the city lights around you are scattered by haze and dust in the air, and some of that light goes back down into your eye, creating sort of a "light fog".



As a result, objects that are not too bright are masked by light pollution and become less visible, or invisible altogether.






A bigger telescope will help you pierce through the haze of light pollution, but only to some extent. The best way to see faint objects is to put the scope in the car and drive far away from the city.



Here's a light pollution map:



http://www.jshine.net/astronomy/dark_sky/



For amateur astronomers, light pollution affects the "faint fuzzies": galaxies, nebulae, star clusters. It has no effect whatsoever on bright objects such as: planets, the Moon, many double stars.




Other ways the atmosphere can negatively impact observations:



  • clouds - this is an obvious one


  • transparency - different from clouds, it includes haze, dust, particulate matter, water vapor. It affects low contrast objects such as galaxies.


astrophysics - Is it possible that a ultra-large portion of the space we live in is already inside a black hole? How could we refute this?

This isn't anything near a proof that we don't live inside a black hole, but it's a bundle of evidence that certainly goes against it, and that the Great Attractor is not, in fact, the singularity.



First off: Expansion of the universe.



As you no doubt know, the universe is expanding. In fact, the expansion is accelerating. Do black holes expand? Yes: as they suck in more matter, they can get bigger. But if that were the case here, we should notice more matter coming into the universe (well, I suppose it could be from outside the visible universe, but we should still see lots of matter coming towards us). Also, the universe's expansion is driven by dark energy, which pushes (on a large scale) everything away from everything else. In an expanding black hole, there would be no reason for the matter inside the event horizon to move away from each other; only the event horizon expands.



You also made a good point in a comment below about Hawking radiation. Eventually, in the far future, when there is nothing left in the universe but black holes, black holes will evaporate via Hawking radiation (okay, they do that now, but they can still take in more matter). If our universe were a black hole, it should then contract. But we see no reason why it should. In fact, the theory that predicts the universe's eventual collapse into a singularity (i.e. the opposite of the Big Bang), the Big Crunch theory, predicts that the universe's contraction will mirror its expansion. Contraction due to Hawking radiation wouldn't necessarily mirror the black hole's growth. Also, the Big Crunch theory is supported only by a minority of scientists due to the evidence against it.



Second off: The motion of galaxies by the Great Attractor.



First off, see When will the Milky Way "arrive" at the Great Attractor, and what all happen then? (and not just my answer! I dearly wish @LCD3 would expand his/her comment into an answer!). The general gist of things there, as pertains to your remark about the Great Attractor, is that the galaxies aren't all moving towards it. There are doubts (see the papers I mentioned) that the galaxies previously thought to be moving towards it are, in fact, moving towards more distant objects - other superclusters. If the Great Attractor was indeed the singularity, a) all the galaxies in the universe should be accelerating towards it, which is not the case, and b) we should be moving directly towards it, and due to its gravity, not that of the superclusters beyond.



Like I said in my comments, I don't quite think your last part relates to your first part, but I'll try to address it. First, I'm not sure where you got your sources, but I can say that we don't really know what goes on inside a black hole, and I don't know how someone got that density figure (I am, of course, not the authority on black holes - see @JohnRennie for that, on Physics SE - and I could be wrong about this). The density inside the event horizon would not be very low, however, in regions where there is lots of matter. For example, in a black hole with an accretion disk, the material that is being absorbed might not have that low a density. Also, a large volume of that density would not necessarily collapse to become a black hole, because it would not be compact enough.



I hope this helps.

Thursday, 18 July 2013

Need Algorithm to generate all permutations of the integers 1 to n

Knuth's The Art of Computer Programming, Volume 4, Fascicle 2: Generating all Tuples and Permutations gives efficient (non-recursive) solutions to this and many other combinatorial enumeration problems.



Section 7.2.1.2 contains 36 pages of material devoted to precisely the question you ask.
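As a taste of the kind of non-recursive procedure Knuth describes, here is a sketch of lexicographic next-permutation generation (roughly Knuth's Algorithm L); the Python rendering is my own, not taken from the book:

```python
def permutations_lex(n):
    """Yield all permutations of 1..n in lexicographic order, without recursion."""
    a = list(range(1, n + 1))
    while True:
        yield tuple(a)
        # find the largest j with a[j] < a[j+1]
        j = n - 2
        while j >= 0 and a[j] >= a[j + 1]:
            j -= 1
        if j < 0:
            return                      # a is in decreasing order: last permutation
        # find the largest l with a[j] < a[l], then swap
        l = n - 1
        while a[j] >= a[l]:
            l -= 1
        a[j], a[l] = a[l], a[j]
        # reverse the tail to get the next permutation in order
        a[j + 1:] = reversed(a[j + 1:])

for p in permutations_lex(3):
    print(p)    # (1,2,3) (1,3,2) (2,1,3) (2,3,1) (3,1,2) (3,2,1)
```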

How does categoricity interact with the underlying set theory?

Here's the setup: you have a first-order theory T, in a countable language L for simplicity. Let k be a cardinal and suppose T is k-categorical. This means that, for any two models



M,N |= T



of cardinality k, there is an isomorphism f : M --> N.



Supposing all this happens inside of ZFC, let's say I change the underlying model of ZFC, e.g. by restricting to the constructible sets, or by forcing new sets in. I would like to understand what happens to the k-categoricity of T.



I'll assume the set theory doesn't change so drastically that we lose L or T. Then, a priori, a bunch of things may happen:



(i) We may lose all isomorphisms between a pair of models M,N of cardinality k;
(ii) Some models that used to be of cardinality k may no longer have bijections with k;
(iii) k may become a different cardinal, meaning new cardinals may appear below it, or others may disappear by the introduction of new bijections;
(iv) some models M, or k itself, may disappear as sets, leading to a new set being seen as "the new k".



Overall, nearly every aspect of the phrase "T is k-categorical" may be affected. How likely is it to still be true? Do some among (i)-(iv) not matter, or is there some cancellation of effects? (Say, maybe all isomorphisms M-->N disappear, but so do all bijections between N and k?)

Wednesday, 17 July 2013

ac.commutative algebra - Primary decomposition of zero-dimensional modules

If $M$ is finitely generated and has $0$-dimensional support, then $M$ is in fact supported at finitely many maximal ideals (a $0$-dimensional closed subset of the Spec of a Noetherian ring is just a finite union of maximal ideals), and one has the isomorphism
$M = \oplus_{\mathfrak p} M_{\mathfrak p} = \oplus_{\mathfrak p} M[\mathfrak p^{\infty}].$



The first equality just specifies the fact that, since the quasi-coherent sheaf on Spec $A$
attached to $M$ is supported at finitely many closed points, it is a sky-scraper sheaf at these points, and its global sections are just the sum of its stalks at those finitely many points. (In particular, at all ${\mathfrak p}$ not in the support of $M$, the localization
$M_{\mathfrak p}$ vanishes, and so does not contribute to the direct sum, so really the direct sum is just over the finitely many points in the support.)
To see the second equality (which is the crux of the question as far as I can tell), note
that we reduce to the local case: $M_{\mathfrak p}$ is finitely generated over $A_{\mathfrak p}$, and has support equal to the closed point ${\mathfrak p}$ of Spec $A_{\mathfrak p}$.
A consideration of the very definition of support will now show that each element of $M_{\mathfrak p}$ is
annihilated by some power of ${\mathfrak p}$, and hence that $M_{\mathfrak p} =
M[\mathfrak p^{\infty}]$.
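A concrete example (mine, added for illustration): take $A=\mathbb{Z}$ and $M=\mathbb{Z}/6$, whose support consists of the two maximal ideals $(2)$ and $(3)$; the decomposition reads
$$\mathbb{Z}/6 \;=\; M_{(2)} \oplus M_{(3)} \;=\; M[2^{\infty}]\oplus M[3^{\infty}] \;\cong\; \mathbb{Z}/2 \oplus \mathbb{Z}/3.$$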



From this decomposition everything else is easy to work out: for example,
$M(\mathfrak p)$ (which if I understand correctly is defined to be the kernel of
the map to $M_{\mathfrak p}$) is precisely $\oplus_{\mathfrak q \neq \mathfrak p}
M_{\mathfrak q} = \oplus_{\mathfrak q \neq \mathfrak p} M[\mathfrak q^{\infty}].$



In particular, if we want to isolate $M[\mathfrak q^{\infty}],$ we just intersect
the $M(\mathfrak p)$ for all $\mathfrak p \neq \mathfrak q$. This explains the
formula in Arminius's answer.



In short: rather than having to memorize or quote technical results from Eisenbud (or elsewhere),
one can use geometric reasoning on Spec $A$ to answer such questions. (All the technical reasoning has been embedded in one step: the proof of the correspondence between $A$-modules and quasi-coherent sheaves on Spec $A$.)

ct.category theory - Semiadditivity and dualizability of 2

Short version: Let (C, ⊗, 1) be a locally presentable closed symmetric monoidal category with a zero object, and write 2 = 1 ∐ 1. Suppose the object 2 has a dual. Does it follow that C is a category with biproducts?



Longer version, with motivation: Let (C, ⊗, 1) be a locally presentable closed symmetric monoidal category. If you don't know what "locally presentable" means, you can replace these conditions with "complete and cocomplete symmetric monoidal category in which ⊗ commutes with colimits in each variable". Familiar examples include (Set, ×, •), (Set*, ∧, S0) (the category of pointed sets with the smash product), and (Ab, ⊗, ℤ). Any such category C has a unique "unit" functor FC : Set → C preserving colimits and the unit object: the set S is sent to the coproduct in C of S copies of 1. For a nonnegative integer n, let me also write n for the image under this functor of the n-element set. For instance, 0 represents the initial object of C.



A dual for an object X of C is another object X* together with maps 1 → X ⊗ X* and X* ⊗ X → 1 which satisfy the triangular identities; see wikipedia for more details. The data of X* together with these maps is unique up to unique isomorphism if it exists, so it makes sense to ask whether an object has a dual or not.



I'm interested in the relationship between which objects in the image of FC have duals and the existence of more familiar structures on C. In our examples,



  • C = Set: Only 1 has a dual.

  • C = Set*: Only 1 and 0 = • have duals.

  • C = Ab: n has a dual for any nonnegative integer n.

It's easy to show that 1 is always its own dual, and slightly less trivially, that 0 has a dual iff 0 is also a final object, i.e., C has a zero object, or equivalently C is enriched in Set*. Moreover, if C is semiadditive, i.e., enriched in commutative monoids, or equivalently has biproducts, then n has a dual (in fact, n is its own dual) for every nonnegative integer n. Conversely, if 0 has a dual, so that C is pointed, and 2 also has a dual, then there is a canonical map 2 = 1 ∐ 1 → 1 × 1 = 2*. My question, then, is: is this map always an isomorphism? Or, could it happen that 2* exists but is not isomorphic to 2 via this map?

gr.group theory - Algebraic groups of relative rank 1

Fleshing out the Galois cohomology approach suggested by Brian Conrad leads to a clean answer for all $n$, for all fields $K$ of characteristic not $2$, and for all nondegenerate quadratic forms of rank $n$ over $K$. The answer is exactly what moonface claimed:




Given quadratic forms $q$ and $q'$, the algebraic groups $\operatorname{SO}(q)$ and $\operatorname{SO}(q')$ are isomorphic if and only if $q$ and $q'$ are similar, i.e., $q$ is equivalent to $\lambda q'$ for some $\lambda \in K^\times$.




The key observation is that the homomorphism from $\operatorname{O}_n$ to the automorphism group scheme of $\operatorname{SO}_n$ giving the conjugation action is surjective for all $n$, and the kernel is $\lbrace \pm 1 \rbrace$ for $n>2$. Then for $n>2$, one has the exact sequence of pointed sets
$$ H^1(K,\lbrace \pm 1 \rbrace) \to H^1(K,\operatorname{O}_n) \to H^1(K,\operatorname{\mathbf{Aut}}\,\operatorname{SO}_n).$$
The first term is $K^{\times}/K^{\times 2}$, the second term is the set of equivalence classes of nondegenerate rank $n$ quadratic forms, and the third term is the set of isomorphism classes of $K$-forms of $\operatorname{SO}_n$. The sequence (and its twists - remember that we are dealing with nonabelian cohomology) shows that two quadratic forms give rise to the same $K$-form of $\operatorname{SO}_n$ if and only if they are similar.
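
As a concrete illustration (my own example, not part of the original answer): take $K=\mathbb{R}$ and $n=3$. Nondegenerate rank-$3$ real forms are classified by their signatures $(3,0),(2,1),(1,2),(0,3)$; multiplying by $\lambda<0$ swaps a signature with its reverse, so up to similarity there are just the two classes $\lbrace(3,0),(0,3)\rbrace$ and $\lbrace(2,1),(1,2)\rbrace$, matching the two real forms $\operatorname{SO}(3)$ and $\operatorname{SO}(2,1)$ of $\operatorname{SO}_3$.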



If $n=2$, a similar argument applies, though one can also see everything explicitly: every rank $2$ quadratic form is similar to $x^2-dy^2$ for some $d \in K^{\times}$, and the corresponding $\operatorname{SO}(q)$ is the "Pell equation torus" $x^2-dy^2=1$; both depend just on the image of $d$ in $K^{\times}/K^{\times 2}$. I leave the cases $n=1$ and $n=0$ to those who like to think about such things.



More details: When $n$ is odd, we have $\operatorname{O}_n = \lbrace \pm 1 \rbrace \cdot \operatorname{SO}_n$, and all automorphisms of $\operatorname{SO}_n$ are inner.



When $n=2m$ for some $m \ge 2$, we have $-1 \in \operatorname{SO}_n$, so conjugation by an element of $\operatorname{O}_n$ outside $\operatorname{SO}_n$ gives an outer automorphism of $\operatorname{SO}_n$; correspondingly, the $D_m$ Dynkin diagram (interpreted appropriately for small $m$) has an involution.



Why does the rotation of the $D_4$ Dynkin diagram not give an extra outer automorphism when $n=8$? The covering group between the simply connected form $\operatorname{Spin}_8$ and the adjoint form $\operatorname{PSO}_8$ is $(\mathbb{Z}/2\mathbb{Z})^2$, so there are three intermediate covers, one of which is $\operatorname{SO}_8$, but the rotation permutes these three.



(Thanks to my colleague David Vogan for discussing this with me.)

Is there an analog of class field theory over an arbitrary infinite field of algebraic numbers?

Recently, I found a paper by Schilling http://www.jstor.org/pss/2371426, which mentions that for certain infinite fields of algebraic numbers there is an analog of class field theory. By an infinite field of algebraic numbers we mean an infinite extension of $\mathbb{Q}$. The paper cites a previous paper by Moriya which was the origin of the idea. I could not read the latter since it is in German. Since the first paper is quite old (1937), I believe there must have been a lot of development in the meantime.



My question: Do we have an analog of class field theory over an arbitrary infinite field of algebraic numbers?



An even more general question: Do we have an analog of class field theory over an arbitrary field? This seems a bit greedy, but since we know that an algebraically closed field of characteristic 0 is totally characterized by its transcendence degree, if the answer to the previous question is positive, the answer to this one is perhaps not too far off. Am I making sense?

reference request - Doing geometry using Feynman Path Integral?

"Feynman Path Integral can be used to compute geometric invariants of a space."



There are several different approaches to doing this. Let me try to explain one of them, but remember it is not the only one.



The point is that first you should omit the word "Feynman"!
Ordinary integrals are already useful for computing geometric invariants - for example, the Gauss-Bonnet theorem expresses the Euler characteristic as an integral over the manifold (a worked instance is written out below).
The word "Feynman" appears when we consider infinite-dimensional manifolds - then we need to "integrate" over infinite-dimensional spaces.
However, we are NOT really interested in the geometry of infinite-dimensional manifolds - we are interested in finite-dimensional ones.
It turns out that in some situations infinite-dimensional manifolds are either contractible to finite-dimensional ones, or there is some heuristic relating invariants of the infinite-dimensional manifold to finite-dimensional ones. For example, if you consider the loop space of $M$, the manifold itself is embedded in $\operatorname{loops}(M)$ as the subset of constant loops. If you consider rotation of loops, then the constant loops are the fixed points of this action - so the ambient loop space is infinite-dimensional but the fixed-point set is finite-dimensional, and by doing equivariant calculations we can obtain finite-dimensional results.
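
To make the finite-dimensional prototype concrete (a standard textbook example, added here for illustration): for a closed oriented surface $M$ with Gaussian curvature $K$, Gauss-Bonnet reads
$$ \chi(M) = \frac{1}{2\pi}\int_M K\, dA, $$
and for the round sphere of radius $r$ one has $K = 1/r^2$ and area $4\pi r^2$, so the integral gives $\chi(S^2) = 2$, independently of $r$.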



So the main line of reasoning is the following -



in the finite-dimensional case you integrate closed forms on a manifold - and get an invariant;



in the Feynman setup, certain integrands resemble closed forms on some infinite-dimensional space (the loop space or whatever), so integrating them you get an invariant.



(In some situations "closed form" menas with respect to BRST differential).




The classical examples are related to Mathai-Quillen formalism and interpretation
in terms of QFT.



Let me suggest looking at
M. Blau, "The Mathai-Quillen Formalism and Topological Field Theory",
http://arxiv.org/abs/hep-th/9203026



And to quote the abstract:
"These lecture notes give an introductory account of an approach to cohomological field theory due to Atiyah and Jeffrey which is based on the construction of Gaussian shaped Thom forms by Mathai and Quillen. Topics covered are: an explanation of the Mathai-Quillen formalism for finite dimensional vector bundles; the definition of regularized Euler numbers of infinite dimensional vector bundles; interpretation of supersymmetric quantum mechanics as the regularized Euler number of loop space; the Atiyah-Jeffrey interpretation of Donaldson theory; the construction of topological gauge theories from infinite dimensional vector bundles over spaces of connections."

Tuesday, 16 July 2013

the sun - Sun's apparent motion above the arctic circle during summer solstice

Instead of walking in a circle, spin. Take a tiny step forward with your right foot, backward with your left. Then repeat. You are spinning counterclockwise when viewed from above.



Next, imagine holding a landscape photograph at arm's length while initially facing a floor lamp. Now take that tiny step forward with your right foot, backward with your left. From your body-fixed perspective, the landscape photograph doesn't move. It's the lamp that appears to be moving, and it moves to the right.



Next, imagine a helmet cam on your head that rotates to keep the floor lamp in the center of the camera image. From this perspective, it's the landscape photograph that appears to be moving, and it moves to the left.



That's what you are seeing in that youtube video.

exoplanet - Planets within 10 parsecs

I have been researching what we know, so far, about planetary systems within 10 parsecs. It is easy to find out what has been found, but much harder to find out what has been ruled out (for instance: I did read that there are no planets down to a couple of Earth masses in the habitable zone of 61 Virginis). I am particularly interested in the stars that Tarter and Turnbull have listed as most likely to harbor life. Does anybody have any ideas where I can find the information I am looking for? I would also like to know how much we will be able to learn with the new generation of telescopes (e.g. the E-ELT using the CODEX spectrograph). I understand the logistical problem of detecting "far out" planets due to long orbits (monitoring the same star for many years), so I am wondering what we can hope for in "direct detection" over the next 40 years or so.

Monday, 15 July 2013

nt.number theory - Special values of $p$-adic $L$-functions.

This is a very naive question really, and perhaps the answer is well-known. In other words, WARNING: a non-expert writes.



My understanding is that nowadays there are conjectures which essentially predict (perhaps up to a sign) the value $L(M,n)$, where $M$ is a motive, $L$ is its $L$-function, and $n$ is an integer. My understanding of the history is that (excluding classical works on rank 1 motives from before the war) Deligne realised how to unify known results about $L$-functions of number fields and the B-SD conjecture, in his Corvallis paper, where he made predictions of $L(M,n)$, but only up to a rational number and only for $n$ critical. Then Beilinson extended these conjectures to predict $L(M,n)$ (or perhaps its leading term if there is a zero or pole) up to a rational number, and then Bloch and Kato went on to nail the rational number.



Nowadays though, many motives have $p$-adic $L$-functions (the toy examples being number fields and elliptic curves over $\mathbf{Q}$, perhaps the very examples that inspired Deligne), and these $p$-adic $L$-functions typically interpolate classical $L$-functions at critical values, but the relationship between the $p$-adic and classical $L$-function is (in my mind) a lot more tenuous away from these points (although I think I have seen some formulae for $p$-adic zeta functions at $s=0$ and $s=1$ that look similar to classical formulae involving arithmetic invariants of the number field).



So of course, my question is: is there a conjecture predicting the value of $L_p(M,n)$, for $n$ an integer, and $L_p$ the $p$-adic $L$-function of a motive? Of course that question barely makes sense, so here's a more concrete one: can one make a conjecture saying what $\zeta_p(n)$ should be (perhaps up to an element of $\mathbf{Q}^{\times}$) for an integer $n \geq 2$ and $\zeta_p(s)$ the $p$-adic $\zeta$-function? My understanding of Iwasawa theory is that it would only really tell you information about the places where $\zeta_p(s)$ vanishes, and not about actual values - Iwasawa theory is typically only concerned with the ideal generated by the function (as far as I know). Also, as far as I know, $p$-adic $L$-functions are not expected to have functional equations, so the fact that we understand $\zeta_p(s)$ for $s$ a negative integer does not, as far as I know, tell us anything about its values at positive integers.
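
For the record (added here as background, not part of the original question), the interpolation property alluded to above reads, in the Kubota-Leopoldt case with $\omega$ the Teichmüller character,
$$ L_p\bigl(1-n,\ \omega^{n}\bigr) = -\bigl(1-p^{\,n-1}\bigr)\,\frac{B_n}{n} \qquad (n \ge 1), $$
i.e. the classical value $\zeta(1-n)=-B_n/n$ with its Euler factor at $p$ removed; the question is whether anything comparably clean can be said at positive integers $n \ge 2$.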

Sunday, 14 July 2013

gr.group theory - Does a group have a unique pro-p topology?

Your question, in the finitely generated case, is sort of answered above.



In the infinitely generated case, however, this is a research question I have been looking at for the last year or so. I'm in the middle of redrafting a paper dealing with the abelian case.



If $G$ is any finitely-generated pro-$p$ group then, by a theorem of Serre, all finite-index subgroups are open in any pro-$p$ topology, so the answer here is yes. This is not too hard to see. A result of Nikolov and Segal (2006) shows the same holds for (topologically) finitely generated profinite groups.



There are two different notions of unique topology you could mean here. If $f:G\to H$ is an abstract isomorphism between two pro-$p$ groups, we have three possibilities:



(1) $f$ must be continuous.
(2) $f$ is not continuous, but $G,H$ are isomorphic as pro-$p$ groups.
(3) $G,H$ are not isomorphic as pro-$p$ groups.



Which case this is depends entirely on the algebraic structure of $G$.



The first case is equivalent to saying that all automorphisms of $G$ are continuous. As above, any finitely-generated pro-$p$ group (indeed, any finitely generated profinite group) has this property. However, a group with this property need not be finitely generated. By consideration of centralisers, which, as kernels of word maps, must be closed in any profinite topology, one can see that any (unrestricted Cartesian) product of finite centreless groups has this property. Moreover, considering centralisers also gives this result for branch groups.



If we assume the Generalised Continuum Hypothesis fails, we get uninteresting examples of non-isomorphic but abstractly isomorphic pro-$p$ groups - the (unrestricted Cartesian) products of $\aleph$ and $\beth$ copies of $C_p$ with $2^{\aleph}=2^{\beth}$, for example. It is more interesting to look for examples which do not depend on this. Hence it makes sense to look first at countably-based profinite groups - those which are the inverse limit of a countable collection of finite groups.



Tyler Lawson gives a good example above of this, but it is not too hard to see that these groups are still abstractly isomorphic.



More interestingly, if $G$ is a countably-based abelian pro-$p$ group with infinite exponent torsion group, then $G$ is abstractly isomorphic to a product of cyclic $p$-groups, and there are uncountably many different pro-$p$ topologies on $G$, which give rise to uncountably many isomorphism classes of pro-$p$ groups.



My paper showing this is mathematically complete but being redrafted. A version should appear on the arXiv in the next few days. In the meantime, I can e-mail a copy on request.

tides - What causes the antipodal bulge?

I am probably going to get slammed for this, because it violates everything we were taught about tidal forces, but the antipodal tide is caused by the centrifugal force arising from the Earth's revolution about the Earth-Moon barycenter, not by differential gravitational forces. While the Moon's gravity is weaker on the side of the Earth farthest from it, that force still points towards the Moon, not away from it.

There is a good explanation of this centrifugal-force account on NOAA's website here. I'll cite some pictures and pertinent text, but the site is a good read for anyone trying to understand the forces responsible for the Earth's tides.

Here is a diagram showing the earth and moon's movement around the system's barycenter:
(Figure 1 from the NOAA page, showing the earth and moon revolving about their common center-of-mass.)
And some of the pertinent text:

The center of revolution of this motion of the earth and moon around their common center-of-mass lies at a point approximately 1,068 miles beneath the earth's surface, on the side toward the moon, and along a line connecting the individual centers-of-mass of the earth and moon. (see G, Fig. 1) The center-of-mass of the earth describes an orbit (E1, E2, E3..) around the center-of-mass of the earth-moon system (G) just as the center-of-mass of the moon describes its own monthly orbit (M1, M2, M3..) around this same point.

  1. The Effect of Centrifugal Force. It is this little known aspect of the moon's orbital motion which is responsible for one of the two force components creating the tides. As the earth and moon whirl around this common center-of-mass, the centrifugal force produced is always directed away from the center of revolution. All points in or on the surface of the earth acting as a coherent body acquire this component of centrifugal force. And, since the center-of-mass of the earth is always on the opposite side of this common center of revolution from the position of the moon, the centrifugal force produced at any point in or on the earth will always be directed away from the moon. This fact is indicated by the common direction of the arrows (representing the centrifugal force Fc) at points A, C, and B in Fig. 1, and the thin arrows at these same points in Fig. 2.

And finally another diagram that the blockquote cites:

(Diagram from the NOAA page showing the gravitational and centrifugal force arrows.)

Yes, one can probably find lots and lots of quotations declaring that the antipodal tide is caused because the moon's gravitational force is much less on the far side than on the near side, but that doesn't make it true. And I believe NOAA ought to be a pretty authoritative source. If they can't get it right...
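
For readers who want to check the "1,068 miles" figure in the quoted NOAA text, here is a minimal Python sketch (my own illustration; the constants are rounded standard values):

# Locate the Earth-Moon barycenter and express its depth below the Earth's surface.
M_EARTH = 5.972e24      # kg
M_MOON  = 7.342e22      # kg
D_EM    = 384_400e3     # mean Earth-Moon distance, m
R_EARTH = 6_371e3       # mean Earth radius, m

# Distance of the barycenter from the Earth's center (two-body center of mass).
r_bary = D_EM * M_MOON / (M_EARTH + M_MOON)
depth_miles = (R_EARTH - r_bary) / 1609.34

print(f"barycenter is {r_bary / 1e3:.0f} km from Earth's center")
print(f"i.e. about {depth_miles:.0f} miles beneath the surface")

This comes out to roughly 1,060 miles, within about one percent of the NOAA figure; the small difference is just the rounded constants.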

Saturday, 13 July 2013

polynomials - Test if two curves intersect before finding roots

If you think about it, there does not seem to be a way around eliminating two variables:
$$ \exists\, t_1, t_2:\quad x_1(t_1)=x_2(t_2)\ \wedge\ y_1(t_1)=y_2(t_2) $$
I.e., the $x$ and $y$ coordinates coincide, but each point is obtained at a different moment in time (or again, in terms of mobile points, the two trajectories intersect, but the two mobile points do not necessarily collide.)



Over the reals, it's cylindrical algebraic decomposition that does this for you (explained, for instance, in the book by Basu-Pollack-Roy, which is freely available). Any decent computer algebra system should be able to solve this.
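
For polynomial parametrizations one can carry out the elimination with resultants; here is a minimal sympy sketch (an illustration of the elimination idea, not of the CAD algorithm mentioned above), testing whether the parabola $(t, t^2)$ meets the line $(s, 1-s)$:

from sympy import symbols, resultant, real_roots

t, s = symbols('t s')

# First curve: (x1(t), y1(t)) = (t, t**2); second curve: (x2(s), y2(s)) = (s, 1 - s).
fx = t - s            # x1(t) - x2(s)
fy = t**2 - (1 - s)   # y1(t) - y2(s)

# Eliminate s: the resultant vanishes at exactly those t for which some s solves both equations.
r = resultant(fx, fy, s)
print(r)                 # a polynomial in t only
print(real_roots(r))     # real t-values at which the trajectories cross

A nonzero resultant with no real roots would certify that the two curves do not intersect.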

ag.algebraic geometry - Does projectiveness descend along field extensions?

The entirety of sections 8--12 (apart from 10) of EGA IV is devoted to fleshing out in remarkably exhaustive and elegant generality the entire yoga of "spreading out and specialization" of which this question is but a special case. Highly recommended reading; it is used (implicitly, if not explicitly) all the time when people prove theorems in algebraic geometry by specialization. This includes proving results over $\mathbf{C}$ by "reduction to the case of positive characteristic or finite fields" (e.g., Mori, Deligne-Illusie) as well as construction of moduli spaces of stable curves by digging out subschemes of Hilbert schemes, etc.



Here is a nifty little exercise to test one's understanding of the EGA formalism: if $X$ and $Y$ are schemes of finite type over a field $k$ and if there is an extension field $K/k$ such that there is a $K$-morphism $f:X_K \rightarrow Y_K$ with any "reasonable" property $\mathbf{P}$ then there is such a morphism with $K/k$ a finite extension; here, "reasonable" can be lots of things: isomorphism, surjective, open immersion, closed immersion, finite flat of degree 42, a semistable curve fibration, smooth, proper and flat with geometric fibers having 12 irreducible components which intersect according to such-and-such configuration and dimensions, and so on. The point is that the initial $f$ is certainly not descended to a finite subextension of the initial $K/k$, and if you made the construction over such an extension and extended scalars back up to the original $K$ then it has absolutely nothing to do with the original $f$.



On the topic of specialization for morphisms, I can't resist mentioning a useful fact which is not a formal consequence of that general stuff: if $A$ and $B$ are abelian varieties over a field $k$ then there exists a finite (even separable) extension $k'/k$ such that (loosely speaking) "all homomorphisms from $A$ to $B$" are defined over $k'$. This means that if $K/k'$ is any extension field whatsoever, then every $K$-homomorphism $A_K \rightarrow B_K$ is defined over $k'$. (Quick proof: the locally finite type Hom-scheme has finitely generated group of geometric points, and is unramified by the functorial criterion, so it is étale since we are over a field.) There is nothing like this for general (even proper smooth) varieties; just think about automorphisms of projective space.

What is Event Horizon of a Black Hole?


What I mean to say is, how are the characteristics of event horizon related to Space-Time?




Formally, an event horizon is the boundary of a region of spacetime that is not in the causal past of future null infinity. In other words, it is the boundary of a region from which even idealized light rays cannot escape to infinity. Whenever the event horizon is smooth, it is also a null hypersurface - i.e., the direction perpendicular to it is a light ray.



Because the event horizon is defined in terms of the infinite future, the definition is very non-local, and one would need to know the entire future history to be sure where the event horizon is. As such, it is only one of a half-dozen different types of horizons used to study black holes.



Though the event horizon itself is a three-dimensional hypersurface in spacetime, it can also be viewed as an evolving two-dimensional membrane made of a viscous, electrically conducting fluid with finite temperature and entropy but zero thermal conductivity.
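
As a concrete anchor for these definitions (my own illustrative sketch, not part of the answer above): for the simplest, non-rotating, uncharged case the horizon sits at the Schwarzschild radius $r_s = 2GM/c^2$, which a few lines of Python can evaluate:

# Schwarzschild radius r_s = 2 G M / c^2 for a couple of masses.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon of a non-rotating, uncharged black hole."""
    return 2 * G * mass_kg / c**2

for name, m in [("one solar mass", M_SUN),
                ("Sagittarius A* (about 4.3 million solar masses)", 4.3e6 * M_SUN)]:
    print(f"{name}: r_s is about {schwarzschild_radius(m) / 1e3:.3g} km")

The solar-mass case comes out near 3 km, the galactic-center case near 1.3e7 km, consistent with the usual quoted values.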

Thursday, 11 July 2013

orbit - Determination of orbital elements for Trans-Neptunian Objects, how?

You've asked a big question, too big perhaps for a Q&A forum such as this. Your question is the sole subject of graduate level aerospace engineering classes, e.g., University of Colorado ASEN 5070, Introduction to Statistical Orbit Determination, and is the subject of multiple graduate level texts, e.g., Statistical Orbit Determination by Bob Schutz, Byron Tapley and George H. Born. To have a chance in that class, you'll need to already be well-versed in multivariate calculus, linear algebra, probability and statistics, numerical methods, and computer programming.



That said, you might want to look at Bernstein and Khushalani (2000), "Orbit fitting and uncertainties for Kuiper belt objects," The Astronomical Journal, 120.6:3323.



The preferred approach in statistical orbit determination is to collect data over multiple orbits. That's a luxury that is not possible with Trans-Neptunian Objects that have only been seen a handful of times, and only over a small arc of the multi-hundred year long orbits of those objects.



One thing Bernstein and Khushalani did to overcome this was to realize that TNOs are nearly inertial (Newton's first law) objects. Gravitation is but a small perturbation of inertial behavior at such distances. Another thing they did was to take advantage of the fact that for observations separated by a short span of time (e.g., a day or two), almost all of the apparent motion is due to the Earth rather than proper motion of the target object. This gives a good estimate of the distance to the target.
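
A minimal sketch of that distance estimate (my own illustration with made-up numbers, not code from the paper): if the object is effectively at rest over a couple of days, the Earth's own displacement of roughly $2\pi\,\mathrm{AU}/365$ per day produces an apparent angular shift from which the distance follows directly.

import math

# Toy version of the short-arc distance estimate: treat the object as motionless and
# attribute all of its apparent motion to the Earth's own displacement (reflex parallax).
EARTH_SPEED_AU_PER_DAY = 2 * math.pi / 365.25   # circular-orbit approximation

def distance_au(apparent_shift_arcsec, baseline_days=2.0):
    """Distance implied by an angular shift observed over a baseline of a few days."""
    baseline_au = EARTH_SPEED_AU_PER_DAY * baseline_days   # crude: ignores projection geometry
    shift_rad = apparent_shift_arcsec * math.pi / (180 * 3600)
    return baseline_au / shift_rad

# A Kuiper-belt-like object drifting ~170 arcsec over two nights:
print(f"{distance_au(170.0):.0f} AU")   # roughly 42 AU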



Their approach involves doing some of the regression in Cartesian space, with gravitation being a small perturbation, and then switching to orbital element space to complete the regression. Along the way, they worry about whether they have enough information to do the full orbital element space regression.

Wednesday, 10 July 2013

atmosphere - Can we hear something on Venus, Mars and Titan?

Just as an addition to the previous answer:



Some spacecraft that landed on other celestial objects carried microphones to record what can be heard.



For example, this video http://youtu.be/36ffV-CI3Mo contains a sound clip recorded by the Huygens probe during its descent onto Saturn's moon Titan.



As far as I remember there are similar recordings from Venus, made during the short time a probe can survive on that beautiful hell-hole :-)



So yes, there is definitely sound on other planets if they have sufficiently dense atmospheres.

Tuesday, 9 July 2013

nt.number theory - What's the status of the following relationship between Ramanujan's $tau$ function and the simple Lie algebras?

Qiaochu asked this in the comments to this question. Since this is really his question, not mine, I will make this one Community Wiki. In MR0522147, Dyson mentions the function $\tau(n)$ given by the generating function
$$ \sum_{n=1}^{\infty} \tau(n)\,x^n = x\prod_{m=1}^{\infty} (1 - x^m)^{24} = \eta(x)^{24}, $$
which is apparently of interest to the number theorists ($\eta$ is Dedekind's function). He mentions the following formula for $\tau$:
$$\tau(n) = \frac{1}{1!\,2!\,3!\,4!} \sum \prod_{1 \leq i < j \leq 5} (x_i - x_j)$$
where the sum ranges over $5$-tuples $(x_1,\dots,x_5)$ of integers satisfying $x_i \equiv i \bmod 5$, $\sum x_i = 0$, and $\sum x_i^2 = 10n$. Apparently, the $5$ and $10$ are there because this formula comes from some identity for $\eta(x)^{10}$. Dyson mentions that there are similar formulas coming from identities with $\eta(x)^d$ when $d$ is on the list $d = 3, 8, 10, 14, 15, 21, 24, 26, 28, 35, 36, \dots$. The list is exactly the dimensions of the simple Lie algebras, except for the number $26$, which doesn't have a good explanation. The explanation of the others is in I. G. Macdonald, Affine root systems and Dedekind's $\eta$-function, Invent. Math. 15 (1972), 91--143, MR0357528, and the reviewer at MathSciNet also mentions that the explanation for $d=26$ is lacking.
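
As a quick numerical check of the first display (my own sketch, not part of the question), one can expand the product far enough to read off $\tau(1),\dots,\tau(5) = 1, -24, 252, -1472, 4830$:

from sympy import symbols, expand

x = symbols('x')
N = 6  # compute tau(1), ..., tau(N-1)

# Expand x * prod_{m=1}^{N-1} (1 - x^m)^24; dropped factors (m >= N) are 1 + O(x^N)
# and so cannot affect the coefficients of x^1, ..., x^(N-1).
f = x
for m in range(1, N):
    f = expand(f * (1 - x**m)**24)

print([f.coeff(x, n) for n in range(1, N)])   # [1, -24, 252, -1472, 4830]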



So: in the last almost-40 years, has the $d=26$ case been explained?

gn.general topology - The difference between a sequential space and a space with countable tightness

Just a partial answer.



For $\beta \mathbb{N}$ (the set of all ultrafilters on $\mathbb{N}$ with the Stone topology) it is not hard to see that a sequence converges iff it is eventually constant. Hence any subset of $\beta \mathbb{N}$ is sequentially open -- and of course, $\beta \mathbb{N}$ is not discrete, so it cannot be sequential. Similarly for the ultrafilters on $\mathbb{R}$.



If I recall correctly, this 'trivial sequential convergence' holds in all extremally disconnected spaces -- this should be an exercise in the book 'Rings of continuous functions' by Gillman and Jerison (there is also a PDF/TeX-file with all exercise solutions freely available on the web somewhere).



Also, I could be wrong, but I think $\beta \mathbb{N}$ is not countably tight, since its remainder is not (there exist weak P-points). Maybe somebody else can confirm or reject.

ag.algebraic geometry - When do the global sections of a prescheme X over another S equal those of S?

Obligatory tautological answer: since $f$ includes the data of $\theta$, it's necessary and sufficient to require that $\theta(S)$ is injective (resp. surjective).



Since you're trying to extract information about $\theta(S)$, my guess is that you actually want conditions on $\varphi$, $S$, and/or $X$, which don't specifically refer to $\theta$.



Injectivity



Suppose $I$ is the kernel of $\theta(S)$. Then the morphism $X\to \operatorname{Spec}(O_S(S))$ factors through the closed subscheme $\operatorname{Spec}(O_S(S)/I)$. Since $X\to S$ is surjective, this closed subscheme must contain the image of $S$ (set-theoretically). The closure of the image of $S$ in $\operatorname{Spec}(O_S(S))$ is all of $\operatorname{Spec}(O_S(S))$, so $\operatorname{Spec}(O_S(S)/I)$ must be set-theoretically equal to $\operatorname{Spec}(O_S(S))$, so $I$ is contained in every prime of $O_S(S)$, so it is in the nilradical.




In particular, if $S$ is reduced, $\theta(S)$ is injective.




Note that we didn't actually need $\varphi$ surjective, just that the closure of the image is all of $S$. Since composing $X\to S$ with a nilpotent thickening of $S$ doesn't change $X$ or $\varphi$ at all, I imagine you can't get a better condition than this without saying something directly about $\theta$.



If $S$ is quasi-compact and its image in $\operatorname{Spec}(O_S(S))$ is contained in the complement of some basic open set $D(f)$, then $f\in O_S(S)$ vanishes at every point of $S$, so $f$ is nilpotent on any affine open subscheme of $S$. Since $S$ is quasi-compact, there is a single $n$ such that $f^n$ is identically zero on $S$. Since $f$ is nilpotent, $D(f)=D(f^n)\subseteq \operatorname{Spec}(O_S(S))$ is empty. So the image of $S$ is not contained in the complement of any non-empty open subset of $\operatorname{Spec}(O_S(S))$.



There must be a way to show this without assuming quasi-compactness of $S$, but I don't see it right now. Suppose $S\to \operatorname{Spec}(O_S(S))$ factors set-theoretically through a closed subset $Z$. Why must there be a closed subscheme structure on $Z$ so that $S$ factors scheme-theoretically through $Z$? A scheme-theoretic factorization of $S$ through $\operatorname{Spec}(O_S(S)/I)$ amounts to a factorization of the identity map $O_S(S)\to O_S(S)$ through the quotient $O_S(S)\to O_S(S)/I$, which cannot happen unless $I=0$.



Surjectivity



I can't think of a good condition to ensure surjectivity that isn't essentially tautological.



There's no condition you can put on $\varphi$ or the underlying topological space of $X$. If $S=\operatorname{Spec}(k)$ with $k$ a field, you could take $X$ to be a projective or an affine curve over $S$. These two curves are homeomorphic, yet you have a surjection in one case but not the other.



A necessary but insufficient condition for isomorphism is that $S$ have as many connected components as $X$, since connected components correspond to irreducible idempotents in the ring of regular functions.



Note that if $X$ and $S$ are arbitrary projective schemes over a field, then any morphism between them induces an isomorphism on global regular functions. Any Stein morphism $f\colon X\to S$ obviously makes $\theta(S)$ an isomorphism. Is there an example of a surjective non-Stein $f$ which makes $\theta(S)$ an isomorphism?

Monday, 8 July 2013

ag.algebraic geometry - Trigonal loci in Teichmueller spaces

The argument I gave in that thread extends to a pretty wide class of surfaces with various symmetries. It's the same argument, but there are some facts that are true for hyperelliptic curves that are a little more fussy for curves with other symmetries. The line of reasoning, I think, is this:



Let $G$ be a finite subgroup of the diffeomorphism group of a surface $\Sigma_g$. Let $N(G)$ be the normalizer of $G$ in $\operatorname{Diff}(\Sigma_g)$. So in most situations there's a short exact sequence $0 \to G \to \pi_0 N(G) \to \pi_0 \operatorname{Diff}(\Sigma_g / G) \to 0$ (Birman-Hilden), where we're thinking of $\Sigma_g /G$ as an orbifold. Further, there's a map $\pi_0 N(G) \to \pi_0 \operatorname{Diff}(\Sigma_g)$. Provided the image of this map is the normalizer of $G$ in $\pi_0 \operatorname{Diff}(\Sigma_g)$ (which according to Birman-Hilden is a pretty generic list of cases), then I think the argument goes through.



The Birman-Hilden reference. The case you're interested in is (I think?) when $\Sigma_g / G$ is an orbifold whose underlying topological space is a sphere.



This is just off the top of my head and I'm likely glossing over some important details. I'll be happy to try and clarify.



So in general the lift of your moduli space is disconnected and its components are indexed by the cosets of the group $\pi_0 N(G)$ in the mapping class group. $\pi_0 N(G)$ is the hyperelliptic mapping class group in the case that $G = \mathbb{Z}_2$ is generated by the hyperelliptic involution of your surface.

latex - Periods and commas in mathematical writing

Displayed formulas can serve two roles in a math paper: as abbreviations for text that would otherwise be unreadable, and as figures (or illustrations) that are referred to by the text but are not part of it grammatically. My opinion is that in the former case they should be punctuated, but in the latter they should not.



Here are some examples.



1) If $x$ and $y$ are the coordinates of a point on a circle of radius $r$ then



$x^2 + y^2 = r^2$.



2) The coordinates of a point of a circle of radius $r$ satisfy the following equation.



$x^2 + y^2 = r^2$



3) The following diagram commutes.



(diagram without any punctuation)



I think these examples demonstrate the necessity of distinguishing the two roles a displayed equation can play. As Simon already pointed out above, there is no reasonable place to put a punctuation mark in a commutative diagram, presumably because a commutative diagram can't be read aloud. On the other hand, it's difficult to view the sentence in the first example as complete without a period at the end of the equation.



I suggest the following rule of thumb: if the formula can be removed from the text without breaking the flow of a sentence, then it does not need to be punctuated. Otherwise, it should be punctuated as it would be if the symbols were expanded into words.




Many authors use a colon where I used a period in the second example and follow the equation with a period.



2') The coordinates of a point on a circle of radius $r$ satisfy the following equation:



$x^2 + y^2 = r^2$.



I don't consider this incorrect, but I do consider it a completely different sentence from 2).

Are mass extinction events more likely during meteor showers / passing through comet debris?

This recent paper by Napier et al. indeed concludes that centaur comets break up into many pieces large enough to cause mass extinction events on Earth. Since objects orbit in the same way regardless of mass, I suppose that dangerously large comet debris is more common in meteor streams, and that major impacts are more common during meteor showers.



Small meteoroidal fragments do have their trajectories changed by solar heating and the solar wind, but since Earth-crossing centaur fragments seem to be cleared out within only thousands of years, there should be no major difference between the orbits of the large and the tiny objects.



It also seems as if mass extinctions might be caused by meteor showers depositing dust particles in the atmosphere, as suggested by Klekociuk et al.

Saturday, 6 July 2013

gn.general topology - Cryptomorphisms

The phenomenon that I think you have in mind has a name: cryptomorphism. I learned the name from the writings of Gian-Carlo Rota; Rota's favorite example was indeed matroids. Gerald Edgar informs me that the name is due to Garrett Birkhoff.



I think modern mathematics is replete with cryptomorphisms. In my class today, I presented the "Omnibus Hensel's Lemma". Part a) was: the following five conditions on a valued field are all equivalent. Part b) was: complete fields satisfy these equivalent properties. There are lots more equivalent conditions than the five I listed: see



An unfamiliar (to me) form of Hensel's Lemma



and especially Franz Lemmermeyer's answer for further characterizations.



I would say that the existence of cryptomorphisms is a sign of the richness and naturality of a mathematical concept -- it means that it has an existence which is independent of any particular way of thinking about it -- but that on the other hand the existence of not obviously equivalent cryptomorphisms tends to make things more complicated, not easier: you have to learn several different languages at once. For instance, the origin of the question I cited above was the fact that in Tuesday's class I st*p*dly chose the wrong form of Hensel's Lemma to use to try to deduce yet another version of Hensel's Lemma: it didn't work! Since we are finite, temporal beings, we often settle for learning only some of the languages, and this can make it harder for us to understand each other and also steer us away from problems that are more naturally phrased and attacked via the languages in which we are not fluent. Some further examples:



I think that the first (i.e., most elementary) serious instance of cryptomorphism is the determinant. Even the Laplace expansion definition of the determinant gives you something like $n$ double factorial different ways to compute it; the fact that these different computations are not obviously equivalent is certainly a source of consternation for linear algebra students. To say nothing of the various different ways we want students to think about determinants. It is "just" the signed change of volume of a linear transformation in Euclidean space (and the determinant over a general commutative ring can be reduced to this case). And it is "just" the induced scaling factor on the top exterior power. And it is "just" the unique scalar $\alpha(A)$ which makes the adjugate equation $A\cdot\operatorname{adj}(A) = \alpha(A) I_n$ hold. And so forth. You have to be fairly mathematically sophisticated to understand all these things.
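
The adjugate characterization, at least, is easy to check by machine; here is a small sympy sketch (my own illustration, not part of the original answer) verifying $A\cdot\operatorname{adj}(A) = \det(A)\,I_n$ for a concrete matrix:

from sympy import Matrix, eye

# The adjugate equation A * adj(A) = alpha(A) * I_n pins down alpha(A) = det(A).
A = Matrix([[2, -1, 0],
            [1,  3, 4],
            [0,  5, 1]])

lhs = A * A.adjugate()
rhs = A.det() * eye(3)
print(lhs == rhs)   # True
print(A.det())      # the scalar alpha(A)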



Other examples:



Nets versus filters for convergence in topological spaces. Most standard texts choose one and briefly allude to the other. As G. Laison has pointed out, this is a disservice to students: if you want to do functional analysis (or read works by American mathematicians), you had better know about nets. If you want to do topological algebra and/or logic (or read works by European mathematicians), you had better know about filters.



There are (at least) three axiomatizations of the concept of uniform space: (i) entourages, (ii) uniform covers, (iii) families of pseudometrics. One could develop the full theory using just one, but at various points all three have their advantages. Is there anyone who doesn't wish that there were just one definition that would work equally well in all cases?

impact - Can Mercury hit Earth or Mars in the next 5 billion years?

Yes. After a close encounter with other planets, almost anything can happen: Mercury could be split into smaller bodies by tidal forces, be captured as one or more moons of one or more planets, or undergo a sequence of captures by and escapes from planets; all or some of the fragments could be converted into a ring, collide with other moons, fall into the Sun, or be ejected from the solar system.

soft question - Examples of mathematics motivated by technological considerations

One of the reasons we don't all speak German today is the combination of elementary group theory, and universal Turing machines.



Per a request below, I'll elaborate: The British, French and Poles had a joint intelligence effort against Germany during the 30s (at that time, breaking ciphers was considered an upper-class, liberal-arts hobby). The French, using traditional cloak-and-dagger techniques, managed to get their hands on the plans of the German cipher machine (Enigma) - but did not know what to do with them. The Polish cipher bureau, however, had an amazing idea - they let math students play with it. From a group-theoretic point of view, the machine worked by taking products in $S_{\text{letters}}$. However, the machine's designers did not notice that the products were all of the form $A^{-1} R A$ (where $A$ is constantly changing but $R$ is not), which enabled the Poles to discover the cycle structure of $R$. You can look here for more details.
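
The group-theoretic point - that a conjugate $A^{-1}RA$ has the same cycle structure as $R$, whatever $A$ is - is easy to see in a few lines of Python (my own illustration, using sympy's permutation class):

from sympy.combinatorics import Permutation

# R plays the role of the rotor-induced permutation, A of the unknown, varying setting.
R = Permutation([1, 2, 0, 4, 3, 5])   # one 3-cycle, one 2-cycle, one fixed point
A = Permutation([3, 0, 5, 1, 2, 4])   # any permutation at all

conjugate = A**-1 * R * A
print(R.cycle_structure)          # e.g. {1: 1, 2: 1, 3: 1}
print(conjugate.cycle_structure)  # identical: conjugation cannot hide the cycle type of R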



When the Germans invaded Poland, the Poles let the French and the British in on the secret, and the British took over the operation. The Poles had used machines (bombes) from the very start to break the codes, but as the Germans improved their coding machines, the need arose to reconfigure the breaking machines as they ran. Luckily for everyone involved, the head cryptanalyst on the British side was Turing, who built such a machine: Colossus.



Finally, a funny bit which I heard at Bletchley Park last year: "everybody" knows that at the end of the war, Menzies and Churchill ordered the Colossi destroyed and the plans burnt, and yet they were rebuilt a few years after they were declassified - how come?
The easy part is that some of the engineers saw it coming and took backup plans home. However, it is much funnier to discover that the Post Office continued to manufacture the parts until the 70s, and they were still in stock.