Wednesday, 31 October 2012

homotopy theory - A question about fibrations of simplicial sets and their fibers

I've got to agree with you; it's not the best-written proof in HTT. Let me go through it glacially slowly (for my own sake!) to see if I can write something that will help clarify the role of $X$.



Let me write $Map(U,V)$ instead of $V^U$. I find it easier to parse on the internet.



First, what is our $X$? It's the simplicial set of sections of $S\times_T\Delta^1\to\Delta^1$, that is, it is the fiber product



$Map(\Delta^1,S\times_T\Delta^1)\times_{Map(\Delta^1,\Delta^1)}\Delta^0$, where $\Delta^0$ maps in by inclusion of $id$. So (since $Map(\Delta^1,-)$ is a right adjoint) we get:
$$X=Map(\Delta^1,S\times_T\Delta^1)\times_{Map(\Delta^1,\Delta^1)}\Delta^0=Map(\Delta^1,S)\times_{Map(\Delta^1,T)}Map(\Delta^1,\Delta^1)\times_{Map(\Delta^1,\Delta^1)}\Delta^0$$
$$=Map(\Delta^1,S)\times_{Map(\Delta^1,T)}\Delta^0$$
where $\Delta^0$ maps in by inclusion of $f$.



Now we've got our map
$$q:Map(\Delta^1,S)\to Map(\{1\},S)\times_{Map(\{1\},T)}Map(\Delta^1,T),$$
whose fibers over $f$ we seek. That is, we just include
$$S_{t'}=Map(\{1\},S)\times_{Map(\{1\},T)}\Delta^0\to Map(\{1\},S)\times_{Map(\{1\},T)}Map(\Delta^1,T)$$
(which is itself the pullback of $f:\Delta^0\to Map(\Delta^1,T)$ along the projection, of course) and we pull back to get $q'$.



So the result is a pullback of a pullback. (If I knew how, I'd draw the two pullback squares here.) The composite pullback is the pullback of $f:\Delta^0\to Map(\Delta^1,T)$ along $Map(\Delta^1,S)\to Map(\Delta^1,T)$. But this is what we called $X$. So our result is a map $q':X\to S_{t'}$, and its fiber over any vertex of $S_{t'}$ must coincide with the fiber over the corresponding vertex of $Map(\{1\},S)\times_{Map(\{1\},T)}Map(\Delta^1,T)$, since that will be a pullback of a pullback as well.



Edit (Harry): I typed up the final version of the diagram. If the letters aren't explained, you can deduce what they are just by either looking at Clark's argument or just tracing the pullbacks. Every square is a pullback, so everything is very easy to deal with.
$$ \matrix{
X&\cong&S^{\Delta^1}_f &\to &Y^{\Delta^1}&\to& S^{\Delta^1}&
\cr &\searrow&\downarrow &Pb &\downarrow&Pb&\downarrow
\cr L_f&\cong &S_{t'} & \to &L'&\to& L & \to & S^{\{1\}} &
\cr &&\downarrow &Pb&\downarrow&Pb&\downarrow&Pb&\downarrow p
\cr &&\Delta^{0} & \to &(\Delta^1)^{\Delta^1} &\to& T^{\Delta^1} & \to & T^{\{1\}}
\cr &&&id&&(f)^{\Delta^1}&&d_1} $$

What are these faint streaks on the moon?

I would rather call the streaks you are referring to ridges.



The surface of the Moon is of course not a perfect sphere, and in the same way that the Earth has ups and downs, there are ridges on the Moon too.



You say that most of them are diagonal (top-left to bottom-right), but in the right-hand corner of the image there are vertical ones. So there is no specific orientation for these "streaks".



On the other hand, if there were streaks around a crater such that the concave part of the streaks pointed towards the crater, they would be due to shock waves travelling outwards from the point of impact.

Tuesday, 30 October 2012

st.statistics - Linear Regression Confidence Interval

Just to recall some basic stuff: Suppose $Y_i = \alpha + \beta x_i + \varepsilon_i$ for $i=1,\dots,n$ and $\varepsilon_i \sim N(0,\sigma^2)$ and these are independent. We will observe the $x$'s and $y$'s but not $\alpha$, $\beta$, or $\varepsilon_i$. The $Y$'s are random only because the $\varepsilon$'s are random. These are weaker assumptions than that we have $(x,y)$ pairs that are jointly normally distributed.



Let
$$
X = \begin{bmatrix}
1 & x_1 \\ \vdots & \vdots \\ 1 & x_n
\end{bmatrix}
$$
be the "design matrix" (so called because if the experimenter can choose the $x$ values, then this is how the experiment is designed).



Then the least-squares estimates of $alpha$ and $beta$ are given by
$$
\begin{bmatrix} \hat\alpha \\ \hat\beta \end{bmatrix} = (X^T X)^{-1}X^T Y
$$
and therefore the probability distribution of the least-squares estimators is given by
$$
\begin{bmatrix} \hat\alpha \\ \hat\beta \end{bmatrix} \sim N\left( \begin{bmatrix} \alpha \\ \beta \end{bmatrix},\ \sigma^2 (X^T X)^{-1} \right)
$$
(the variance is thus of course a $2\times 2$ positive-definite matrix). The predicted $y$-value for a given $x$ value is therefore $\hat\alpha + \hat\beta x$, and this has a probability distribution given by
$$
\hat y = \hat\alpha + \hat\beta x = \begin{bmatrix}1 & x\end{bmatrix} \begin{bmatrix} \hat\alpha \\ \hat\beta \end{bmatrix} \sim N\left( \begin{bmatrix}1 & x\end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix},\ \sigma^2 \begin{bmatrix} 1 & x \end{bmatrix} (X^T X)^{-1} \begin{bmatrix} 1 \\ x \end{bmatrix} \right)
$$
$$
= N\left( \alpha + \beta x,\ \frac{\sigma^2}{n}\frac{\sum_i (x_i - x)^2}{\sum_i (x_i - \overline{x})^2} \right).
$$
(OK, check my algebra here; it's trivial but laborious.)
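For the record, here is the routine check being alluded to (nothing beyond inverting the $2\times 2$ matrix):
$$
X^T X = \begin{bmatrix} n & \sum_i x_i \\ \sum_i x_i & \sum_i x_i^2 \end{bmatrix},
\qquad
\det(X^T X) = n\sum_i x_i^2 - \Big(\sum_i x_i\Big)^2 = n\sum_i (x_i - \overline{x})^2,
$$
so that
$$
\begin{bmatrix} 1 & x \end{bmatrix} (X^T X)^{-1} \begin{bmatrix} 1 \\ x \end{bmatrix}
= \frac{\sum_i x_i^2 - 2x\sum_i x_i + n x^2}{n\sum_i (x_i - \overline{x})^2}
= \frac{1}{n}\,\frac{\sum_i (x_i - x)^2}{\sum_i (x_i - \overline{x})^2}.
$$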



There's a simple geometric intuition behind the dependence of the variance on $x$ and in particular the fact that the variance is smallest when $x=\overline{x}$, so think about that too.



Now $\sigma^2$ must be estimated based on the data. The errors $y_i - (\alpha + \beta x_i)$ are unobservable, but the residuals $y_i - (\hat\alpha + \hat\beta x_i)$ (i.e. the estimated errors) are the components of the random vector
$$
\hat\varepsilon = (I - H)Y = (I - X(X^T X)^{-1} X^T)Y.
$$
("H" stands for "hat", for reasons that should be apparent.) It is easy to see that the $ntimes n$ hat matirx $H = X(X^T X)^{-1} X^T$ is the matrix of the orthogonal projection of rank $2$ onto the 2-dimensional column space of $X$. And $I - H$ is the rank-$(n-2)$ projection onto the orthogonal complement of that space. Diagonalized, this matrix just has $n-2$ instances of 1 on the diagonal and 0 in the other two positions. Therefore
$$
\frac{\hat\sigma^2}{\sigma^2} = \frac{\| \hat\varepsilon \|^2}{\sigma^2}
$$
is distributed like a sum of squares of $n - 2$ independent $N(0,1)$ random variables. It therefore has a chi-square distribution with $n-2$ degrees of freedom.



Finally, we need this: $\hat\varepsilon$ and $\begin{bmatrix}\hat\alpha \\ \hat\beta \end{bmatrix}$ are probabilistically independent. This is true because both are linear transformations of the same vector of independent identically distributed normal random variables and their covariance vanishes:
$$
\operatorname{cov}\left(\hat\varepsilon,\ \begin{bmatrix}\hat\alpha \\ \hat\beta \end{bmatrix} \right) = \operatorname{cov}\left( (I - H)Y ,\ (X^T X)^{-1}X^T Y \right)
$$
$$
= (I - H) \operatorname{cov}(Y,Y)\, X(X^T X)^{-1} = \sigma^2 (I - H) X(X^T X)^{-1}
$$
and this is the $n\times 2$ zero matrix, by definition of $H$ (since $(I-H)X = X - X = 0$).



Now all our lemmas are in place and we can draw some conclusions:



Firstly
$$
\frac{\hat y - (\alpha + \beta x)}{\sqrt{ \frac{\sigma^2}{n}\frac{\sum_i (x_i - x)^2}{\sum_i (x_i - \overline{x})^2} }} \sim N(0,1).
$$



Hence if $\sigma$ were miraculously known, we could say that
$$
\hat y \pm A \sqrt{ \frac{\sigma^2}{n}\frac{\sum_i (x_i - x)^2}{\sum_i (x_i - \overline{x})^2} }
$$
are the endpoints of a 90% confidence interval for $\alpha + \beta x$, where $\pm A$ are the endpoints of the interval containing 90% of the area under the bell curve.



But $\sigma$ is not known. Since $\hat\sigma^2$ is independent of the random variable in the numerator and has a chi-square distribution with $n-2$ degrees of freedom, we can put $\hat\sigma$ in place of $\sigma$ and, instead of the normal distribution, use the Student's t-distribution with $n-2$ degrees of freedom.



That's the conventional frequentist confidence interval.



For the prediction interval, just remember that the new value of $Y$ is independent of those we used above, so the variance of the difference between that and the predicted value is $\sigma^2$ plus the variance of the predicted value, found above.
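To make the recipe concrete, here is a minimal numerical sketch of both intervals written straight from the formulas above (the data are made up for illustration; 90% level as in the text; the chi-square-over-degrees-of-freedom estimate of $\sigma^2$ is the usual one that pairs with the t quantile):

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # made-up data
    y = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 6.2])
    n = len(x)

    X = np.column_stack([np.ones(n), x])            # the design matrix
    a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]

    resid = y - (a_hat + b_hat * x)
    s2 = resid @ resid / (n - 2)                    # estimate of sigma^2, per degree of freedom

    x0 = 3.5                                        # point at which we predict
    y0 = a_hat + b_hat * x0
    # the variance factor (1/n) * sum (x_i - x0)^2 / sum (x_i - xbar)^2 derived above
    v = np.sum((x - x0) ** 2) / (n * np.sum((x - x.mean()) ** 2))

    t = stats.t.ppf(0.95, df=n - 2)                 # two-sided 90% interval
    conf = (y0 - t * np.sqrt(s2 * v), y0 + t * np.sqrt(s2 * v))              # for alpha + beta*x0
    pred = (y0 - t * np.sqrt(s2 * (1 + v)), y0 + t * np.sqrt(s2 * (1 + v)))  # for a new Y at x0
    print(conf, pred)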

co.combinatorics - How to compute the rook polynomial of a Ferrers board?

Given a Ferrers board of shape $(b_1,\ldots,b_m)$, we define $r_k$ as the number of ways to place $k$ non-attacking rooks (as in chess). In section 2.4 of Stanley's Enumerative Combinatorics (vol. 1) the identity
$$\sum_k r_k (x)_{m-k} = \prod_i (x+s_i)$$
is shown, where $s_i = b_i-i+1$, but I don't know if I can invert this formula or make an efficient algorithm to compute the $r_k$'s.



If this isn't possible, I would be satisfied if I could compute them efficiently for the following shapes:



$(2,2,4,4,\ldots,2n-2,2n-2,2n)$



$(2,2,4,4,\ldots,2n,2n)$



$(1,1,3,3,\ldots,2n-1,2n-1,2n+1)$
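For what it's worth, the identity above does determine the $r_k$: the falling factorials $(x)_j$ form a basis of the polynomial ring, so one can expand $\prod_i (x+s_i)$ and peel off falling factorials from the top degree down. A brute-force sketch of that inversion (my own code, using sympy; not an efficient algorithm):

    from sympy import symbols, expand, ff, Poly, prod

    def rook_numbers(b):
        """b = (b_1, ..., b_m), a Ferrers board; returns [r_0, r_1, ..., r_m]."""
        x = symbols('x')
        m = len(b)
        s = [b[i] - (i + 1) + 1 for i in range(m)]       # s_i = b_i - i + 1, with i counted from 1
        rhs = expand(prod(x + si for si in s))
        r = []
        for k in range(m + 1):                           # the coefficient of (x)_{m-k} is r_k
            deg = m - k
            c = Poly(rhs, x).coeff_monomial(x ** deg)
            r.append(c)
            rhs = expand(rhs - c * ff(x, deg))           # ff(x, j) = x(x-1)...(x-j+1)
        return r

    print(rook_numbers((1, 2, 3)))   # staircase board: [1, 6, 7, 1]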

Monday, 29 October 2012

nt.number theory - optimizing Frobenius instance solutions

I was reading some fascicles of Knuth's volume 4 on combinatorial enumeration. It seems that he was using Binary Decision Diagrams (BDDs) and variations to quickly calculate similar numbers. Perhaps this will help.



Also, your description suggests to me that solutions will exist, that they can be found greedily, and that optimizing them may gain very little for the effort involved. For example, to get a representation involving many of the stamp values, I would start by forming partial sums of the stamp values in increasing order until I came close to the target or ran out of values. Then I would use standard methods to represent the difference, and voila, I have made the target using the most stamp values possible. If you want a better answer than this, provide a couple of explicit examples to communicate the feel of the problem (e.g. "using stamps in prime values from 23 to 199 cents, make 2010 cents in postage without using more than two of any single denomination, while using as few of the stamps over 100 cents as possible") .



There also were some recent papers in arxiv.org that talked about algorithms used
to solve the postage stamp problem with large stamp values. It may be that some dual exists where you can use large numbers of small stamp values to accomplish your goal.



I hope this "soft answer" jogs some neurons into something that will work for you.



Gerhard "Ask Me About System Design" Paseman, 2010.01.15

Sunday, 28 October 2012

the sun - Can the Sun become a big ball of gold atoms?

Carl Sagan answers this in the Cosmos series. Here is an excerpt.




The matter in the known Universe is made up of 74% Hydrogen. In most
of the stars we see, hydrogen nuclei are being jammed together to form
helium nuclei. Every time a nucleus of helium is made, a photon of
light is generated. This is why the stars shine. Helium accounts for
24% of matter in known Universe. In fact, helium was detected on the
sun before it was ever found on the Earth. These two elements have
accounted for 98% of matter. Might the other chemical elements have
somehow evolved from hydrogen and helium?



Three units, put together in different patterns make, essentially,
everything – “Neutron, Proton and Electron”. If you’re an atom and you
have just one proton you’re hydrogen. Two protons, helium. And so on.
Protons have positive electrical charges. But since like charges repel
each other, why does the nucleus hold together? Why doesn't the
electrical repulsion of the protons make the nucleus fly to pieces?
Because there’s another force in nature. Not electricity, not gravity.
The nuclear force!! We can think of it as short-range hooks which
start working when protons or neutrons are brought very close
together. The nuclear force can overcome the electrical repulsion of
the protons.



A lump of two protons and two neutrons is the nucleus of a helium atom
and is very stable. Three helium nuclei, stuck together by nuclear
forces makes carbon. Four helium nuclei makes oxygen. There’s no
difference between four helium nuclei stuck together by nuclear forces
and the oxygen nucleus. They’re the same thing.



How easy is that to fuse nuclei? To avoid the
electrical repulsion protons and neutrons must be brought very close
together so the hooks which represent nuclear forces are engaged. This
happens only at very high temperatures, where particles move so fast
that there’s no time for electrical repulsion to act. Temperatures of
tens of millions of degrees. Such high temperatures are common in
nature. Where? In the insides of the stars. Atoms are made in the
insides of stars.



It is possible to make atoms with up to 26 protons [iron] in a star.
Above that, we need a supernova to create atoms with 30 protons, 40
protons, 50 protons or even 60 protons. Nature prefers ‘even’ numbers
for stability. But ‘gold’ is an odd-numbered atom. It has 79 protons.
It needs more than a supernova for its creation. It needs two neutron
stars to collide directly. This collision alone can favor the creation
of stable atoms of gold.



Since neutron star collisions are also suggested as the origin of
short duration gamma-ray bursts, it is possible that you already own a
souvenir from one of the most powerful explosions in the universe.



Except for hydrogen and helium every atom in the sun and the Earth was
synthesized in other stars. The silicon in the rocks, the oxygen in
the air, the carbon in our DNA, the gold in our banks, the uranium in
our arsenals were all made thousands of light-years away and billions
of years ago. Our planet, our society and we ourselves are built of
“star stuff“.


How do we know the big bang expanded space and not the other way around?

Whenever we hear an explanation of the big bang, it is always phrased in such a way that it was an explosion which used some kind of pressure to expand the universe outward.



I wonder, however, whether it would be possible that our universe bubble started as a point that wrapped around the singularity and contained it. Then, for some reason, this bubble started to expand, which caused the energy in the singularity to fill the newly created empty space.



Could it be that this bubble, which is now our universe, is still expanding, and the universe is still trying to fill the space, which is why we see dark energy?



Could we even know, or would either way look exactly the same to us?

Saturday, 27 October 2012

cosmology - Beta profile fit of Virgo cluster gas?

I'm looking for the parameters $r_c$, $\rho_0$ and $\beta$ in a $\beta$-profile fit of the Virgo cluster's ICM density. I just can't find a reference for it, unlike for Coma (Abell 1656), where it's given in Mohr et al. 1999.



Beta profile:



$$\rho(r) = \rho_0 \left[ 1+ \left(\frac{r}{r_c}\right)^2 \right]^{-\frac{3\beta}{2}}$$
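For concreteness, the profile as a trivial bit of code; the three parameter values are exactly what I'm missing:

    def beta_profile(r, rho_0, r_c, beta):
        # ICM density at radius r for the standard beta model above
        return rho_0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)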

cv.complex variables - Bessel function in polar coordinates

I want to write the Bessel function of the first kind in polar coordinates



$J_\alpha(z)=|J_\alpha(z)|e^{i\varphi_\alpha(z)}$



Is anything known about $\varphi_\alpha(z)$?



In particular, I'm interested in proving that



$\varphi_\alpha(z)\neq \varphi_{\alpha+1}(z)$



for all non-real $z$, where equality is taken modulo $\pi$.
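A quick numerical experiment (not a proof, just a way to poke at the claim with mpmath; the order and the sample points below are arbitrary choices of mine):

    import cmath
    from mpmath import besselj, mpc

    def phase_mod_pi(alpha, z):
        # arg J_alpha(z), reduced to [0, pi)
        return cmath.phase(complex(besselj(alpha, z))) % cmath.pi

    alpha = 0.5                                   # arbitrary sample order
    for z in [mpc(1, 1), mpc(-2, 0.3), mpc(0.1, -4)]:
        d = (phase_mod_pi(alpha, z) - phase_mod_pi(alpha + 1, z)) % cmath.pi
        print(z, d)   # a value away from 0 and pi is consistent with the claim at this z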



Thank you,



Afonso

Thursday, 25 October 2012

ag.algebraic geometry - What's up with the Grothendieck circle?

I was just going to check something in FGA and I didn't have access to my pdf-copy, so I did what I normally do when in such a circumstance: surf to the Grothendieck circle's webpage.



And what did I find there? All mathematical texts written by Grothendieck himself were removed "per his request".



Does anyone know what this is about and/or what's going on here?

Wednesday, 24 October 2012

nt.number theory - Generalized quadratic Gauss sums

I was wondering whether anyone knows how to approach the following two
generalizations of the quadratic Gauss sum:



Given integers r,s with gcd(r,s)=1 and integers a,b,N



$F(r,s,N,a,b) = \sum_{w = 0}^{rsa}(-1)^{b w}\left(\sin\frac{\pi w}{s}\right) \exp\left(\pi i w^2\frac{N}{2 rs}\right) $



$G(r,s,N,a,b) = \sum_{w = 0}^{rsa}(-1)^{b w}\left(\sin\frac{\pi w}{r}\right)\left(\sin\frac{\pi w}{s}\right) \exp\left(\pi i w^2\frac{N}{2 rs}\right) $



Note that removing the sine terms and the sign, setting a = 2, N = 4, r = 1 and s = prime gives the classical quadratic Gauss sum.



Some experimentation suggests that



$F(r,s,N,a,b) = 0$ for all integers b,N, r,s if a is even and (r,s) =1 and



$G(r,s,N,a,b) = 0$ for all a,b,N and r,s with (r,s) =1
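This kind of experimentation is easy to reproduce numerically (straightforward Python, nothing clever; the sample parameter values are arbitrary):

    import cmath, math

    def F(r, s, N, a, b):
        return sum((-1) ** (b * w) * math.sin(math.pi * w / s)
                   * cmath.exp(1j * math.pi * w * w * N / (2 * r * s))
                   for w in range(r * s * a + 1))

    def G(r, s, N, a, b):
        return sum((-1) ** (b * w) * math.sin(math.pi * w / r) * math.sin(math.pi * w / s)
                   * cmath.exp(1j * math.pi * w * w * N / (2 * r * s))
                   for w in range(r * s * a + 1))

    print(abs(F(3, 4, 5, 2, 1)))   # should be ~0 if the first observation holds (a even, gcd(r,s)=1)
    print(abs(G(3, 4, 5, 3, 1)))   # should be ~0 if the second observation holds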



Is there a good reason for these sums to vanish? Or a clean proof/reference?



Is it possible to evaluate F in the case a = 1? It seems to be non-zero then.



I tried reducing to the original Gauss sum by completing the square but
this seems to get quite ugly.



More generally, do such Gauss-like sums have a more natural generalization
that turns up somewhere?



Thanks

nt.number theory - weight 4 eigenforms with rational coefficients---is it reasonable to expect they all come from Calabi-Yaus?

I am not that sure that your question can be answered positively, but the following is merely speculation, so you should not pin me down on it. The basic idea is that there might be "more" weight 4 forms than rigid CY 3-folds.



It seems (but Kevin probably knows this better than me) that it is still an open question whether there are finitely or infinitely many weight 4 eigenforms up to twisting.



Suppose there were infinitely many weight 4 eigenforms up to twisting. To realize every weight 4 eigenform we would then need infinitely many $\overline{\mathbb{Q}}$-isomorphism classes of rigid CY 3-folds defined over $\mathbb{Q}$. All Hodge numbers of a rigid CY 3-fold are a priori fixed, except for $h^{1,1}$ and $h^{2,2}$, which coincide. The Euler characteristic of a rigid CY 3-fold is $2h^{1,1}$ and rigid CY 3-folds do not admit deformations, hence in order to realize every weight 4 form we either find a Hodge diamond $D$ such that



$\{ X \mid X$ smooth projective complex variety with Hodge diamond $D\}$/deformations



is infinite, or the absolute value of the Euler characteristic of CY 3-folds can be arbitrarily large. The first conclusion would be quite remarkable; the second would solve an open problem (as far as I know).



Suppose now there were only finitely many weight 4 eigenforms up to twisting.
If you want to avoid the above-mentioned problems, you need every eigenform $f$ to be realized by a CY 3-fold $Y_f$ admitting an involution, so that a twist of $f$ is realized by a twist of $Y_f$.



Still it is not clear whether there are enough rigid CYs to realize every eigenform. Some computational evidence can be found in the book of Christian Meyer, Modular Calabi-Yau Threefolds. He realizes close to 100 eigenforms (up to twisting). The corresponding list at http://www.fields.utoronto.ca/publications/supplements/weight4.pdf
contains many more forms. The smallest level that he could not realize is 7.



If you allow $h^{2,1}$ to be nonzero, i.e., you allow the motive of the form to be a factor of $H^3$, or if you are happy to work with quasi-projective varieties $Y$ whose completions are CY 3-folds, then you are in a much better position.

Tuesday, 23 October 2012

ag.algebraic geometry - Adjunction for underlying reduced subschemes

Dear Bryden,



Hopefully I have things straight, and there is a general formula $i^!\omega_X = \omega_{X_{red}}$. One then has the functorial isomorphism (of sheaves on $X$)
$RHom_{\mathcal O_{X_{red}}}({\mathcal F},\omega_{X_{red}}) = RHom_{\mathcal O_X}(i_*{\mathcal F}, \omega_X),$
for a coherent sheaf $\mathcal F$ on $X_{red}$. (Normally we would have to apply
an $Ri_*$ to the source of this isomorphism, to put the $RHom$ sheaves on the
same space, and would have to have an $Ri_*$ in the formula on the RHS. But
$i_*$ is exact, and in fact just identifies sheaves on $X_{red}$ with sheaves
on $X$ via the identification of their underlying topological spaces.)



Now $RHom_{\mathcal O_X}(i_*{\mathcal F},\omega_X) = Hom_{\mathcal O_X}(i_*{\mathcal F}, {\mathcal J}^{\bullet})$,
where ${\mathcal J}^{\bullet}$ is an injective resolution of $\omega_X$, which
in turn equals $Hom_{\mathcal O_{X_{red}}}(\mathcal F,{\mathcal J}^{\bullet}[\mathcal I]),$
where $\mathcal I$ is the ideal sheaf of $X_{red}$ in $X$.



Finally, this last complex can be identified with
$RHom_{\mathcal O_{X_{red}}}(\mathcal F, RHom_{\mathcal O_X}(\mathcal O_{X_{red}},
\omega_X)).$



So we get the formula
$\omega_{X_{red}} = RHom_{\mathcal O_X}(\mathcal O_{X_{red}}, \omega_X).$
(And the derivation shows that this should be valid for any closed immersion,
provided one is in a context where the dualizing complex formalism is satisfied,
except that probably there should be some shifts in dimension in general,
because the dualizing complex probably coincides with the dualizing sheaf placed
not in degree 0, but in degree $-\dim X$. However, in our case the dimensions
of $X$ and $X_{red}$ coincide, so this shift can be ignored.)



Note that, as this formula shows, $\omega_{X_{red}}$ could be a complex, not
just a sheaf. This is reasonable, I guess; in general, even if $X$ is CM,
this needn't imply that $X_{red}$ is (I imagine).



If in fact $X_{red}$ is CM, then I guess we find just one non-zero term in the formula
for $\omega_{X_{red}},$ and so have $\omega_{X_{red}} = \omega_X[\mathcal I].$



With a bit of luck, the above is not bogus, and answers your question.

Monday, 22 October 2012

lo.logic - Contradiction to axiom of foundation

Consider the usual language and axioms of ZF. Now add constants $x_1, x_2, \dots$ to the language together with the axioms $x_2\in x_1, x_3\in x_2, \dots$ to form a new theory. Then by the compactness theorem, since every finite subset of the axioms has a model, the new theory has a model. But doesn't the set $\{x_1, x_2, \dots\}$ have no $\in$-minimal element, contradicting the axiom of foundation?



I'm thinking that maybe $\{x_1, x_2, \dots\}$ is not necessarily a set in the model, but isn't it by replacement? Maybe not, since we don't necessarily have a copy of $\mathbb{N}$ in our model... Could someone clarify this please?

ct.category theory - Concrete category

A concrete category is a category C together with a function that assigns to each object A of C a set called the underlying set of A.
Example: The category of groups, equipped with the function that assigns to each group its underlying set in the usual sense, is a concrete category.
What is the underlying set for an object in a category of groups?

cosmology - Are we moving ever closer to the center of our Galaxy due to a super massive black hole?

There is an incorrect assumption in this question. There is no "super massive black hole which holds the galaxy together". There is just a super massive black hole.



It is often believed that black holes have a huge gravitational force. Actually, they simply have the gravitational force of their mass. A black hole of 50 solar masses has no more "gravitational power" than a star of 50 solar masses. And if the Sun were to instantly change into a black hole (of equal mass), the Earth's trajectory would not change at all (but yes, you would have to turn on the heater!).



Now, the question is: considering that the black hole at the center of our galaxy is supermassive, does it play an important role for stars like the Sun, at our distance? To answer this, you simply have to ask yourself whether the mass of the black hole is important or negligible compared to the mass of the other components that are inside the orbit of our Sun.



The mass of the black hole at the center of our galaxy is about 10^6 (1 million) solar masses (from this paper: http://arxiv.org/abs/0810.4674). The total mass of the Galaxy is about 10^12 (a thousand billion) solar masses (http://arxiv.org/abs/1102.4340). The mass inside the radius of the Sun's orbit is much lower (you could calculate it from the speed of the Sun and its distance from the center), but still, the mass of the black hole is very small compared to the rest of the mass inside the Sun's orbital radius. So for the effect of the black hole to become dominant, you would need to be very close to it.
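To put rough numbers on that comparison (my own back-of-the-envelope figures, not from the papers above): for a roughly circular orbit, the enclosed mass follows from M ≈ v²r/G.

    G = 6.674e-11            # m^3 kg^-1 s^-2
    M_sun = 1.989e30         # kg
    v = 220e3                # Sun's orbital speed around the Galactic centre, ~220 km/s (rough value)
    r = 8.0 * 3.086e19       # ~8 kpc in metres (rough value)

    M_enclosed = v**2 * r / G
    M_bh = 4e6 * M_sun       # central black hole, of order 10^6 solar masses as cited above

    print(M_enclosed / M_sun)    # ~1e11 solar masses inside the Sun's orbit
    print(M_bh / M_enclosed)     # the black hole is only a few 1e-5 of that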



What would happen if, instead of this supermassive black hole, there were many stars totalling the mass of the black hole? Nothing. It would not change anything about the Sun's trajectory.



To change the Sun's trajectory and make the Sun "fall", what you would need would be:



  1. More "pulling" mass, but there is no reason this mass suddenly appears.

  2. Something to decrease the speed of the Sun around its orbit. It is probably the case as the Sun is slowed down by the gas of the interstellar medium, but this effect is very small.

Periodic error correction in automatic telescopes

Quoting the most relevant parts from Richard McDonald's Wiki on Astrophotography Mounts: Periodic Error Correction:




Periodic Error can be reduced to an acceptable level using a variety
of techniques, only some of which are in the range of a beginner or
mid-level astrophotographer.



  • Throw Money at the Problem

    Very high-end mounts for astrophotography have very small periodic error because of the time
    and money spent on manufacturing high-precision gears. You can also
    buy higher-precision gear upgrades for some mounts. For the beginner,
    let's call this impractical. Buying a $1000 replacement worm gear for
    your $1000 mount is probably not a good balance.


  • Don't Use Gears

    This is really an extreme case of the "throw money" solution, mentioned just for fun. There are some
    very-high-end experimental drive systems showing up on the market that
    don't use worm gears, and that don't exhibit periodic error. Examples
    include direct drive systems and harmonic drives. 'Way out of our
    price range.


  • Autoguiding

    A second guide camera and computer can be used to make frequent small corrections to the mount's pointing,
    and this can reduce or eliminate Periodic Error if the onset of the
    error is not too sudden. This is the subject of a separate article.


  • Periodic Error Correction Feature

    Most mounts intended for astrophotography include a
    feature called Periodic Error Correction (PEC), which can be used
    alone, or in conjunction with Autoguiding. PEC is used in two phases:



    1. Training. During this phase you use the control panel or menus to say "Hey! Pay attention to this!" to the mount, then you manually
      keep a star perfectly centred for several worm periods. You do this by
      centering a star with a high-magnification eyepiece that includes a
      cross-hair reticle. You then stare at the star for 10 to 15 minutes
      and use the mount's hand controller to manually make the small
      adjustments necessary to keep it perfectly centred on the cross hair.
      The mount records the error corrections you supply, remembering where
      in the worm position each one was needed. By training through more
      than one worm cycle, the mount can record an average correction, in
      case you reacted slowly or over-reacted.

    2. Playback. Once you have trained the mount, you can turn on PEC. The mount will "play back" the recorded error correction information
      by slightly changing the speed of the Right Ascension drive to move
      ahead or back each time your manual error corrections did the same.
      This will cancel the periodic error, resulting in a smoother track.





Training PEC with
hand-control: Periodic Error Correction (PEC) button on the control
panel of an equatorial mount.



The manual training phase is quite tedious and a more modern
alternative is to use a camera and computer - usually the same one you
will use for auto-guiding - to track a star while the mount records
the training information.



There is even specialized software available to help collect Periodic
Error data, analyze and smooth it, and upload it to the mount. You do
not need such software, as the manual techniques mentioned above will
work just fine. However, it makes a tedious job simple and pleasant,
and you may find it a worthwhile investment. I use PEMPRO and am
very impressed with it. It's not free, but it is inexpensive and works
very well (and includes another feature to help achieve perfect polar
alignment).




Quoted text and photograph source, copyright and courtesy of Richard McDonald; no copyright infringement intended. I would also highly recommend reading the same author's extensive article on Autoguiding.

Sunday, 21 October 2012

fundamental astronomy - How to plot orbit of binary star and calculate its orbital elements?

If it's a binary, it's fairly simple (in comparison with a ternary system), because the stars of the binary orbit about their common barycenter in Kepler ellipses.



For an orbit simulator, see here; replace the central star by the barycenter of the binaries.



Calculation of an Orbit from Three Observations. Kepler problem on Wikipedia.



If you don't find ready-to-use software, try to solve it numerically by simulating the orbit of the binaries according to Kepler's laws, and vary the mass, distance, and eccentricity assumptions until they match the observations. Use optimization methods, e.g. hill-climbing algorithms or gradient methods.

space time - World line coordinate finiteness

If you trace two particles' world lines backwards in time, according to current theory, both objects should converge at the big bang.



Would both objects arrive there simultaneously?



Another way of asking the question:



Is there any evidence that supports the idea that new matter is "introduced" into the universe as a continuous stream, as opposed to everything coming from a singularity both in time and space?



The analogy would be from the inside of a black hole. For each particle in that singularity, they would all have a world line that starts at the same spatial coordinate, but the particles would not be "introduced" into the black hole simultaneously. You can argue that time stops having meaning in a black hole, but that's only true for an "outside observer" in my reasoning.

astrophotography - What does the BinTableHDU store?

I'm just getting started with very primitive analysis of FITS data, and I have a 'raw' FITS file which I don't know how to get different wavebands of data from (if this is even possible).



Filename: casa_raw.fits
No. Name Type Cards Dimensions Format
0 PRIMARY PrimaryHDU 33 ()
1 EVENTS BinTableHDU 793 14621280R x 19C [1D, 1I, 1I, 1J, 1I, 1I, 1I, 1I, 1E, 1E, 1E, 1E, 1J, 1J, 1E, 1J, 1I, 1I, 32X]
2 GTI BinTableHDU 28 2R x 2C [1D, 1D]



Could anyone explain what exactly this information conveys?
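Not a full answer, but a starting point for poking at it yourself: astropy can open the file and show what the EVENTS binary table actually stores (one row per recorded event; column names, formats and units). If this is an event list from an X-ray observatory, any waveband/energy information is usually one of those columns rather than separate image planes. A minimal inspection sketch (assuming the filename above):

    from astropy.io import fits

    with fits.open("casa_raw.fits") as hdul:
        hdul.info()                  # the summary quoted in the question
        events = hdul["EVENTS"]
        print(events.columns)        # names, formats and units of the 19 columns
        print(events.data[:5])       # the first few events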

Saturday, 20 October 2012

textbook recommendation - Undergraduate Differential Geometry Texts

I've reviewed a few books online for the MAA. When I learned undergraduate differential geometry with John Terrilla, we used O'Neill and Do Carmo and both are very good indeed. O'Neill is a bit more complete, but be warned - the use of differential forms can be a little unnerving to undergraduates. That being said, there's probably no gentler place to learn about them. I do think it's important to study a modern version of classical DG first (i.e. curves and surfaces in $\mathbb{R}^3$, emphasizing vector space properties) before going anywhere near forms or manifolds - linear algebra should be automatic for any student learning differential geometry at any level.



Of the textbooks mentioned here:



  • I love Millman and Parker as well, although it's not as complete as one would like. I'd love to see Dover put out a nice cheap paperback of it. Thorpe is OK, but doesn't excite me; his notation gets unnecessarily dense. That being said, he does emphasize linear algebra aspects and covers quite a few topics not found in the other texts.


  • Gray's mammoth tome is probably the single most complete source on classical DG: everything is very clearly done with lots of fascinating computer drawn images and historical asides. But the incomprehensibly inserted program code is really distracting and breaks the flow and organization of the text - it should be relegated to software or online. For that reason, I can't really recommend it as a class text, but it definitely should be kept on reserve when teaching such a course.


  • Spivak and Frankel, although both wonderful texts, are really graduate level.


Lastly, there are lots of free online resources for students now - the aforementioned lecture notes by Shifrin are outstanding, and we should enjoy them as long as he makes them freely available before converting them to a real book. (Really looking forward to the finished product in a few years, though...)

solar system - Are any Pluto-sized objects remaining to be discovered in the Kuiper Belt?

This is a partial answer to your question, as it is difficult to answer without speculating, so here are some facts/observations related to your question.



Aside from Pluto/Charon, Eris, Triton (which could be a captured Kuiper Belt object), Makemake and the football-shaped Haumea, most of the Kuiper Belt Objects (KBOs) are, according to the article "Kuiper Belt Objects: Facts about the Kuiper Belt & KBOs" (Redd, 2012):




thousands of bodies more than 62 miles (100 km) in diameter travel around the sun within this belt, along with trillions of smaller objects, many of which are short-period comets




and is believed to have a total mass of only a tenth of the Earth, according to the article "Forming the Kuiper Belt by
the Outward Transport of Objects
During Neptune's Migration"
(Levison and Morbidelli).



Here is a list of the many trans-Neptunian objects that have been documented, detailing their absolute magnitudes.



In regard to one of your main questions, according to Redd (2012), the challenge in their detection is




Because of their small size and distant location, Kuiper Belt Objects are a challenge to spot from Earth. Infrared measurements from NASA's space-based telescope, Spitzer, have helped to nail down sizes for the largest objects.




I would add that their irregular elliptical orbits* and extreme (compared to the major planets) inclination to the ecliptic make it that much more difficult to detect them. Additionally, according to "The Edge of the Solar System" website, further difficulties include low surface reflectivity.



  • An example of a possible KBO with an extremely elliptical orbit is Sedna, which is believed to take over 10,000 years to orbit the Sun; it is smaller than Pluto, but was observed at about 90 AU (3 times farther than Pluto).

So, there could be many small Pluto-sized 'dark' worlds in highly elliptical irregular orbits in the Kuiper Belt and beyond. However, beyond those listed, we have not seen that many and the total mass theorised does not support the idea of too many in existence, but that does not mean that they are not out there.

pole - Why does the Moon never set in Svalbard, Norway?

You must have misheard it, or the documentary you watched wasn't presenting very precise information. The Moon does set, but it also stays in the night sky for several days during the polar winter (polar night) when the Moon is full. This is relatively simple to imagine, so I'll describe it.



So what's happening is that the Earth's axial tilt during the polar winters leans the whole Northern hemisphere towards the night side, away from the Sun. This tilt is big enough (~23.4°) that night-sky objects aligned with the Earth's equatorial plane stay visible relatively low on the horizon. With those regions being either relatively flat and/or with a view towards the sea, there aren't many obstructions limiting the viewing angle, so the Moon (and analogously the Sun during polar summers) stays "locked" low above the horizon. To help a bit with imagining this, here's an animation of the Earth's axial tilt, courtesy of Wikipedia:



                                                   [Animation of the Earth's axial tilt, courtesy of Wikipedia]



If we imagine this animation of the Earth with the Sun in the distant left of the image, so during the Northern hemisphere's winter (winter solstice to be precise), and the Moon to the distant right of the image (roughly 25 widths of the image away), so when it's either full or close to this lunar phase, it's not too difficult to appreciate that the northernmost polar regions have a direct line of sight to the Moon during Earth's full rotation on its axis, i.e. a day. If you keep in mind that other celestial bodies, including the Moon, are oblivious to the Earth's axial tilt (well, not exactly, but let's not nitpick about tidal effects that might take millions of years to make a difference), then as the Moon moves farther in its orbit, in our case towards the viewer, this observation angle decreases further still and those northernmost latitudes are hidden from it for some part of the day. At lunar last quarter, it would be directly towards us relative to the image, so this direct line-of-sight relationship between the Earth and the Moon becomes reciprocal to how we're seeing places on the Earth in the animation.



Why when the Moon is full? Simply because that's when the Moon is also behind the Earth (but not in its shadow), so the relative angle between the observation point and the Moon stays high enough to observe it. As the Moon moves farther in lunar phase and in its orbit around the Earth, this angle becomes lower and the Moon indeed does set in the arctic region as well. For what it's worth, the same goes for the South Pole, only with half a year's difference.



One other effect that plays a role here is the Earth's atmospheric refraction, which also adds to the duration during which the Moon appears not to set. Meaning that even when the Moon wouldn't be in direct line of sight, but only marginally so, it would still appear low in the sky due to the optical effect (displacement) of the atmosphere. This effect would somewhat offset observing the Moon from lowlands, with a possibly shallower observation angle, when compared to higher-altitude observation points with fewer direct line-of-sight obstructions, due to the denser atmosphere and thus higher refraction index.

Thursday, 18 October 2012

gr.group theory - Elements of infinite order in a profinite group

I would like to add that the restricted Burnside problem is equivalent to the fact that a finitely generated profinite group of finite exponent is finite. Now, this was of course proved by Zelmanov. But he also proved a stronger result: every finitely generated compact Hausdorff torsion group is finite, see Zelʹmanov, E. I., On periodic compact groups, Israel J. Math. 77 (1992), no. 1-2, 83--95. In particular, every finitely generated torsion profinite group is finite, i.e. the Burnside problem is true for profinite groups.
BTW, Zelmanov has a more general result regarding when a pro-$p$ group is finite (I don't remember now the exact formulation; Ignore: it should be something like all generators have finite order and the associated graded Lie algebra satisfies an identity), however, he only published a sketch of the proof, which I think is in E. Zelmanov, Nil Rings and Periodic Groups, The Korean Mathematical Society Lecture Notes in Mathematics, Korean Mathematical Society, Seoul, 1992.



Edit: I was asked about this recently. So I really had to search my memory and the literature. This is what I have found: I think I read about the result in Shalev’s chapter in New Horizons in Pro-$p$ Groups (Theorem 2.1 and Corollary 2.2). However, the original reference is Zelmanov’s paper in Groups ’93 Galway St. Andrews Volume 2, LMS Lecture Note Series 212.



Here it is:
Let $G$ be a group. Write $(x_1,x_2,\ldots,x_i)$ for the left normalized commutator of the elements $x_1,x_2,\ldots,x_i \in G$. Let $D_k$ be the subgroup of $G$ generated by $(x_1,x_2,\ldots,x_i)^{p^j}$, where $ip^j \geq k$ and we go over all $x_1,x_2,\ldots,x_i \in G$. Let $L_p(G)$ be the Lie subalgebra generated by $D_1/D_2$ in the Lie algebra $\oplus_{i \geq 1} D_i/D_{i+1}$. We say that $G$ is Infinitesimally PI (IP) if $L_p(G)$ satisfies a polynomial identity (PI). Zelmanov proved the following theorem:



Theorem: If $G$ is a finitely generated, residually-$p$, IP, and periodic group, then $G$ is finite.



The proof is based on the following theorem for which he only sketched the proof (according to Shalev):



Theorem: Let $L$ be a Lie algebra generated by $a_1,a_2,\ldots,a_m$. Suppose that $L$ is PI and every commutator in $a_1,a_2,\ldots,a_m$ is ad-nilpotent. Then $L$ is nilpotent.



The question of whether a torsion profinite group is of finite exponent is still open as far as I know and is considered very difficult. (Burnside-type problems seem to be very difficult.)

Wednesday, 17 October 2012

meteorite - Is blowing up an asteroid/comet really potentially worse?

Well, there are some things to consider. Initially, if you could make sure that after you blow up an asteroid you will end up with numerous but small enough pieces so that they will either (one) burn up in the atmosphere or (two) be headed away from Earth (and not hit us five years later), then we are OK, and blowing up the asteroid with a missile would be a fair solution.



The problem here lies in the fact that we know little about the internal composition of asteroids in general, and presumably even less about a particular one, so it is very hard to predict exactly where the pieces of the asteroid generated by an impact are going to end up, where they will be headed, or even what size they will be.



Another scenario could be that you effectively smash the asteroid into small pieces that could then burn up in the atmosphere; if those pieces were to end up being consumed by Earth's atmosphere, it would heat up, presumably provoking an unpleasant day on Earth, of course depending on the mass of the object.



But there is a much better solution than that Armageddon/Hollywood-inspired one. It is called gravitational tethering. There is something we know, and we know very well, about asteroids, and that is their trajectory paths or orbits. Even when a new asteroid is discovered, its orbit can be computed pretty quickly and with great accuracy (because we know the solar system's gravity very well). So if an asteroid is to impact Earth, it is likely that we will know years, probably decades, in advance. And so we can just send a space vehicle (called a gravity tractor), with enough mass and time in advance, and place it just beside the asteroid, hence allowing us to tilt its orbit by just a tiny amount, due to the gravitational pull between the two objects. Now when you consider the effect of that tiny amount in the long run, it effectively deflects the path of the given asteroid from that of the Earth so that it won't hit us 20 or 30 years later.



And this is something we have control over, and something we can predict with great accuracy. It is the (safe) way to go.



If you are still not happy with my answer, you can listen to Neil deGrasse Tyson himself explaining it in this 5-minute video.



Also check out this talk from the American Museum of Natural History on "Defending Earth from Asteroids" LINK



Further reference here.

ct.category theory - Does F_* have a description in terms of the Grothendieck construction?

I guess it depends what you would accept as a "description in terms of the Grothendieck construction."



For each $d \in D$, we have the projection $\pi_d : d/F \rightarrow C$. Given some $\gamma : C \rightarrow Set$, I can form the composite $$ \gamma \circ \pi_d : d/F \rightarrow C \rightarrow Set$$



This determines a discrete opfibration $\int \gamma \circ \pi_d$ over $d/F$. Let $\Gamma_{d/F}$ denote the set of sections of the projection $\int \gamma \circ \pi_d \rightarrow d/F$. Then I believe you will find that $F_* \gamma$ is the Grothendieck construction on the functor $d \mapsto \Gamma_{d/F}$.



Of course, this is not really that interesting, since it is really nothing more than the observation that the limit of a functor $F : C \rightarrow Set$ can be calculated as the set of sections of the natural projection $\int F \rightarrow C$, together with the definition of the right Kan extension.



On the other hand, it suggests (to me at least) that there is unlikely to be a more "global" description in terms of the Grothendieck construction, since the object we are trying to describe is like a "union of a collection of maps," and these two operations tend not to commute (maps out of a union form the product of the maps on each component). Probably you knew all this, but maybe somebody will find it useful . . .

Tuesday, 16 October 2012

orbit - The tidal locking problem concerning Earth sized planets in habitable zone around Red Dwarfs

There is a quite simple formula that will give you the tidal-locking half-time:



$$T = C\, \frac{a^6 R \mu}{M_s M_p^2}$$



  • $a$ - semi-major axis, or simply the radius of the planet circular orbital trajectory in meters

  • $R$ - planet radius in meters

  • $M_s$ - planet mass in kg

  • $M_p$ - parent planet/star mass in kg

  • $\mu$ - rigidity, approximately $3×10^{10}$ for rocky objects and $4×10^9$ for icy ones.

  • $C$ - a prefactor dependent on units used. On Wikipedia, they write $C = 6 × 10^{10}$ if the time is to be in years, but this number is probably wrong. (See the discussion.)
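Putting the formula above into code (with the quoted prefactor, and the same caveat that it may well be off):

    def tidal_lock_time_years(a, R, M_s, M_p, mu=3e10, C=6e10):
        # a, R in metres; M_s = planet (satellite) mass, M_p = parent mass, both in kg
        return C * a**6 * R * mu / (M_s * M_p**2)

    # a, M_s, M_p as in the example below; R = 6.4e6 m is an Earth-like radius (my assumption)
    print(tidal_lock_time_years(a=9e9, R=6.4e6, M_s=6e24, M_p=2.9e29))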

Solving the problem of interaction between the parent star and the planets will be difficult. Generally, if the tidal-locking half-time between the planets is much shorter than the tidal-locking half-time between the star and the planet, the planets will tidally lock to each other rather than to the star. So we ask whether



$$ \frac{A^6}{M_s^3} \ll \frac{a^6}{M_s M_p^2}\;,$$



where $A$ is the semi-major axis of the orbit of the double planet. For a typical planet in the habitable zone of a red dwarf star, $a$ = 9,000,000,000 m (0.06 AU), while the distance between the planets could be, for example, $A$ = 30,000,000 m. Taking $M_s = 6\times 10^{24}\;\mathrm{kg}$ and $M_p = 2.9\times 10^{29}\;\mathrm{kg}$, we get



$$2\cdot 10^{-41}\ll 6.3\cdot 10^{-36}\;,$$



which is well satisfied, and the planets will tidally lock to each other rather than to the parent star. Were the distance between the planets $A$ = 300,000,000 m, the inequality would yield



$$2\cdot 10^{-35}\ll 6.3\cdot 10^{-36}\;,$$



which is not satisfied and the tidal forces of the star would probably disrupt the double-planet. (The planets would not even be in the Hill sphere, which approximately gives the stable orbits.)



So the answer is: very close double planets can tidally lock to each other rather than to the parent red-dwarf star, but they would have to be much closer together than the Earth and the Moon.

reference request - Telling group algebras apart

It's a big, famous, hard problem in operator algebras to determine if the von Neumann algebras $L(F_2)$ and $L(F_3)$ are isomorphic, or not. Here $F_n$ is the free group on $n$ generators and $L(F_n)$ is the weak-operator-topology closure of the group algebra $\mathbb C[F_n]$ acting naturally on the Hilbert space $\ell^2(F_n)$.



I presume it must be known whether the algebras $\mathbb C[F_2]$ and $\mathbb C[F_3]$ are isomorphic or not. But from casually asking a few algebraists, I've never had any luck in finding this out (I admit to not working very hard on this!). I'm guessing some (co)homology theories must help...? What about replacing $\mathbb C$ by a more general ring?

telescope - Method to determine the amount of reflected starlight necessary for an exoplanet to be visible from a given distance/angle?

What is the method to determine the amount of reflected starlight necessary for an exoplanet to be visible from a given distance/angle? (Not from occlusion but actually visible on its own.) Further, how can you quantify this and calculate/predict/measure natural and artificial variations in the amount of reflected light?



I would like to see equations that take into account/assume the following things:



  • the properties of the detection equipment, such as telescope mirror diameter, optical resolving power, digital resolution and sensitivity (I'm a newbie to astronomy so I don't know the proper terms or units for these things; I come from the world of photography with ISO, pixel count, noise levels, etc.).


  • the properties of the exoplanet such as diameter, distance from host star, surface reflectance coefficient on a hypothetical maximally "clear day", atmospheric thickness and reflected light absorption due to gasses, angles of a triangle drawn between the host star, exoplanet, and Earth.


  • assume a space-based telescope on Earth and assume no interstellar dust of significance.


  • assume no significant warping of spacetime by intense gravitational fields (i.e. we are looking at something in our stellar neighborhood, no further than, say, 50-100 LY)


Based on my nooblet research so far, this equation should result in a figure expressed in Jansky units, I think? But I should like to know the average (RMS) photons per second that come from the exoplanet into the telescope, along with the peak value, during the periods of maximum and minimum intensity (i.e. when the planet is brightest and dimmest).



Please give an example using figures for the best equipment we currently have available for this purpose and provide links if you can to the website for that project. Please in the example, just for the sake of argument, use a planet and star exactly like Earth and the Sun. If a planet like Earth would not be detectable at these parameters, assume more powerful detection equipment (just scale up the telescope's properties until it would work at a 10 LY distance and say how much you had to scale it up, and feel free to opine on how unrealistic such a device is).



Lastly, and here's the kicker: now assume an artificial object with a perfectly flat reflecting surface exists upon one side of this exoplanet. How would you incorporate the precise amount of extra light added by such a surface to the above equation as a separate part of the equation? Assume that you know the following properties of the reflecting surface in advance:



  • area of the flat reflecting surface

  • reflectance coefficient of the material used for constructing the surface

  • geometric shape (assume a contiguous primitive geometric shape like a disc or square or triangle, or disregard if this property is irrelevant. If it's not irrelevant, then why is it not? What difference would the shape make on what gets detected?)

  • angle between the reflecting surface plane and a plane tangent to the average sphere of the planet.

For an example, let's assume a disc of polished white stone (0.8 reflectance), 20,000 sq. m. in size, positioned at the part of the planet with the brightest insolation, at a 45-degree angle from the tangent plane. It's polished to a degree, but not a mirror. When the planet rotates, at some point this surface reflects the host star's light directly towards Earth and would cause a glint of light for a small amount of time. How do we calculate the glint's duration? How would we mathematically distinguish such a glint? How could you tell if such a glint was reflected starlight as opposed to a laser pulse or other artificial light source?



What size would such an artificial reflecting surface (0.8 reflectance) need to be in order to make a planet visible with today's best telescopes from various distances up to 10 LY, where under current technology, the planet is not visible?



UPDATE:



I realized that in the case of flat reflecting surfaces, the inverse square law would not apply from the distance of the exoplanet to Earth, due to Hero's rule (a.k.a. the law of reflection, a.k.a. specular reflection). That means a totally still, large, flat body of water reflecting starlight, or a highly polished surface (like that of the Great Pyramid after its construction), would reflect a signal into space from the host star that would be detectable from a much greater distance away than the diffuse, incident light being reflected off of the planetary surface (which light would be subject to the inverse square law from the point of incidence).



Variables that would come into play would be the reflectance of the surface, the absorption of the atmosphere, flatness of the surface, and the level of insolation provided by the star.



By my preliminary calculations the reflections off of the Great Pyramid would be visible from 20 LY.

Monday, 15 October 2012

ct.category theory - Colimits in the category of smooth manifolds

I'd like to recast Reid's (excellent) answer slightly. The essence of it is the following principle:




To show that a limit or colimit doesn't exist in some category, embed your category in one where limits or colimits do exist and find some diagram in the original category whose colimit in the larger category does not lie in the image of the embedding.




The point is, it's usually much easier to show that an object $X$ of $\mathcal{D}$ is not an object of $\mathcal{C}$ than it is to show that $\mathcal{C}$ has nothing that looks like $X$. For a simpler analogy, think of the difference between proving that $(0,1)$ is not complete versus proving that $(0,1) \subseteq \mathbb{R}$ is not closed. The essence is the same, but the latter always seems to me to be a lot easier to grasp.



Back to the principle. As stated, it's not quite strong enough. You need a condition on the embedding:




Make sure that your embedding preserves those limits or colimits that already exist.




Again, by analogy: to prove that a metric space $X$ is not complete, we need a continuous map from $X$ to a complete space with non-closed image. An arbitrary map won't do.



Back to the case in hand. As the functor $M \mapsto C^\infty(M,\mathbb{R})$ is a (contravariantly) representable embedding, it preserves colimits and so is suitable for the argument to go through.



However, it does not preserve limits so if you asked the corresponding question about limits, you'd need a different embedding. It turns out, though, that there is a complete and cocomplete category in which the category of manifolds embeds preserving all limits and colimits. That is the category of Hausdorff Froelicher spaces. Froelicher spaces may feel a little more topological than algebras so for those who, like myself, prefer topology to algebra, here's a recasting of Reid's answer using (Hausdorff) Froelicher spaces.



The key thing is that a Froelicher space is completely determined by either the smooth functions from it to $\mathbb{R}$ or the smooth curves in it (i.e. smooth functions from $\mathbb{R}$).



We take the same colimit: the pushout of




$$
\begin{matrix}
\{0\} &\to& \mathbb{R}\\
\downarrow \\
\mathbb{R}
\end{matrix}
$$



We shall show that it is the union of the $x$ and $y$ axes in $\mathbb{R}^2$, which is clearly not a manifold.



Let us write the colimit as $X$. First, we define a smooth function $F \colon X \to \mathbb{R}^2$. It is the obvious one: it sends the first copy of $\mathbb{R}$ to the $x$-axis and the second copy to the $y$-axis. As these two functions agree on $\{0\}$, this is a well-defined smooth function.



We want to show that this is an initial map. One sufficient (but not necessary) condition for this is that every smooth function $f \colon X \to \mathbb{R}$ factors through $F$.



As Reid says, a smooth function $f \colon X \to \mathbb{R}$ consists of two smooth functions $f_1, f_2 \colon \mathbb{R} \to \mathbb{R}$ satisfying $f_1(0) = f_2(0)$. Let $g \colon \mathbb{R}^2 \to \mathbb{R}$ be the function $g(x,y) = f_1(x) + f_2(y) - f_1(0)$. This is smooth and we have $g(x,0) = f_1(x) + f_2(0) - f_1(0) = f_1(x)$ and, similarly, $g(0,y) = f_2(y)$. Thus $g \circ F = f$ and so every function $X \to \mathbb{R}$ factors through the inclusion $X \to \mathbb{R}^2$. Hence the inclusion $X \to \mathbb{R}^2$ is initial. Thus we can identify $X$ with its image, that being the union of the two axes.



As I said, this is merely a recasting of Reid's answer. I post it partly to make it more topological in feel, but mainly to expose the general principle which Reid uses.

Could any known, living organisms on Earth survive on Mars?

No life has been discovered outside of Earth (yet?), but do we know if anything that would be considered "living" on Earth could conquer Mars? (or maybe Venus?)



With the Mars One project on the way, I was wondering if it was possible to transplant some of Earth's life to Mars, and if something could survive there (naturally).



Many kinds of life are based on air and sun. Animals have to breathe oxygen, plants need the sun, and both need water. But there are many other kinds of life on Earth, like:



  • Very deep seawater life that live under high pressure and without sunlight

  • extremophile organisms that live in very hot water

  • weird "arsenic life" which was debunked as being fully arsenic based, but turned out to be something different

Have there been any theories or studies about unusual forms of life on Earth that might survive on Mars?

Sunday, 14 October 2012

star - Why are planets' orbits not perpendicular or random?

Short answer: conservation of angular momentum.



Long answer:



The origin of almost any planetary system is a sparse cloud. That cloud starts to contract due (typically) to a pressure wave crossing it.



The cloud fragments as it contracts, and each fragment is what we know as a pre-star cloud.



Since there is almost always some motion in the matter of each cloud, the cloud as a whole starts to rotate, very slowly. Contraction helps because, due to the conservation of angular momentum, when the cloud contracts, the rotation accelerates.
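To put a rough number on that spin-up, here is a toy calculation (Python, with made-up but plausible values) using $L = I\omega$ with $I \propto MR^2$, so that conserving $L$ while $R$ shrinks gives $\omega_{\mathrm{final}} = \omega_{\mathrm{initial}}(R_{\mathrm{initial}}/R_{\mathrm{final}})^2$:

    # Toy illustration only; the radii and initial spin are assumed round numbers.
    R_INITIAL = 1.0e16       # m, roughly a 0.3 pc pre-stellar cloud core
    R_FINAL = 1.5e11         # m, roughly a 1 AU protostar-plus-inner-disk scale
    OMEGA_INITIAL = 1.0e-14  # rad/s, a barely perceptible initial rotation

    omega_final = OMEGA_INITIAL * (R_INITIAL / R_FINAL) ** 2
    print(f"{omega_final:.1e} rad/s")  # ~4e-5 rad/s: the tiny initial spin becomes substantial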



Soon we get a protostar made of the most contracted matter, surrounded by a protoplanetary disk composed of the less contracted matter. The rotation of the whole system is in the same plane, due to conservation of angular momentum.



The protostar becomes a star, and the protoplanetary disk becomes a bunch of planets. Each planet, in turn, orbits the star and rotates about its own axis, all in the same direction, determined by which part of the protoplanetary disk it started accreting mass from.



Later on, interactions among massive bodies disturb this tidy picture by changing some orbits and causing some collisions, but those are minor effects.

Can the supernova remnant SN 1572 be observed by amateur astronomers?

As you say, SN 1572 is not very bright in the optical. There are some Hα regions that have been observed with world-class optical telescopes, but they do not look like the X-ray and infrared images that you normally see. In fact, images from the Palomar Optical Sky Survey 2 (with a limiting magnitude of ~22) do not reveal any nebular emission from this object at all:



[POSS2 image of the SN 1572 field]



Therefore, the optical emission must be fainter than 22nd magnitude. I do not think there are amateur photographs of objects this faint, and I suspect that on an amateur telescope's CCD the noise would swamp the signal before any of the nebular emission was revealed.
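To give a feel for how demanding 22nd magnitude is, here is a quick back-of-the-envelope comparison (Python; the "typical deep amateur limit" of magnitude 18 is my own assumption, not a measured figure):

    def flux_ratio(m_faint, m_bright):
        """Flux ratio implied by a difference in magnitudes."""
        return 10 ** (0.4 * (m_faint - m_bright))

    # Magnitude 22 versus an assumed deep amateur limit of magnitude 18:
    print(flux_ratio(22, 18))  # ~40x fainter than that already-demanding limit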

Saturday, 13 October 2012

graph theory - Homology with Coefficients

Hi Tony.



This is not really a homology question; at its core it is about the fundamental group. The homomorphism you are using appears in the study of Van Kampen diagrams. Consider a presentation $G=\langle A\mid R\rangle$. A Van Kampen diagram on $S$ is a labeled graph like the one you have defined. The only difference is that in a Van Kampen diagram all labels are generators (or their inverses) $a^{\pm 1}$ rather than arbitrary words (although, because of the Van Kampen lemma, you could define it in this general way without problems).



Then every path in this graph has a group word written on it, and "reading the word along a path" is a homomorphism $\{\text{Paths}\}\to G$ with respect to composition of paths. It turns out that this is compatible with homotopy of paths, so it induces a homomorphism $\pi_1(S,x_0)\to G$.



This is the general version of your homomorphism: if your $G$ happens to be abelian, then this homomorphism factors through $\pi_1(S,x_0)^{\mathrm{ab}}$, which is $H_1(S)$ by the Hurewicz theorem.
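If it helps, here is a small computational sketch (Python; the label conventions are my own, e.g. writing the inverse of 'a' as 'a-') of "reading the word along a path" and then passing to the abelianization by taking exponent sums:

    from collections import Counter

    def read_word(path_labels):
        """The group word on a path is just the concatenation of its edge labels."""
        return list(path_labels)

    def abelianize(word):
        """Exponent sums of each generator, i.e. the image of the word in the abelianization."""
        sums = Counter()
        for letter in word:
            gen, sign = (letter[:-1], -1) if letter.endswith('-') else (letter, +1)
            sums[gen] += sign
        return dict(sums)

    # A loop labelled by the commutator a b a^-1 b^-1 becomes trivial after abelianizing:
    print(abelianize(read_word(['a', 'b', 'a-', 'b-'])))  # {'a': 0, 'b': 0}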



This point of view clarifies some connections between the geometry of Van Kampen diagrams and group theoretic questions.



For example, the Van Kampen lemma tells you that a group word is trivial if and only if there is a Van Kampen diagram on the disk with this word written on the boundary.



Another fact is this one: if there are no nontrivial "reduced" Van Kampen diagrams on the torus, then any two commuting elements of $G$ generate a cyclic subgroup (i.e. $xyx^{-1}y^{-1}=1$ has only the trivial solutions $x=a^k, y=a^m$ for some $a\in G$). In a similar spirit one can prove: if there are no nontrivial reduced Van Kampen diagrams on the real projective plane, then there are no involutions in $G$ (i.e. $x^2=1$ has only the trivial solution $x=1$), and if there are no nontrivial reduced Van Kampen diagrams on the Klein bottle, then the only element conjugate to its own inverse is the identity (i.e. $yxy^{-1}=x^{-1}$ has only the trivial solution $x=1$).



This connection between geometry and group properties becomes less obscure if one knows the fundamental groups of the disk ($1$), the torus ($\langle x,y \mid xyx^{-1}y^{-1}=1\rangle$), the real projective plane ($\langle x \mid x^2=1\rangle$) and the Klein bottle ($\langle x,y \mid yxy^{-1}=x^{-1}\rangle$).

Friday, 12 October 2012

mirror symmetry - Which part of physical B model is not rigorous?

To define (as Kevin Lin does above) the B-model purely as the derived category of coherent sheaves is fine and rigorous, but it ignores the higher-genus aspects of mirror symmetry -- which was the original question. As I wrote above, Kevin Costello gives a rigorous description of the higher-genus amplitudes, but it is still conjectural whether this agrees with the physics. The issue is that higher-genus string amplitudes depend on an integration over the moduli space of Riemann surfaces (or a space of maps from them, depending on the model), and this demands compactification. The full, non-topological theory is of course an ordinary two-dimensional quantum field theory, with all the usual difficulties in making the path integral rigorous.

Thursday, 11 October 2012

Can astronomical radio sources be used as a verifiable randomness beacon?

A randomness beacon is a source of random data that is broadcast to multiple parties. Users listening to the beacon receive the same sequence of random strings and no one can predict the values in advance. This is a construct that has many applications in cryptography and distributed communications. Such random sequences have been generated using atmospheric noise, quantum measurements, and financial data (*).



Using astronomical radio sources (such as meteor bursts, solar noise, planetary observations, CMBR, etc.), is it possible for multiple independent observers spread around the globe, using relatively inexpensive equipment, to detect the same unpredictable cosmic signals, and agree on the time and properties of those signals (to within some margin of error)?



If this were feasible, you could create a publicly-verifiable randomness beacon that didn't require trusting a central authority to generate the sequence in an unbiased way. Instead, observers can certify the random values independently using their own local measurements.
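To make the idea concrete, here is a minimal sketch (Python; the quantization tolerances and the use of SHA-256 are my own assumptions, and this ignores the real cryptographic subtleties) of how observers could map a jointly observed radio event to the same random string:

    import hashlib

    def beacon_value(event_time_s, flux_jy, time_res_s=1.0, flux_res_jy=0.5):
        """Quantize a jointly observed event so that observers who agree within the
        stated tolerances derive the same record, then hash it to get a random value."""
        t_bin = round(event_time_s / time_res_s)
        f_bin = round(flux_jy / flux_res_jy)
        record = f"{t_bin}:{f_bin}".encode()
        return hashlib.sha256(record).hexdigest()

    # Two observers with slightly different measurements of the same burst agree:
    print(beacon_value(1351468800.2, 12.3) == beacon_value(1351468800.4, 12.4))  # True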



(*) usenix.org/legacy/events/evtwote10/tech/full_papers/Clark.pdf

the sun - Why do stars become red giants?

The destiny of a star basically depends upon its mass; the whole range of its activity depends on it.
If a star's core has a mass below the Chandrasekhar limit ($M\sim1.4\,M_{\mathrm{Sun}}$), then it is destined to die as a white dwarf (or, eventually, as a black dwarf).
The composition of the white dwarf also depends upon the original mass of the star: different masses lead to different compositions.
More precisely, the more massive the star, the heavier the elements composing the final object.
This is because more mass means more gravitational potential energy



$dU = -\frac{GM(r)\,dm}{r}$



which in turn can be converted into heat.
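As a worked instance of that relation: integrating $dU$ for a uniform-density sphere gives the standard result $U=-\frac{3}{5}\frac{GM^2}{R}$, and plugging in solar values (rounded, for illustration) shows how much energy is available:

    G = 6.674e-11      # m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # kg
    R_SUN = 6.957e8    # m

    U = -(3.0 / 5.0) * G * M_SUN**2 / R_SUN
    print(f"{U:.2e} J")  # about -2.3e41 J for a uniform sphere with the Sun's mass and radius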



Hydrogen fusion via the proton-proton chain (the dominant process in Sun-like stars) starts at around $10^7\,\mathrm{K}$. This is the temperature at which the particles can overcome their Coulomb barrier (i.e., fuse).

After the hydrogen-burning phase, when most of the core is composed of helium, hydrogen fusion in the core can of course no longer happen. The core starts to collapse and heats itself up. A Sun-like star has enough mass to compress the core to the point where helium burning starts, but that is all: once the helium too has been converted into carbon, the star does not have enough mass to compress the core again to the level needed to ignite another fusion reaction. That is why the core nuclear reactions stop.

For the shell-burning question, it is important to understand two things: $(1)$ the shell structure of a star is only an approximation, and $(2)$ there is a temperature gradient within Sun-like stars, meaning that (apart from the corona) the temperature increases as you go from the outside towards the core. Now, if the core has been compressed and has become hot enough to burn helium, the shell "outside" the core (which, in an onion-like picture, lay within the radius of the previous hydrogen-burning core) is still hot enough to burn hydrogen. The helium-burning core is smaller than the hydrogen-burning core was (that is what compression means), so this shell still contains plenty of hydrogen and, at the same time, sits deep enough inside the star (i.e., at high enough temperature) to sustain hydrogen fusion.
If the star were more massive, more things could happen: fusion of heavier elements in the core, and more and more burning shells.



Take a look at these:
Ref 1, Ref 2.



Ref 3 for some numbers too.

Wednesday, 10 October 2012

general relativity - Is it possible to have two objects moving in opposite directions whose speeds sum to the speed of light (c)?

I am puzzled by this question because, by the laws governing the two objects, the relative gravitational force should be infinite. Anyway, we can take two objects whose speeds sum to c (the speed of light).



So the answer should be no, but many theoretical physicists give examples to demonstrate what happens as we get close to the speed of light. What happens if we reach 90% of the speed of light and another object flies in the opposite direction at 10% of the speed of light? By rational thinking there should be an extreme gravitational force between the two objects.



All of this is true if I have understood the theory of general relativity correctly.



Can somebody point out the problem with my thinking, if there is one?
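For reference, a minimal numeric sketch (Python, speeds in units of $c$) of the special-relativistic velocity-addition formula that governs how such speeds combine:

    def combined_speed(u, v):
        """Relative speed of two objects approaching head-on, both speeds in units of c."""
        return (u + v) / (1 + u * v)

    print(combined_speed(0.9, 0.1))  # ~0.917: the 90% + 10% case stays below c
    print(combined_speed(0.9, 0.9))  # ~0.994: even 90% + 90% never reaches c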

Sunday, 7 October 2012

Can we see Earth by looking into space?

The answer, for the universe we live in, is almost certainly No. The best guess is that the universe, on the largest scale, is very nearly flat and probably extends infinitely in all directions.



The physics of general relativity does permit a hypothetical universe that curves back on itself in the way you describe, and if it were small enough, we could indeed see ourselves. But I don't know whether stars and planets could evolve in such a universe.



Incidentally, it doesn't have to even have curvature to wrap around as you describe. The universe could be like the game Asteroids, where you wrap off one edge and come back on the other. That has the topology of a torus, but is geometrically flat -- for instance, the angles inside a triangle still add up to 180 degrees.



But it doesn't look like our universe is like that.

mars - How are parachutes usable in other places than Earth?

Parachutes have never been used on the Moon, but they are viable for Mars because Mars does have an atmosphere, albeit one much thinner than Earth's. For that reason, parachutes cannot be the only means of slowing down on Mars; in the case of the Curiosity rover, for example, that's why an elaborate booster and sky-crane combination was used (see figure). Another approach, taken by the Pathfinder mission, was to use large airbags to bounce on landing.



One benefit of using parachutes on Mars is that they can be deployed at higher velocities, unlike on Earth, where the denser atmosphere (and thus stronger drag force) would shred a parachute at comparable speeds.
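A rough comparison of the dynamic pressure $q=\tfrac{1}{2}\rho v^2$ (which sets the drag load on the canopy) makes this concrete; the densities and deployment speed below are assumed round numbers, not mission figures:

    RHO_EARTH = 1.2   # kg/m^3, near sea level
    RHO_MARS = 0.020  # kg/m^3, typical near-surface value

    def dynamic_pressure(rho, v):
        return 0.5 * rho * v ** 2  # Pa; drag force = q * Cd * A

    v = 400.0  # m/s, roughly the speed at which Mars entry parachutes deploy
    print(dynamic_pressure(RHO_EARTH, v) / dynamic_pressure(RHO_MARS, v))  # ~60x higher load on Earth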



[figure: Curiosity's descent and landing sequence]

Friday, 5 October 2012

tidal forces - Configuration required to tidally heat Earth-like rogue planet

Do you need tidal heat? What about core heat?



Ice is a fair insulator. Put enough ice on top and the heat flux from the core can keep the bottom of the oceans liquid, no tides required.



If you need the water on the surface, I don't think it's possible. Even a huge amount of interior heat (whether from tides, radioactivity, or primordial heat) wouldn't be sufficient to keep the surface liquid. Earth has possibly had periods with no liquid surface water ("snowball Earth"), and that was with the advantages of an atmosphere and solar input.



The lack of a sun is going to make having an Earth-like atmosphere difficult. In an interstellar environment, any thin atmosphere will freeze out. Without an atmospheric blanket, the surface temperature plummets.



So I think you could have an ocean, but not with surface water.



One more thing. Assuming the Earth's current heat production and size, I calculate that something around 5.6 km of ice is sufficient insulation. Water below that depth, with average heat flow, would be liquid (ignoring pressure effects and considering only temperature and heat flow). That's a lot of ice, but not implausible.
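For what it's worth, here is one way to reproduce a number of that order (Python; the conductivity, surface temperature and heat flux are my own assumed values): treat the shell as steady-state conduction, so the thickness is $d = k\,\Delta T/q$:

    K_ICE = 2.2              # W/(m K), thermal conductivity of ice (assumed)
    Q_GEO = 47e12 / 5.1e14   # W/m^2, Earth's ~47 TW internal heat spread over its surface area
    T_SURFACE = 40.0         # K, assumed surface temperature of a rogue planet
    T_MELT = 273.0           # K, melting point at the base of the shell (pressure ignored)

    d = K_ICE * (T_MELT - T_SURFACE) / Q_GEO
    print(f"{d / 1000:.1f} km of ice")  # roughly 5-6 km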

Monday, 1 October 2012

universe - Origin of the original bright light and matter

I assume that what you want to know is "where did everything come from?" As for the light and the matter itself, we have some pretty good theories: matter consists of many different particles, each of which was created from energy at a different epoch during the first few moments of the Big Bang (BB). For instance, leptons were created from 1 to 10 seconds after the BB, hadrons from $10^{-6}$ to 1 second after the BB, and quarks even earlier. Light was created when the different particles annihilated with their corresponding antiparticles.



As to how the original energy (from which light and matter came) was created in the first place, there is definitely no well-grounded theory here, but we must accept either that



  1. the Universe came into existence out of nothing, or else

  2. there was something before that created the Universe, in which case we either
    1. just push the question further back, or

    2. must accept that whatever was there before has been there for eternity.


Option 2.2 is boring because I don't think there's any way of testing it, so options 1 and 2.1 seem most appealing, especially option 1, since time itself is thought to have been created along with space (though I think there are models where this is not necessarily the case).



Creating something out of nothing seems magical, but in fact it happens all the time. A vacuum is the lowest state of energy, but as a consequence of Heisenberg's uncertainty principle, over a finite time the lowest state of energy cannot be exactly zero, since ΔE Δt ≳ ℏ. This results in so-called quantum fluctuations. Usually, the energy popping up out of nowhere annihilates immediately, but there are ways of making it survive (e.g. at the horizon of a black hole). In a similar way, universes can be thought to pop up out of nowhere. Such a universe could annihilate immediately, but if it inflates fast enough, it can grow to macroscopic size before it has time to annihilate.
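As a rough illustration of the scales involved (my own numbers): the relation ΔE Δt ≳ ℏ says a "borrowed" electron-positron pair can only exist for an extremely short time:

    HBAR = 1.055e-34                   # J s
    ELECTRON_REST_ENERGY = 8.187e-14   # J, about 0.511 MeV

    pair_energy = 2 * ELECTRON_REST_ENERGY  # energy needed for an e+/e- pair
    lifetime = HBAR / pair_energy           # dt ~ hbar / dE
    print(f"{lifetime:.1e} s")              # ~6e-22 seconds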



If you want to read further, Wikipedia has some nice articles on the timeline of the Big Bang and on cosmogony (the theory of how things came into being).