Monday, 30 September 2013

soft question - How should I find a tutor for math-overflow level mathematics?

Searching for maths tutors online finds people willing to teach up to A-level. I'm looking for help at a more advanced level.



At the moment I'm trying to teach myself category theory from downloaded lecture notes, but I have my eye on other mathematical fields including having another go at algebraic geometry once my category theory is better. However, because I'm teaching myself, if I get stuck I have nowhere to turn. By the same token, I'm doing the exercises but it's frustrating when there's no-one to tell me if I'm getting the answers right or approaching it at the right level of rigor; I find myself missing being able to submit work and get it marked.



How might one go about hiring someone who might be able to give occasional help, either online or in person (I'm in London) at this level? I'm sure university maths departments have plenty of people doing postgrad maths who might like occasional work like this, but how might I go about reaching them?

How many bodies can I see with a 90x telescope?

The key issue for a telescope is not magnification but light-gathering capability - which is (crudely) the size of the main aperture. Thus a 90x magnification on a very large (wide) telescope would let you see a very large number of things (if you are in an area where the sky is dark), but 90x on a small telescope would let you see a number of interesting things (the Moon, planets, some nebulae and star clusters) but not relatively faint objects.



Small, cheaper telescopes are still worth buying if you want to dabble: seeing the Moon through a telescope is always amazing, for instance, and a small telescope can show a lot.



Also, for a small telescope, don't go for big magnifications; 90x would be fine for a low-cost 50mm refractor, for instance.
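To make those rules of thumb concrete, here is a small Python sketch (my own illustration, not part of the original answer) of two common amateur-astronomy heuristics: maximum useful magnification of roughly 2x per millimetre of aperture, and light grasp scaling with the square of the aperture relative to a ~7mm dark-adapted pupil.

# Illustrative rules of thumb only; real limits depend on optics and seeing.
def max_useful_magnification(aperture_mm):
    # Common heuristic: about 2x per millimetre of aperture.
    return 2 * aperture_mm

def light_grasp_vs_eye(aperture_mm, pupil_mm=7.0):
    # Light gathering scales with collecting area, i.e. diameter squared.
    return (aperture_mm / pupil_mm) ** 2

for d in (50, 90, 200):
    print(f"{d} mm: up to ~{max_useful_magnification(d)}x, "
          f"~{light_grasp_vs_eye(d):.0f}x the naked eye's light grasp")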



As for where the planets are: Venus, Mars, Jupiter and Saturn are generally easy to spot if you look up their positions online and know your way round the constellations. Because the planets are small disks rather than points at effectively infinite distance, as stars are, they do not (on most nights) twinkle as the stars do.

ag.algebraic geometry - "Spec" of graded rings?

From the discussion here, it seems that general Hochschild cohomology classes correspond to deformations where the deformation parameter can have nonzero degree.



So I have some naive and maybe stupid questions:



How can I interpret this geometrically? What is the "base space" of the deformation? What kind of object is it?



In other words, what is the "Spec" of a graded ring or a graded algebra (e.g. $k[t]$ or $k[[t]]$ or $k[t]/(t^n)$ with the variable $t$ having some nonzero degree)?



(..... maybe what I'm really asking is: Is there a theory of "schemes" where the "affine schemes" correspond to graded commutative rings rather than commutative rings? .....)

Sunday, 29 September 2013

the sun - What is the color progression of a typical sunrise?

These steps are included to show how the result was produced.



I used Stellarium to simulate a sunrise and took screenshots of the progression. Each screenshot was 2 seconds apart, and Stellarium was accelerated to 10 minutes per second.



Then I used ImageMagick to batch-crop those 13 images with this command:



mogrify -crop 1280x1024+0+0 *.png


(those numbers are the size of my screen. I cropped to get rid of my second screen)



Next I put the images into Color Thief and got the palette of each image. This is the result for those 13 images:



[13 colour palette images, one per screenshot]


Those are the colour palettes. You can pick whatever colours you want from them with a colour picker, such as KColorChooser.
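If you would rather script the palette-extraction step than use the web tool, something like the following sketch could work. It assumes the third-party Python colorthief package (a port of Color Thief) and hypothetical file names:

# Batch-extract a colour palette from each screenshot.
# Assumes `pip install colorthief` and files sunrise_00.png ... sunrise_12.png.
from colorthief import ColorThief

for i in range(13):
    palette = ColorThief(f"sunrise_{i:02d}.png").get_palette(color_count=6)
    print(i, palette)  # one list of (R, G, B) tuples per image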

exoplanet - Are there any Stars we know don't have planets?


I am beginning to assume that our solar system is not unique and that every star has several planets.




Not quite, but indeed a study published in Nature in 2012 found that, based on our observations so far, roughly 17% of stars host Jupiter-mass planets, 52% host "cool Neptunes" and 62% host super-Earths. (Note that these percentages do not add up to 100%, because they are not mutually exclusive possibilities.) This was particularly surprising, because half of all visible stars are believed to be in binary systems, which would make planetary systems very unstable, but some binary systems have been found to have planets too.



So indeed it seems the majority of stars have planets, but it's very unlikely that all of them do.



However, the exact answer to your question "do we know of any stars with no planets" has got to be "no", because there remains a possibility that they have planets that we simply haven't been able to detect, because of limitations in our techniques to detect them.

Saturday, 28 September 2013

What is the minimum mass required so that objects become spherical due to their own gravity?

This question is more complicated than it seems like it should be!



There is no threshold mass or density beyond which an object becomes perfectly spherical; even supermassive stars are slightly oblate. The only exception is black holes, which are perfectly round up until you reach the quantum level. If we want a simple answer, most guesses are somewhere around $\frac{1}{10000}$ the mass of Earth, or $6\cdot10^{20}$ kg, but that is very approximate and depends on the composition of the object.

galois theory - "Conjugacy rank" of two matrices over field extension

Suppose that the field extension $L/K$ is separable and that $K$ is
infinite.



Let $A, B\in M_n(K)$ and suppose there exists a matrix $Q\in M_n(L)$ such that $QA=BQ$ and which has rank $r$. We want to show there is a matrix $Q'\in M_n(K)$ such that $Q'A=BQ'$ and which has rank at least $r$.



First, by replacing $L$ by the subfield of $L$ generated over $K$ by the coefficients of $Q$ if we need to, we can suppose that $L/K$ is a finitely generated extension. By using a maximal purely transcendental extension of $K$ contained in $L$ as an intermediate step, we see that it is enough to consider separately the cases in which (i) $L/K$ is purely transcendental or (ii) $L/K$ is finite.



In case (i), let $S$ be a transcendence basis of $L/K$. Since the matrix $Q$ has rank $r$, it has an $r\times r$ minor $M$ with non-zero determinant. As the entries of $Q$ are finitely many rational functions in a finite number of the indeterminates in $S$, and since the field $K$ is infinite, we can assign values from $K$ to the indeterminates which appear in $Q$ in such a way that we obtain a matrix $Q'\in M_n(K)$ (i.e., we avoid zeros in denominators) and such that the minor of $Q'$ corresponding to $M$ still has non-zero determinant. It is clear that $Q'A=BQ'$ and that the rank of $Q'$ is at least $r$, so we are done in this case.



Let us now consider case (ii). Up to enlarging $L$, we can assume that $L/K$ is Galois, with Galois group $G$. As before, the matrix $Q$ has an $r\times r$ minor $M$ with non-zero determinant. Suppose the elements of $G$ are $g_1=1_G,g_2,\dots,g_n$, and consider the polynomial $f(X_1,\dots,X_n)=\det_M\left(\sum_{i=1}^n g_i(Q)X_i\right)\in L[X_1,\dots,X_n]$; here the elements of $G$ act on the matrix $Q$ in the obvious way, and $\det_M$ denotes the determinant of the minor of its argument corresponding to $M$. Notice that $f$ is not the zero polynomial, because the coefficient of $X_1^r$ is precisely $\det_M Q\neq0$.



Since $L$ is infinite and the elements of $G$ are algebraically independent (Lang, Algebra, VI, §12, Theorem 12.2), the map
$$ x \in L \mapsto f(g_1(x),\dots,g_n(x))\in L$$
is not identically zero.
It follows that there exists a $\xi\in L$ such that
the matrix $Q'=\sum_{i=1}^n g_i(\xi)g_i(Q)$ has $\det_M Q'\neq0$; in
particular, the rank of $Q'$ is at least $r$. Since the extension $L/K$ is
Galois and $Q'$ is fixed by all elements of $G$, we see that $Q'\in M_n(K)$. Finally, since the matrices $A$ and $B$ have their coefficients in $K$, $Q'A=BQ'$.

Friday, 27 September 2013

co.combinatorics - How big is the sum of smallest multinomial coefficients?

If I'm not mistaken, you at least have the following bound:



$$
s(B) \;\leq\; d^n \,-\, \exp\left( -2\left( \frac{\ln B}{\ln(n!) - d\ln((n/d)!)} \right)^2 \right) d^{n}.
$$



You may want to do a little fiddling around with Stirling's approximation to reduce the denominator in the fraction a bit further, but I didn't want to weaken the bound more than necessary.



I found this by upper-bounding the probability that a multinomial distribution with uniform bin probabilities yields an "atypical" observation, using Hoeffding's inequality. "Atypical" is here defined in the information-theoretic sense: An atypical observation is one whose logarithmic probability falls short of the expected logarithmic probability over the whole distribution (see Cover and Thomas' Elements of Information Theory, particularly chapter 11).



Some more details:



Let $p$ be point probabilities for a multinomial distribution with uniform bin probabilities:



$$
p(c) = \left( \frac{n!}{c_1!c_2!\cdots c_d!} \right) d^{-n}.
$$



Notice that



$$
\ln p(c) \,+\, n\ln d \;=\; \ln \frac{n!}{c_1!c_2!\cdots c_d!},
$$



and thus



$$
\ln p(c) \,+\, n\ln d \;\leq\; \ln B \quad\iff\quad \frac{n!}{c_1!c_2!\cdots c_d!} \;\leq\; B.
$$



Further, the entropy of the flat multinomial distribution is less than the entropy of $n$ samples from the flat categorical distribution with $d$ bins: The categorical includes order information as well as frequency information, while the multinomial only includes frequency information.



We thus have the following bound on the expected value of $-ln p(c)$:



$$
E\left[\, -\ln p(c)\, \right] \leq
n \cdot H\left( \frac{1}{d}, \frac{1}{d}, \ldots, \frac{1}{d} \right)
= n\ln d,
$$



or in other words, $-n\ln d \leq E[\ln p(c)]$.



Further, the minimum and maximum values of the random variable $\ln p(c)$ are



$$
a \;=\; \min_c \ln p(c) \;=\; \ln \frac{n!}{n!\, 0!\, 0! \cdots 0!} d^{-n} \;=\; -n\ln d;
$$



$$
b \;=\; \max_c \ln p(c) \;=\; \ln \frac{n!}{(n/d)!\, (n/d)! \cdots (n/d)!} d^{-n} \;=\; \ln(n!) - d\ln((n/d)!) - n\ln d.
$$



The squared distance between these two extremes is consequently



$$
(a - b)^2 = \left( \ln(n!) - d\ln((n/d)!) \right)^2.
$$



We can now make the following comparisons:



\begin{eqnarray}
\Pr\left\{ \frac{n!}{c_1!c_2!\cdots c_d!} < B \right\}
& = &
\Pr\left\{ \ln p(c) \,+\, n\ln d < \ln B \right\}
\\
& \leq &
\Pr\left\{ \ln p(c) \,-\, E[\ln p(c)] < \ln B \right\}
\\
& = &
1 \,-\, \Pr\left\{ \ln p(c) \,-\, E[\ln p(c)] \geq \ln B \right\}
\\
& \leq &
1 \,-\, \exp\left( -2\left( \frac{\ln B}{\ln(n!) - d\ln((n/d)!)} \right)^2 \right).
\end{eqnarray}



The first inequality follows from the fact that a proposition is at most as probable as any of its logical consequences; the second is an application of Hoeffding's inequality.



We thus have



$$
\Pr\left\{ \frac{n!}{c_1!c_2!\cdots c_d!} < B \right\}
\leq
1 \;-\; \exp\left( -2\left( \frac{\ln B}{\ln(n!) - d\ln((n/d)!)} \right)^2 \right).
$$



By multiplying by $d^n$, we obtain the inequality stated above, since the probability of a sample from a flat multinomial distribution is equal to the corresponding multinomial coefficient divided by $d^n$.
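As a sanity check (my own sketch, not part of the original argument), the bound can be compared against a brute-force evaluation of $s(B)$ for small $n$ and $d$, where $s(B)$ is the sum of the multinomial coefficients below $B$:

# Brute-force s(B) versus the Hoeffding-style bound above, for small n and d.
import math

def compositions(n, d):
    # All ordered tuples (c_1, ..., c_d) of non-negative integers summing to n.
    if d == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, d - 1):
            yield (first,) + rest

def multinomial(c):
    out = math.factorial(sum(c))
    for ci in c:
        out //= math.factorial(ci)
    return out

def s_exact(B, n, d):
    return sum(m for m in map(multinomial, compositions(n, d)) if m < B)

def s_bound(B, n, d):
    # d^n - exp(-2 (ln B / (ln n! - d ln((n/d)!)))^2) d^n; assumes d divides n.
    width = math.lgamma(n + 1) - d * math.lgamma(n // d + 1)
    return d**n * (1 - math.exp(-2 * (math.log(B) / width) ** 2))

n, d, B = 12, 3, 1000
print(s_exact(B, n, d), s_bound(B, n, d))  # exact value <= bound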

Thursday, 26 September 2013

photography - Simulating location of the moon

The Isaac Newton Group of Telescopes has a web resource for plotting the altitude of celestial bodies, given a date and your latitude, longitude, altitude, etc. It is used for general RA, Dec. angles, but it automatically plots the altitude of the moon as well, for any plot that you generate. It's called Staralt.

ct.category theory - Methods for constructing Frobenius structures

Let ${mathbb F}:({mathcal A},{mathcal E}_{mathcal A})to({mathcal B},{mathcal E}_{mathcal B})$ be an exact functor between exact categories, and suppose ${mathbb F}$ has both a left adjoint ${mathbb F}_lambda$ and a right adjoint ${mathbb F}_rho$. Then the class



${mathcal E} := {Xto Yto Zin{mathcal E}_{mathcal A} | {mathbb F}Xto{mathbb F}Yto{mathbb F}Zin{mathcal E}_{mathcal B}}$



defines another exact structure on ${mathcal A}$.



It is interesting to ask for criteria to decide when this exact structure is Frobenius. One such criterion is the following:



Suppose ${mathcal E}_{mathcal B}$ is the split exact structure, and that for each $Xin{mathcal A}$ the unit $eta_X: Xto{mathbb F}_rho{mathbb F}X$ is an ${mathcal E}_{mathcal A}$-monomorphism, while the counit ${mathbb F}_lambda{mathbb F}Xto X$ is an ${mathcal E}_{mathcal A}$-epimorphism. Then $({mathcal A},{mathcal E})$ has enough projectives and injectives, and the classes ${mathcal P}$/${mathcal I}$ of projective/injective objects in $({mathcal A},{mathcal E})$ are given by



${mathcal P} = {Xin{mathcal A} | Xtext{ is a summand of some }{mathbb F}_lambda , Yin{mathcal B}}$ and



${mathcal I} = {Xin{mathcal A} | Xtext{ is a summand of some }{mathbb F}_rho Y, Yin{mathcal B}}$,



respectively. Consequently, if ${mathbb F}_lambda$ and ${mathbb F}_rho$ have the same image, then $({mathcal A},{mathcal E})$ is Frobenius.



This seems very restrictive, but in fact there are at least two cases I know where it can be applied:



(1) If ${mathcal A}$ is a dg-category, then the forgetful functor ${mathbb F}: text{dg-mod}({mathcal A})totext{gr-mod}({mathcal A})$ fulfills the requirements of the criterion above and thus can be used to construct a Frobenius structure on $text{dg-mod}({mathcal A})$ (for pretriangulated dg-categories ${mathcal A}$, this structure can in turn be restricted to ${mathcal A}$ itself).



(2) If $G$ is a finite group, $H$ is a subgroup, then the fortgetful functor $Gtext{-mod}to Htext{-mod}$ has left adjoint $text{Ind}^G_H$ and right adjoint $text{Coind}^G_H$, and these two functors coincide for $(G:H)<infty$. In this case, the above criterion therefore applies to provide $Gtext{-mod}$ with a Frobenius structure "relative to $H$".



Question



Do you know more criteria for constructing Frobenius structures and situations where they can be applied?



For example, I would be interested in a criterion which can be applied to show that the category of maximal Cohen-Macaulay modules over a Gorenstein ring is Frobenius.

pr.probability - Sample from uniform distribution vs. Sample from random distribution

You're right that things change for $m>1$; I was thinking sloppily.



Assume $U=\{1,\ldots,n\}$ for concreteness. If $Y_1,\ldots,Y_m$ are chosen independently and uniformly from $U$, then for any $k_1,\ldots,k_m\in U$, we of course have
$$
\Pr[Y_1=k_1,\ldots,Y_m=k_m] = \frac{1}{n^m}.
$$



On the other hand, if $x=(x_1,\ldots,x_n)$ is chosen uniformly from the standard simplex of probability vectors on $U$ and $Y_1,\ldots,Y_m$ are then chosen independently according to $x$, then
$$
\Pr[Y_1=k_1,\ldots,Y_m=k_m] = \mathbb{E}\,\Pr[Y_1=k_1,\ldots,Y_m=k_m \mid x]
= \mathbb{E}\prod_{i=1}^m x_{k_i} = \frac{(n-1)!}{(n-1+r)!}\prod_{j=1}^n r_j!,
$$
where $r_j = \#\{1\le i \le m : k_i=j\}$ and $r=r_1 + \cdots + r_n$. This last expectation can be proved most easily from Lemma 1 in this paper.

Wednesday, 25 September 2013

computational complexity - Boolean Cube of Primes

The argument that gives you cubes in dense sets shows, roughly speaking (via repeated applications of Cauchy-Schwarz), that the number of $k$-dimensional cubes in a set of density $\delta$ is at least something around $\delta^{2^k}n^{k+1}$, which is the number you would get in a random set. (I am in fact giving the result for the integers mod $n$, but one can think of a subset of the first $n$ integers of density $\delta$ as a subset of $\mathbb{Z}_{2n}$ of density $\delta/2$.) So this says that the best dimension should be that $k$ for which $\delta^{2^k}$ is around $n^{-(k+1)}$. Taking logs twice, that says (ignoring constants) that $k+\log\log(1/\delta)=\log k+\log\log n$, or roughly $k=\log\log n - \log\log(1/\delta)$. If $\delta=1/\log n$, then that second term is not making much difference.
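For a feel of the numbers, here is a small sketch (my own, with illustrative parameters) that solves $\delta^{2^k} = n^{-(k+1)}$ for $k$ by fixed-point iteration and compares the result with $\log\log n$:

# Solve delta^(2^k) = n^(-(k+1)) for k: 2^k = (k+1) ln(n) / ln(1/delta).
import math

def best_dimension(n, delta):
    k = 1.0
    for _ in range(100):  # simple fixed-point iteration, converges for k > 1
        k = math.log2((k + 1) * math.log(n) / math.log(1 / delta))
    return k

for p in (9, 18, 36):
    n = 10 ** p
    delta = 1 / math.log(n)  # density comparable to the primes' 1/log n
    print(p, round(best_dimension(n, delta), 2), round(math.log2(math.log2(n)), 2))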



The worst case is for random sets. Although the primes are not random, one can make them more random, and denser, by applying the so-called W-trick (roughly speaking, you restrict to an arithmetic progression that contains no multiples of small primes, thereby increasing the chances that a number in that progression is prime). But this does not increase the density by enough to have a significant effect on the estimate you might get for $k$. If you're interested in $2^k$, then the question becomes more delicate and the W-trick might get you a useful, if smallish, improvement. But basically it looks to me as though your $\log\log n$ estimate for the dimension should be right, up to a constant.

Tuesday, 24 September 2013

examples - Do good math jokes exist?

"Why did the chicken cross the Mobius band?"



The question isn't whether good math jokes exist, but whether they can be classified. The example above works because it plays on one's expectations of the "chicken crossing the road" jokes. Another one in the same vein, known as the shortest math joke:



"Let epsilon<0."



Another one, which I actually heard in class:



"Take a positive integer N. No wait, N is too big; take a positive integer k."



Here is a non-exhaustive classification of math jokes:



  • Puns on mathematical terminology

  • Mathematical reasoning in non-mathematical setting

  • Twists on expectations

  • Meta-jokes approached in a mathematical mode of enquiry

A joke can belong to more than one category. For example, the "dog and cow knot theorists" joke has both puns and a twist on expectations.



By the way, I would exclude jokes which are based purely on stereotypes, like the above joke on the extrovert mathematician, because I don't find them funny.



I leave with one of my favorite meta-jokes:



"How many members of a certain demographic group does it take to perform a specified task? A finite number: one to perform the task and the remainder to act in a manner stereotypical of the group in question."

big list - Undergraduate Level Math Books

What are some good undergraduate level books, particularly good introductions to (Real and Complex) Analysis, Linear Algebra, Algebra or Differential/Integral Equations (but books in any undergraduate level topic would also be much appreciated)?



EDIT: More topics (Affine, Euclidean, Hyperbolic, Descriptive & Differential Geometry, Probability and Statistics, Numerical Mathematics, Distributions and Partial Differential Equations, Topology, Algebraic Topology, Mathematical Logic etc)



Please post only one book per answer so that people can easily vote the books up/down and we get a nice sorted list. If possible post a link to the book itself (if it is freely available online) or to its amazon or google books page.

ag.algebraic geometry - Extension of a holomorphic vector bundle

Let $E$ be a holomorphic vector bundle over $\mathbb{P}^n\setminus\{[1,0,0,\cdots,0]\}$. Let $D$ be a connection on $E$. Let $\widetilde{E}$ be an extension of $E$. Since $\widetilde{E}$ is reflexive, i.e. its double dual is isomorphic to itself, $\widetilde{E}$ is unique up to isomorphism. My questions are



1) Is it possible that $\widetilde{E}$ is a vector bundle?



2) If $\widetilde{E}$ is a vector bundle, does it admit a connection $\widetilde{D}$ which is naturally induced by $D$?



Edit: For the first question, I just proved that $\widetilde{E}$ is a vector bundle if and only if $\widetilde{E}$ splits. I am wondering if this result was already known. If so, does anyone know a reference for it?

Proof of the result that the Galois group of a specialization is a subgroup of the original group?

Here is a broader setup for your question. Let $A$ be a Dedekind domain with fraction field $F$, $E/F$ be a finite Galois extension, and $B$ be the integral closure of $A$ in $E$.
Pick a prime $\mathfrak p$ in $A$ and a prime $\mathfrak P$ in $B$ lying over $\mathfrak p$.
The decomposition group $D(\mathfrak P|\mathfrak p)$ naturally maps by reduction mod $\mathfrak P$ to the automorphism group $\text{Aut}((B/\mathfrak P)/(A/\mathfrak p))$, and Frobenius showed this is surjective. The kernel is the inertia group, so if $\mathfrak p$ is unramified in $B$ then we get an isomorphism from $D(\mathfrak P|\mathfrak p)$ to $\text{Aut}((B/\mathfrak P)/(A/\mathfrak p))$, whose inverse is an embedding
of the automorphism group of the residue field extension into $\text{Gal}(E/F)$.



If we take $A = {\mathbf Z}$ then we're in the number field situation, and this is where Frobenius elements in Galois groups come from.



In your case you want to take $A = {\mathbf Q}[t]$, so $F = {\mathbf Q}(t)$.
You did not give any assumptions about $f(x,t)$ as a polynomial in ${\mathbf Q}[x,t]$. (Stylistic quibble: I think it is better to write the polynomial as $f(t,x)$, specializing the first variable, but I'll use your notation.) Let's assume $f(x,t)$ is absolutely irreducible, so the ring $A' = {\mathbf Q}[x,t]/(f)$ is integrally closed. [EDIT: I should have included the assumption that $f$ is smooth, as otherwise $A'$ will not be integrally closed, but this "global" int. closed business is actually not so important. See comments below.] Write $F'$ for the fraction field of $A'$. After a linear change of variables we can assume $f(x,t)$ has a constant coefficient for the highest power of $x$, so $A'$ is the integral closure of $A$ in $F'$.



Saying for some rational $t_0$ that the specialization $g(x) = f(x,t_0)$ is separable in ${\mathbf Q}[x] = (A/(t-t_0))[x]$ implies the prime $(t-t_0)$ is unramified in $A'$. Let $E$ be the Galois closure of $F'/F$ and $B$ be the integral closure of $A$ in $E$. A prime ideal that is unramified in a finite extension is unramified in the Galois closure, so $(t-t_0)$ is unramified in $B$. For any prime $\mathfrak P$ in $B$ that lies over $(t-t_0)$, the
residue field $B/\mathfrak P$ is a finite extension of $A/(t-t_0) = \mathbf Q$, and since
$E/F$ is Galois the field $B/\mathfrak P$ is normal over $A/(t-t_0)$. These residue fields have characteristic 0, so they're separable: $B/\mathfrak P$ is a finite Galois extension of $\mathbf Q$. I leave it to you to check that $B/\mathfrak P$ is the Galois closure of $g(x) = f(x,t_0)$ over $\mathbf Q$. Then the isomorphism of $D(\mathfrak P|(t-t_0))$ with $\text{Aut}((B/\mathfrak P)/\mathbf Q) = \text{Gal}((B/\mathfrak P)/\mathbf Q)$ provides (by looking at the inverse map) an embedding of the Galois group of $g$ over $\mathbf Q$ into the Galois group of $f(x,t)$ over $F = {\mathbf Q}(t)$.



I agree with Damiano that there are problems when the specialization is not separable. In that case what happens is that the Galois group of the residue field extension is identified not with the decomposition group (a subgroup of the Galois group of $E/F$) but with the quotient group $D/I$, where $I = I(\mathfrak P|\mathfrak p)$ is the inertia group, and you don't generally expect a proper quotient group of a subgroup to naturally embed into the original group.

Monday, 23 September 2013

What was the orbit of the meteor that impacted the Moon on 2013 September 11?

Out of curiosity I looked into this a bit. North/south on the Moon is pretty irrelevant for the object's orbit, but it struck at East 339 degrees (or West 21 degrees), near the Lubiniezky craters in the northwest part of Mare Nubium - which can be seen here - towards the center, but a little bit in the south-eastern part of the map.






and you can see the phase of the moon here. It changes slowly enough that a 24 hour range doesn't make a huge difference.



http://lunaf.com/lunar-calendar/2013/09/11/



But apart from the crater looking roughly straight-on, it's probably impossible to say what orbit it came from, because the angle of approach is too big an unknown. It hit from kind of both the Earth-side and Sun-side direction, so it could have been a temporarily captured asteroid around the Earth, or a small comet or asteroid coming back from around the Sun. Either could have made that impact, I would think.

the sun - Solar System formation, considering its and universe's age

Early in the life of the universe it is thought that star formation in "metal-free" gas favoured larger stars. These had short lives and very quickly enriched the interstellar and intergalactic medium with nucleosynthesis products.



In fact the main enrichment of the interstellar medium (ISM) in our own Galaxy is thought to have taken place some billions of years before the Sun was born. Stars that are being formed now have basically the same composition as the Sun. The plot below shows the iron abundance as a function of deduced age for stars in the Gaia-ESO spectroscopic survey (http://adsabs.harvard.edu/abs/2014A%26A...565A..89B). As you can see, there has not been too much change in iron abundance over the last 10 billion years.



[Plot: iron abundance vs. stellar age for Gaia-ESO stars, from Bergemann et al. 2014]



I think the key to this is that most of the heavy elements in the gas in our Galaxy come from stars more massive than the Sun, which have much shorter lifetimes as a result. [Stellar lifetime is roughly proportional to $M^{-2.5}$.] So several generations of these have lived and died (and must have done so).
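To put rough numbers on that scaling, here is a quick sketch (my own illustration, normalizing to an assumed ~10 Gyr lifetime for the Sun):

# Rough main-sequence lifetimes from t ~ t_sun * (M / M_sun)^(-2.5),
# normalized to an assumed ~10 Gyr lifetime for the Sun.
T_SUN_GYR = 10.0

def lifetime_gyr(mass_solar):
    return T_SUN_GYR * mass_solar ** -2.5

for m in (0.5, 1, 3, 10, 25):
    print(f"{m:>4} M_sun: {lifetime_gyr(m):9.4f} Gyr")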



To add some more detail: almost all the elements heavier than helium (known as "metals") are produced inside stars. To get back into the ISM, the "processed" material has to get out of the centres of the stars. This occurs in basically three ways. (i) Supernovae - the explosions associated with the final collapse of very massive stars ($>10 M_{\odot}$). These have short lives, $<50$ million years, so many generations of such stars can have lived and died before the Sun was formed. (ii) Giant star winds. These occur towards the ends of the lives of stars with mass $<10 M_{\odot}$. Some fraction of processed material gets dredged out of the centre and expelled into the ISM. The lifetimes of these stars are 50 million to $>10$ billion years. Most of the enrichment is caused by stars about in the middle of this range. These stars go on to fade as white dwarfs. (iii) Type Ia supernovae. Explosions triggered by mass transfer onto a white dwarf (probably in binary systems), causing a detonation and the scattering of metals into the ISM. Progenitors have lifetimes similar to the giant stars in (ii).



The numbers of white dwarfs we see in the Galaxy and the deduced rate of supernova explosions that are occurring (e.g. from counting the number of neutron star remnants) pretty much matches up with what is required to produce the metallicity of the ISM and the Sun. But there are plenty of variables and unknowns and not everything is solved. For instance it looks quite likely that star formation needed to be higher in the past; and some parts of the Galaxy (e.g. the bulge) have a higher metallicity, thought to be due to a period of very vigorous high mass star formation 10 billion years ago.

soft question - Choice of adviser

One thing to keep in mind is that if there is no one at your school, you can often either have a local advisor and an advisor elsewhere, just an advisor elsewhere, or have a local advisor who will sign things while you're really talking to someone else.



As far as finding an advisor at your school, go to talks! Professors (at least here at U. Penn) give talks fairly regularly about what they're working on, and if they aren't, you can often email them to ask if they can talk to you about what they do, and then if what they do is like what you're interested in, you can talk to them about being your advisor. Another way to do it is to talk to professors who've taught classes that you've taken, and try to do a reading course with them.

Sunday, 22 September 2013

CCD in telescopes: Observation and Astro-photography

You will typically find that CCD cameras are sold separately from the telescopes. That way the telescope can be used visually or for astrophotography, by fitting either an eyepiece or a camera. Most telescopes do ship with eyepieces, as they are much less expensive than a CCD and it is sort of expected, but most observers will typically replace this eyepiece with a better one anyway; so, to all intents and purposes, when you buy a telescope you are just buying the Optical Tube Assembly and maybe a mount, and you add to that all the accessories you need to accomplish your objectives. Different CCDs will be used for different purposes, for instance planetary observations or deep-space objects.



That telescope has a fork mount. Note that this constrains how much equipment you can put on the end of the telescope without getting in the way of the mount. It also is an Alt-Az mount, which means it is not polar aligned; stars will rotate in the field of view, requiring a field de-rotator in addition to a CCD.



It looks like a nice enough scope for visual observations, but if you are interested in astrophotography, you might want to hold off for a bit, maybe get a scope for visual use first, and maybe make use of well-set-up robotic astrographs that you can hire over the internet, like slooh.com and itelescope.net - get some experience using premium gear that you don't have to worry about setting up or maintaining, and while you do that, work out what gear you will be happy about getting for yourself. The fact you are asking this question about CCDs suggests you aren't yet ready to spend money on astrophotography gear...



Getting set up well for visual stuff can set you back hundreds... Getting set up for astrophotography can set you back many thousands, so you want to be well equipped to make good purchasing decisions. And you can rent a lot of time on some pretty good telescopes for that same $...

Which planet or moon has all resources that can be used to sustain life in a controlled biosphere?

No known planet except Earth can be colonized by a human civilization. There are at least three serious issues: temperatures at around 300K, an atmosphere of appropriate pressure, and damaging cosmic radiation (low gravity is also a worry for long-term human presence). Minerals are less of a problem (and water can be synthesized). Mars and the Moon are close enough to the Sun for pressurized greenhouses to provide an environment where humans could stay without a space suit. However, the problem of cosmic radiation remains, and such "biospheres" must remain forever dependent on supportive technology. Venus's inhospitable atmosphere makes it completely unfeasible.



Using robots to mine planets and moons in the Solar system is certainly a realistic and interesting option. Sending humans there is very expensive and not necessarily sensible. Moreover, it is not clear whether humanity should send any life (including microorganisms) to such places, as this would inevitably change their nature forever (like sending rabbits/cats to Australia); all ongoing space missions are sterilized before being sent on their way.

Friday, 20 September 2013

artificial satellite - How can any probe be safe at one million degree Centigrade temperature?

While the solar corona is very hot, it also has very low density: Wikipedia gives a ballpark figure of about $10^{15}$ particles per cubic meter, which, at 1 million kelvins, translates to a pressure of about 0.01 Pa. That's a pretty good vacuum, comparable to that in low Earth orbit.



The low pressure means that the coronal plasma doesn't hold much heat that it could transfer to the spacecraft. In practice, a much bigger issue for heat management so close to the Sun is the intense sunlight that transfers a lot of heat to anything that absorbs it. As Gerald notes in his answer, the main way in which the Solar Probe Plus deals with that issue is by placing a highly reflective and heat-resistant shield on the Sun-facing side of the probe to shield it from sunlight, combined with an efficient cooling system and an elliptical orbit that only takes the probe close to the Sun for relatively short periods at a time.
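The pressure figure quoted above is a one-line ideal gas calculation, $P = n k_B T$; a quick check (my own sketch, using the ballpark numbers from the answer):

# Ideal gas pressure of the coronal plasma: P = n * k_B * T.
K_B = 1.380649e-23   # Boltzmann constant, J/K

n = 1e15             # particles per cubic metre (ballpark coronal density)
T = 1e6              # temperature in kelvins

print(n * K_B * T)   # ~0.014 Pa, i.e. a very good vacuum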

Wednesday, 18 September 2013

exoplanet - What is the orbital path of the newly discovered star-less planet PSO J318.5-22?

There is no "orbital path" detected; that's why it is a "free-floating planet". There is no radial velocity measured, but the information given by its kinematics and location shows that it belongs to the beta Pictoris group, which is a stellar group.



For more dirty details, have a look at the submitted paper on PSO J318.5-22:
http://arxiv.org/abs/1310.0457



Comment:
Apart from that (the following reflects my personal opinion), the term "free-floating planet" is ill-chosen; it is clearly a very low-mass object, but since it is not orbiting another, larger object, it seems to me that the term "planet" is not pertinent. I think that it should be considered more as a "very low-mass brown dwarf". The problem arises from the IAU definitions of a brown dwarf and an exoplanet, which are probably not well suited for this kind of object (and are not really physical). You will notice, by the way, that this object is moving in a stellar group, which kind of reinforces my point.

Tuesday, 17 September 2013

ca.analysis and odes - Polynomials and L^p(R)

If one wants to do algebra, or symbolic computation, then polynomials are indeed the simplest type of function. But if one wants to do analysis, or numerical computation, then actually the best functions are the bump functions - they are infinitely smooth, but also completely localised. (Gaussians are perhaps the best compromise, being extremely well behaved in both algebraic and analytic senses.)



That said, I'm not sure what your question is really after. If you want a function space that contains the polynomials, you could just take ${\bf R}[x]$. Of course, this space does not come equipped with a special norm, but polynomials, being algebraic objects rather than analytic ones, are not naturally equipped with any canonical notion of size. Due to their growth at infinity, any such notion of size would have to be mostly localised, as is the case with the weighted spaces and distribution spaces given in other answers.

ag.algebraic geometry - Can the étale topology ever be realized as an "honest" topology?

I think Senor Borger speaks truth.



Maybe I am talking nonsense, but this could be a sketch of some way to proceed (maybe not...):



Key feature of a site which we cannot have in a classical topology is that there may be several distinct "inclusion" morphisms from a smaller open into a bigger one.



Hence, we get a "fail" as soon as we encounter a scheme admitting an étale open with several inclusion maps, for example the inclusion of some open into the whole scheme.



Suppose we have a scheme admitting a non-trivial étale covering (i.e. one with non-trivial étale fundamental group); this is also an étale open, but by the action of the étale fundamental group it has several non-equal inclusions into the whole scheme. So we have an effect which is impossible in a classical, "honest" as you say, topology.



In the case our scheme indeed has trivial étale fundamental group, pick a Zariski open which has non-trivial étale fundamental group. Basically, take some finite covering and remove the ramification divisor; the complement is Zariski open. Now compose this Zariski open with the finite covering (which has become étale as the ramification has gone to nowhere land); the same problem occurs, so again we get a "fail". Hence, to get an honest topology, we need trivial étale fundamental group for the scheme and all its Zariski opens; well... that sounds pretty close to being just some closed points again.

Monday, 16 September 2013

complex geometry - Relationship between Line Bundles with isomorphic ring of sections

Suppose two positive holomorphic line bundles $L_1 \to X_1$, $L_2\to X_2$ over two projective complex manifolds $X_1, X_2$ have isomorphic rings of sections $R=R_1=R_2$, where $R_i=\oplus_{m=0}^\infty\Gamma(X_i,mL_i)$ and the isomorphism is as graded ${\mathbb C}$-algebras.



Is there any relationship between $X_1$ and $X_2$? E.g., some morphism between them? How about the relationship to $\text{Proj}\, R$?



Thanks.

gr.group theory - What methods exist to prove that a finitely presented group is finite?

Suppose I have a finitely presented group (or a family of finitely presented groups with some integer parameters), and I'd like to know if the group is finite. What methods exist to find this out? I know that there isn't a general algorithm to determine this, but I'm interested in what plans of attack do exist.



One method that I've used with limited success is trying to identify quotients of the group I start with, hoping to find one that is known to be infinite. Sometimes, though, your finitely presented group doesn't have many normal subgroups; in that case, when you add a relation to get a quotient, you may collapse the group down to something finite.



In fact, there are two big questions here:



  1. How do we recognize large finite simple groups? (By "large" I mean that the Todd-Coxeter algorithm takes unreasonably long on this group.) What about large groups that are the extension of finite simple groups by a small number of factors?

  2. How do we recognize infinite groups? In particular, how do we recognize infinite simple groups?

(For those who are interested, the groups I am interested in are the symmetry groups of abstract polytopes; these groups are certain nice quotients of string Coxeter groups or their rotation subgroups.)

Saturday, 14 September 2013

What is the upper and lower limit of temperatures found on stars?

The answer depends on what you'd want to consider as a "star." If you're just thinking about stars on the main sequence, then you can just refer to the classical stellar type letters, "OBAFGKM" (which has relatively recently been extended to accommodate the coolest brown dwarfs with the letters "LTY"), where O-stars are the hottest stars (~30,000 K) and Y-stars are the coldest, so-called "room-temperature" stars (~300 K).



Self-gravitating, gaseous objects are incapable of fusing deuterium below about 13 Jupiter masses, and thus simply collapse and cool perpetually (as is the case for all the giant planets in our solar system). These objects can be colder than 300 K but are not technically stars as they do not undergo nuclear fusion.



For stars that leave the main sequence, two possible outcomes are a white dwarf star or a neutron star, both of which are born extremely hot: White dwarfs are born with surface temperatures of ~10^9 K, whereas neutron stars are born with surface temperatures of ~10^12 K. However, both white dwarfs and neutron stars cool as they age, with the coldest known white dwarfs being ~3,000 K, and neutron stars cooling to ~10^6 K.



So to answer the first part of your question: The coldest known stars are Y-stars (i.e. brown dwarfs) and the hottest known stars are either O-stars or young neutron stars, depending on whether you consider objects that have left the main sequence or not.



And as for strict lower and upper limits, the coldest stars possible are likely black dwarfs, which are what white dwarfs become after cooling for a very long time (>10^15 years). The hottest stars are likely the newly-born neutron stars I previously mentioned; it is very difficult to get much hotter than 10^12 K, because any excess energy is carried away via neutrinos.

Friday, 13 September 2013

pr.probability - Coefficients of holomorphic functions defined by Borel probability measures on the unit disc

Let $\mathcal M(\partial\mathbb D)$ denote the set of all Borel complex probability measures on $\partial\mathbb D$ (the unit circle in the complex plane). Define a mapping $\Phi:\mathcal M(\partial\mathbb D)\to \ell^{\infty}(\mathbb N)$ by
$$
\Phi(\mu)=(c_0,c_1,\ldots)
$$
where the $c_k$ are the coefficients of the Taylor expansion (around $z=0$) of the function
$$
f(z)=\int_{\partial\mathbb D}\frac{1}{1-wz}\, d\mu(w).
$$
Question: What is the set $\Phi(\mathcal M(\partial\mathbb D))$? Given a sequence, is there a simple criterion to verify whether it belongs to this space?



Motivation: Let $A=(a_0,a_1,\ldots)\in\Phi(\mathcal M(\partial\mathbb D))$ and let $g:U\subset\mathbb C\to \mathbb C$ be analytic with
$$g(z)=\sum_{k=0}^{\infty}b_kz^k.$$
Define $A*g:U\subset\mathbb C\to \mathbb C$ by
$$
A*g(z)=\sum_{k=0}^{\infty} a_kb_kz^k.
$$
The above integral representation can be used to give a short proof of
$$
\|A*g\|_{U}\leq \|g\|_{U},
$$
where $\|g\|_{U}=\sup_{z\in U}|g(z)|$.



PS: This question is a generalization of a question that arose in a discussion at Area 51.



Edit: I am correcting the question, because in the previous version the integrals could fail to be meaningful. In fact, I was looking for the criterion via a line integral representation.

Why does the focal length of a telescope have an effect on the magnification?

Sometimes this can be difficult to wrap your head around in Astronomy, as telescopes generally have a fixed aperture and focal distance, and simply use an eyepiece at the end to make a difference.



If you instead look at a camera, you can get the concept quite quickly. DSLR cameras have swappable lenses, and many lenses have non-fixed focal distances (zoom lenses). So as you zoom with the lens, you are effectively changing the focal distance of the camera, which results in an image zooming in and out. If you've been in photography for a while, you probably also realize that when you do this, it also affects your lighting: the greater your focal distance, the less of the total light you are actually utilizing. Here's a pretty rough (and simplified) diagram of what is happening:



[Diagram of focal distance and resulting image]



The crop size is defined by the sensor size. Any light that spreads beyond the sensor is not captured. By increasing the area of the image, the portion that is within the sensor decreases, resulting in a cropped image that appears to be zoomed in.



A telescope works in a very similar fashion, but the telescope has a fixed focal length and fixed aperture. As these are all fixed, the telescope is generally engineered to utilize all the light, whereas a DSLR-level camera will be built to allow diverse lens selection, which results in a lot of light wasted (light that is cropped by not reaching the viewfinder or sensor). The light is finally adjusted by the eyepiece. As everything before the eyepiece is more or less fixed, the choice of eyepiece is a direct trade between total light and image size.
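The eyepiece trade follows from the standard relation: magnification equals the telescope's focal length divided by the eyepiece's focal length. A small sketch with illustrative numbers:

# Magnification = telescope focal length / eyepiece focal length.
def magnification(telescope_fl_mm, eyepiece_fl_mm):
    return telescope_fl_mm / eyepiece_fl_mm

telescope_fl = 900.0          # mm, illustrative
for ep in (25.0, 10.0, 6.0):  # eyepiece focal lengths, mm
    print(f"{ep} mm eyepiece -> {magnification(telescope_fl, ep):.0f}x")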



It should probably also be noted that 'More distance = More zoom' isn't true everywhere in optics. Microscopes, for instance, tend to be designed in the opposite fashion.

graph theory - Algebraic Proof of 4-Colour Theorem?

4-Colour Theorem. Every planar graph is 4-colourable.



This theorem of course has a well-known history. It was first proven by Appel and Haken in 1976, but their proof was met with skepticism because it heavily relied on the use of computers. The situation was partially remedied 20 years later, when Robertson, Sanders, Seymour, and Thomas published a new proof of the theorem. This new proof still relied on computer analysis, but to such a lower extent that their proof was actually verifiable. Finally, in 2005, Gonthier and Werner used the Coq proof assistant to formalize a proof, so I suppose only the most die hard skeptics remain.



My question stems from reading this paper by Robin Thomas. In it, he describes several interesting reformulations of the 4-colour theorem. Here is one:



Note that the cross-product on vectors in $\mathbb{R}^3$ is not an associative operation. We therefore define a bracketing of a cross-product $v_1 \times \dots \times v_n$ to be a set of brackets which makes the product well-defined.



Theorem. Let $i, j, k$ be the standard unit vectors in $\mathbb{R}^3$. For any two different bracketings of the product $v_1 \times \dots \times v_n$, there is an assignment of $i,j,k$ to $v_1, \dots, v_n$ such that the two products are equal and non-zero.



The surprising fact is that this innocent looking theorem implies the 4-colour theorem.



Question. Is anyone working on an algebraic proof of the 4-colour theorem (say by trying to prove the above theorem)? If so, what techniques are involved? What partial progress has been made? Or do most people consider the effort/reward ratio of such an endeavor to be too high?



I think it would be interesting to have an algebraic proof, even a very long one, particularly if the algebraic proof does not use computers. Given its connection to many other areas (Temperley-Lieb Algebras), the problem seems to be amenable to other forms of attack.

Thursday, 12 September 2013

pr.probability - What is the probability that two random walkers will meet?

First, if the walks are on a Cartesian grid, then there is a bipartition of the grid into two subsets of sites (say A & B), each consisting of the neighbors of the other. The two walkers then remain on different sets A & B if they start on different sets, so that the probability of meeting on the same site is 0 -- as already noted by D. Zare above. If a non-bipartite grid (say the triangular grid) is used for the random walk, then this qualification would not apply.



Second, in 1 dim starting on the same (say even) subset of sites, the pair of walkers is equivalent to a single walk taking steps of -2, 0, or +2 with respective probabilities 1/4, 1/2, 1/4 -- and the question devolves to whether this new single walk will ever hit the origin (given that the initial distance from the origin is even). Polya's proof tells us that this new walk will eventually hit the origin with ultimate certainty.
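A quick simulation (my own sketch) confirms the step distribution of that difference walk: the difference of two independent ±1 steps is -2, 0, or +2 with probabilities 1/4, 1/2, 1/4.

# Empirical step distribution of the difference of two independent 1-d walkers.
import random
from collections import Counter

N = 100_000
steps = Counter(random.choice((-1, 1)) - random.choice((-1, 1)) for _ in range(N))
for step in sorted(steps):
    print(step, steps[step] / N)  # expect ~0.25, 0.50, 0.25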



Third, in 2 dim starting on the same subset of sites, the pair of (original) walkers is equivalent to a single walk taking steps of appropriate sizes -- if the step for original walker 1 is s(1) & for original walker 2 is s(2), with joint probability (1/4)x(1/4), then the new walk takes a step S with a probability which is 1/16 times the number of (ordered) pairs s(1) & s(2) which add to give S. Again Polya's proof tells us that the ultimate probability of the new walk hitting the origin is 1.



In higher dimensions the probability again following Polya is strictly less than 1.



The whole idea is that Polya's proof is robust under certain modifications to the random walk. In 2-dim the walk can be modified to have different probabilities for horizontal & vertical steps -- or diagonal steps can also be allowed (though then the "parity" considerations do not apply). Polya's proof however fails if the walk is given an "inversion-nonsymmetric" bias, say with different probabilities in the east & west (or north & south) directions. It also seems to me that if the walk is on a fractal grid, the certain return should hold for dimension less than or equal to 2, while uncertain return should apply for dimension $2+\epsilon$ for all $\epsilon>0$. Questions of what happens with a Cartesian grid from which edges are randomly deleted (with some probability p) also seem interesting -- and I think this has perhaps been considered in connection with "percolation theory".

riemannian geometry - Jacobi fields on a "bump surface"

ORIGINAL ANSWER DELETED



EDIT: I neglected to account for the need to parameterize by arclength. And I think I also misunderstood and thought that you wanted only the Jacobi field that fixes the center. You want to solve for a Jacobi field, given a point (away from the center) and a vector at that point, right?



So that's definitely not as easy as I thought. Here are my thoughts:



1) I think the already-proposed surface given by a spherical cap glued to a pseudosphere is already good enough. In my experience you never really need a $C^2$ surface, and something with piecewise continuous curvature is almost always enough. I encourage you to try it.



2) As for the more general approach, I no longer have any easy answer, but here are some thoughts:



Let the surface be given by $(r,\theta) \mapsto X(r,\theta) = (r\cos\theta, r\sin\theta, f(r))$. If $s$ is the arclength parameter along a radial geodesic, then $s'(r) = \sqrt{1 + f'(r)^2}$. One Jacobi field $J_1(r,\theta)$ is given simply by



$J_1(r,\theta) = \partial X/\partial\theta = re_\theta$, where $e_\theta = (-\sin\theta, \cos\theta, 0)$ is a unit vector field that is orthogonal to and parallel along any radial geodesic.



If we view $r$ as a function of $s$, then the Jacobi equation says that $r'' + Kr = 0$, where $K$ is the Gauss curvature. It suffices to solve for one more Jacobi field $J_2 = h(s)e_\theta$ independent of $J_1$. The Jacobi equation for $J_2$ is given by $h'' + Kh = 0$. Since $r$ is already a solution, we can try to solve for $h$ using variation of parameters.



So the goal is to find an even function $f$ with an inflection point such that the function



$s(r) = \int_0^r \sqrt{1 + f'(t)^2}\, dt$



can be explicitly integrated and inverted. I suggest trying something like $f(r) = 1/(1+r^2)$.
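If the closed form proves elusive, the Jacobi equation can always be integrated numerically. Here is a rough sketch (my own, using the suggested $f(r) = 1/(1+r^2)$ and the standard curvature formula $K = f'f''/(r(1+f'^2)^2)$ for such a surface of revolution) that solves $h'' + Kh = 0$ in arclength with illustrative initial conditions $h(0)=0$, $h'(0)=1$:

# Numerically integrate the Jacobi equation h'' + K h = 0 (primes denote d/ds,
# s = arclength along a radial geodesic) for the surface z = f(r) = 1/(1+r^2).
import math

def fp(r):   # f'(r)
    return -2 * r / (1 + r * r) ** 2

def fpp(r):  # f''(r)
    return (6 * r * r - 2) / (1 + r * r) ** 3

def K(r):    # Gauss curvature of the surface of revolution z = f(r)
    if r < 1e-9:
        return fpp(r) ** 2       # limit as r -> 0, since f'(r) ~ f''(0) r
    return fp(r) * fpp(r) / (r * (1 + fp(r) ** 2) ** 2)

# Euler integration of (r, h, h') in s, with dr/ds = 1 / sqrt(1 + f'(r)^2).
r, h, hp, ds = 1e-6, 0.0, 1.0, 1e-4
for _ in range(200_000):         # integrate out to s ~ 20
    r += ds / math.sqrt(1 + fp(r) ** 2)
    h, hp = h + ds * hp, hp - ds * K(r) * h
print(r, h)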

gt.geometric topology - What (if anything) happened to Intersection Homology?

Intersection homology is alive and well in a large number of guises. It's true that a lot of the work trended to algebraic geometry, representation theory, and categorical constructions, such as perverse sheaves, through the 90s, but there also continues to be work in the more topological settings by people such as me, Cappell, Shaneson, Markus Banagl, Laurentiu Maxim, and many others. At least some of this work is dedicated to extending classical manifold invariants, such as characteristic classes, in a meaningful way to stratified spaces, such as algebraic varieties, and there is a lot of recent interest (though slow progress) in figuring out how intersection homology might tie into various algebraic topology constructions. There are also analytic formulations such as L^2 cohomology (initiated by Cheeger), and much more.



Here are some good references to get started in the area:



Books:
An Introduction to Intersection Homology by Kirwan and Woolf (mostly concerned with telling the reader about the fancy early applications to algebraic geometry and representation theory, but a great overview nonetheless)



Intersection Cohomology by Borel et al. (a serious technical introduction to the area and, to my mind, the canonical source for the foundations of the subject)



Topological Invariants of Stratified Spaces by Markus Banagl (topological but mostly from the sheaf point of view)



For an overview of state-of-the-art in intersection homology and related fields, I'm co-editing a volume on Topology of Stratified Spaces that will be published in the MSRI series. Unfortunately, it's not out yet, but look for it soon.



Papers:
The original papers of Goresky and MacPherson are quite good.



Topological invariance of intersection homology without sheaves by Henry King is a good introduction to the singular version of the theory.



And for a whole pile of recent papers, I'll shamelessly plug my own web site: http://faculty.tcu.edu/gfriedman/
and Markus Banagl's: http://www.mathi.uni-heidelberg.de/~banagl/



And many further references can be found from these locations.

Wednesday, 11 September 2013

ct.category theory - Why does twisting quasi-Hopf algebras work (as in majid's article)

You probably know all that I'm going to say, but I'll mention it anyway, for any other interlocutors:



I'm going to adopt Stasheff's recommendation ("Drinfel$'$d's quasi-Hopf algebras and beyond", same volume) that "Drinfel$'$d-twisting" be called "skrooching" (a transliteration and abbreviation of the Russian, which is roughly four syllables.). Also, I'm going to talk about associative algebras and quasicoassociative coalgebras, not the other way around, even though the reconstruction theorem lands in "co" territory: if you really want to do Majid's theorem, you have to think about the convolution multiplication.



Anyway, then the basic Drinfel$'$d idea is that you start with something that I think should be called a "weak bialgebra": an associative unital algebra $A$ and an algebra homomorphism $\Delta : A \to A\otimes A$. This gives a functor $\otimes: A\text{-rep} \times A\text{-rep} \to A\text{-rep}$, which forgets to the usual $\otimes$ on vector spaces.



A quasicoassociative bialgebra is a weak bialgebra along with a distinguished invertible element $\psi \in A^{\otimes 3}$ so that $(\text{id} \otimes \Delta)\circ \Delta = \text{Ad}_\psi \circ (\Delta \otimes \text{id})\circ \Delta$, where $\text{Ad}_\psi$ is the inner automorphism of $A^{\otimes 3}$ given by conjugating by $\psi$ — then acting by $\psi$ gives a natural $A\text{-rep}$ isomorphism $(X\otimes Y)\otimes Z \to X \otimes (Y\otimes Z)$ — and there's a pentagon for $\psi$ that matches Mac Lane's pentagon.



Given any invertible element $\varphi \in A^{\otimes 2}$, you can skrooch the weak bialgebra $(A,\Delta)$ by $\varphi$ to get a new weak bialgebra $(A,\Delta^\varphi)$, which is the same algebra with $\Delta^{\varphi} = \text{Ad}_\varphi\circ \Delta$. If $(A,\Delta,\psi)$ is quasicoassociative, then $(A,\Delta^\varphi,\psi^\varphi)$ is, with:
$$ \psi^\varphi = \text{Ad}_\phi\, \psi \quad \text{where} \quad \phi = (\varphi\otimes 1) \cdot (\Delta \otimes \text{id})(\varphi) \in A^{\otimes 3}$$
and $\cdot$ is the multiplication in $A^{\otimes 3}$. You can translate this pretty straightforwardly into the representation theory.



The easier observation is quasitriangularity. A weak bialgebra $(A,\Delta)$ is quasitriangular if it comes equipped with a skrooch $\rho \in A^{\otimes 2}$ such that $\Delta^\rho = \Delta^{\rm op}$. A (quasi)coassociative bialgebra is quasitriangular if additionally $\rho$ satisfies two hexagons. If you skrooch a quasitriangular bialgebra $(A,\Delta,\rho)$ by $\varphi$, then you get a new quasitriangular bialgebra $(A,\Delta^\varphi,\rho^\varphi)$, where $\rho^\varphi = \varphi^{\rm op}\rho\varphi^{-1}$ (multiplication in $A^{\otimes 2}$; $\varphi^{\rm op}$ is the obvious element with the two $A$s in $A^{\otimes 2}$ flipped).

Monday, 9 September 2013

at.algebraic topology - How To Calculate A Winding Number?

We have a closed curve $C$ in the plane given by parametric equations $x=x(t)$, $y=y(t)$, where $t$ ranges between $a$ and $b$ and $x$ and $y$ are smooth functions.
We want to calculate the winding number of this curve around the origin.
The most natural way to do it is to calculate the path integral:



$$\int_C \frac{-y\,dx+x\,dy}{x^2+y^2}$$



However, this integral can turn out to be too complicated to calculate. What should we do then? Are there any efficient, robust methods to quickly calculate the winding number?



Thanks.
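One practical alternative (a sketch of the standard numerical approach, not taken from the question): sample the curve, accumulate the signed angle increments with atan2, and round the total to the nearest multiple of $2\pi$. This avoids the integral entirely, provided the sampling is fine enough that the curve never jumps more than half a turn between samples.

# Numerical winding number about the origin: sum signed angle increments.
import math

def winding_number(x, y, a, b, n=100_000):
    total = 0.0
    prev = math.atan2(y(a), x(a))
    for i in range(1, n + 1):
        t = a + (b - a) * i / n
        ang = math.atan2(y(t), x(t))
        d = ang - prev
        if d > math.pi:          # unwrap the jump across the cut at +-pi
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = ang
    return round(total / (2 * math.pi))

# Example: a circle traversed twice winds twice around the origin.
print(winding_number(math.cos, math.sin, 0.0, 4 * math.pi))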

coordinate - Could someone explain RA/dec in simple terms?

A system projected into the sky



Pretend all the stars are painted on the inside of a large ball, and you are at the center of the ball. The imagined ball is called the Celestial Sphere. If the Earth wasn't blocking the lower half of your view, and the Sun wasn't making the blue sky so bright, you would be able to look at the stars in any direction.



First, the rotation of the Earth causes the imagined ball of stars to appear to move around the sky. If you lie on your back (in the northern hemisphere) with your head to the north, the stars slowly slide by from left to right. There are two points, however, where the stars don't appear to move: these two points are directly above the Earth's north and south poles. These two points, projected out into the sky, are labeled the North and South Celestial Poles.



Next, the Earth's equator is projected out into the sky and labeled the Celestial Equator.



Finally, sorting out the east-west positioning (i.e. left-right if you lie on your back, head to the north) requires an arbitrary selection: there are an infinite number of circles one can imagine drawing on the sky. (For example, small circles the size of the Moon, and also many much bigger circles.) The largest circle you can draw is called a "great circle", from geometry. Point at the North Celestial Pole (way above and behind your head if you face south in the northern hemisphere) and draw a line straight south to the horizon, then down below your feet through the South Celestial Pole and back up to the North Celestial Pole. You have just drawn one Great Circle, through the Celestial Poles, all the way around the Celestial Sphere.



But that Great Circle is just one of an infinite number of similar ones. One could start at the North Celestial Pole and draw a Great Circle a little to the left or the right. So astronomers have chosen one specific Great Circle and labeled one side of that circle as "zero degrees" -- or, "we will start measuring angles from here."



So how do we use this system?



"right ascension" is measured rightward, (from your left to your right if you face south in the northern hemisphere,) from the zero-degrees half of that chosen great circle. So given a right ascension, one can find the corresponding half of a great circle. For a given right ascension, you have half of circle selected from the North Celestial Pole, to the South Celestial Pole. (This is called a line of equal longitude.)



"declination" is measured from the Celestial Equator. The Celestial Equator cuts through the middle of your line of longitude. From the Celestial Equator it is 90° northward(+) to the North Celestial Pole, and 90° southward(-) to the South Celestial Pole. So we can measure from +90° (northward) to -90° (southward) along that line of longitude, from the Celestial Equator. (Don't be confused by "positive declinations"; 45° of declination is northward from the Celestial Equator.)



Notice that the system does not depend on where you are standing on Earth, nor on the time of day. If you have a right ascension and a declination, you have an unambiguous spot specified on the Celestial Sphere. The system is actually very simple, but the Earth's rotation (the time of day) and your geographic location also need to be figured in.



What's over my head right now?



Face south (if you're in the northern hemisphere). What is your latitude -- geographically, how far north are you from the Earth's equator? If you are 40° north, then a celestial declination of 40° north is directly over your head. Declination of 50° is 10° further northward from directly over your head, and so on. But for that declination you have a whole circle of points -- running left-to-right across the sky -- that share that same declination.



Right ascension you simply have to look up. That arbitrarily chosen zero-degree line appears to spin around the sky every day as the Earth rotates. Also, over the course of a year, the Earth's orbit about the sun adds one extra apparent turn of the sky. (What is overhead mid-winter is directly under your feet mid-summer!) So there's an offset -- what right ascension is overhead on this date at midnight, and how far (in hours) are we from midnight right now? Combining those two, you can figure out what right ascension is overhead at any time, on any date.
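To make that concrete, here is a minimal sketch using the standard low-precision formula for Greenwich mean sidereal time (the longitude value is an illustrative assumption; the accuracy is a small fraction of a degree, plenty for pointing by eye):

    from datetime import datetime, timezone

    def overhead_ra_deg(longitude_east_deg, when=None):
        """Approximate right ascension (in degrees) currently crossing
        the local meridian, i.e. "overhead" in the east-west sense."""
        when = when or datetime.now(timezone.utc)
        # Days since the J2000.0 epoch (2000-01-01 12:00 UTC).
        j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
        d = (when - j2000).total_seconds() / 86400.0
        # Greenwich mean sidereal time, low-precision expansion, in degrees.
        gmst = (280.46061837 + 360.98564736629 * d) % 360.0
        # Local sidereal time equals the right ascension on your meridian.
        return (gmst + longitude_east_deg) % 360.0

    print(overhead_ra_deg(-0.13))  # roughly London's longitude

And remember from above that the overhead declination is simply your latitude.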



(potential editors: I've intentionally not used acronyms for NCP, RA, etc as that makes it more complicated for people learning this material.)

amateur observing - What are practical considerations for backyard radio-astronomy detection of black holes?

The resolution/error box. Radioastronomy has always been hindered by resolution, because the smallest resolvable angle is inversely proportional to the size of the telescope, and making larger telescopes (even with interferometry) is not always easy. No amount of modern technology can substitute for a large effective diameter. (When I say "effective" I'm including interferometry here; either way, one needs a large area to build on.)



Let's take a look at Reber's initial paper [1], where he first mapped the sky:



[Figure: Reber's contour maps of the radio sky from his 1944 survey]



In the diagram on the left, from top down, the three peaks in the contour map are Cas A, Cyg A, and finally Sgr A. The latter two have black hole origins; the former is a supernova remnant.



The resolving power of Reber's telescope here seems to have been 6 degrees, and it was 31.4 feet in diameter (and he focused on a 1.9m wavelength).



Now, by the Rayleigh criterion, the angular resolution is proportional to the wavelength divided by the diameter. As mentioned before, this is the major limiting factor for radioastronomers, and will be what keeps amateur radioastronomers from making great telescopes — amateurs usually don't have acres of land on which to build a good interferometer (let alone the precision), and single-dish telescopes can't be made very large by an amateur. One may notice that I'm citing rather old observations here, on old telescopes; however, given that radioastronomy technology has not changed nearly as much as size has, it should be OK to compare amateur telescopes with the smaller telescopes of the past.
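To put rough numbers on that, using the Rayleigh criterion $\theta \approx 1.22\,\lambda/D$: a hypothetical amateur 3 m dish observing the 21 cm hydrogen line gets
$$\theta \approx 1.22 \times \frac{0.21}{3} \approx 0.085 \text{ rad} \approx 4.9°,$$
i.e. roughly Reber-class resolution; resolving arcminute-scale structure at that wavelength would require an effective aperture of several hundred metres.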



Now, Cyg A was the first to be identified as a black hole, even though the radio-brightness of Sgr A was discovered at the same time. I'm focusing the rest of my analysis on Cyg A for this reason, as it stands to reason that the first confirmed BH out of the brighter radio sources would have more prominent indicators that it is a black hole.



Let's have a look at Cyg A with a better resolution:



[Figure: high-resolution radio contour map of Cygnus A, with the central galaxy between the two lobes]



(From this paper [2], using the 5 km array)



Note that the black blob in the center is the actual galaxy (probably an optical photograph superimposed on the contour map).



We can see that the lobes are less than an arcminute wide. (The actual galaxy is around 50 arcseconds wide.)



To me, the most interesting thing one would like to see here is the gas jets coming from the central galaxy. As mentioned in my answer here, these radio-emitting gas jets stay in a steady line over thousands of light years, indicating that they come from some sort of cosmic gyroscope which has been steady for a very long time. However, even with the Ryle telescope, the 1969 team could not get a picture of them; just a slight hint of their existence from the shape of the lobes.



Alright, so no gas jets. What else can indicate a black hole? They could try to look at the lobes themselves. They do not directly indicate the existence of a black hole, but their shape hints towards them being formed from jets (this is pretty much in retrospect).



However, with lobe dimensions of less than an arc minute, there's not much an amateur can get here, either. It's possible that a really good amateur telescope would just manage to notice that there are two lobes, but not much else as far as I can tell.



The other interesting bit would be the central galaxy itself, but it's too small. In the optical region one may have a chance of seeing Baade's "colliding galaxies" (it only looks like a pair of colliding galaxies). The gravitational effects (lensing, etc.) are really only visible in the optical and beyond; for them to be visible in the radio we would need to be very lucky and have a huge radio source pass behind Cyg A — not happening anytime soon.




I'm pretty sure that a similar analysis would work for Sgr A or any other black hole candidate; the gas jets would be too small for amateur radio resolution, and the gravitational effects of the black hole would only show up well at optical and X-ray frequencies.



1. Reber, G. (1944). Cosmic Static. The Astrophysical Journal, 100, 279.



2. Mitton, S., & Ryle, M. (1969). High resolution observations of Cygnus A at 2.7 GHz and 5 GHz. Monthly Notices of the Royal Astronomical Society, 146, 221.

What is the right version of "partitions of unity implies vanishing sheaf cohomology"

There are several theorems I know of the form "Let $X$ be a locally ringed space obeying some condition like existence of partitions of unity. Let $E$ be a sheaf of $\mathcal{O}_X$ modules obeying some nice condition. Then $H^i(X, E)=0$ for $i>0$."



What is the best way to formulate this result? I ask because I'm sure I'll wind up teaching this material one day, and I'd like to get this right.



I asked a similar question over at nLab. Anyone who really understands this material might want to write something over there. If I come to be such a person, I'll do the writing!




Two versions I know:



(1) Suppose that, for any open cover $U_i$ of $X$, there are functions $f_i$ such that $\sum f_i=1$ and $\mathrm{Supp}(f_i) \subseteq U_i$. Then, for $E$ any sheaf of $\mathcal{O}_X$ modules, $H^i(X,E)=0$ for $i>0$. Unravelling the definition of support, $\mathrm{Supp}(f_i) \subseteq U_i$ means that there exist open sets $V_i$ such that $X = U_i \cup V_i$ and $f_i|_{V_i}=0$.



Notice that the existence of partitions of unity is sometimes stated as the weaker condition that $f_i$ is zero on the closed set $X \setminus U_i$. If $X$ is regular, I believe the existence of partitions of unity in one sense implies the other. However, I care about algebraic geometry, and affine schemes have partitions of unity in the weak sense but not the strong.



(2) Any quasi-coherent sheaf on an affine scheme has no higher sheaf cohomology. (Hartshorne III.3.5 in the noetherian case; he cites EGA III.1.3.1 for the general case.) There is a similar result for the sheaf of analytic functions: see Cartan's Theorems.



I have some ideas about how this might generalize to locally ringed spaces other than schemes, but I am holding off because someone probably knows a better answer.




It looks like the answer I'm getting is "no one knows a criterion better than fine/soft sheaves." Thanks for all the help. I've written a blog post explaining why I think that fine sheaves aren't such a great answer on non-Hausdorff spaces like schemes.

lo.logic - Is reflexivity of equality an axiom or a theorem?

I think it is important to consider. The axiom of extensionality is provided as a means of defining equality between sets. Roughly stated we say "A set X is equal to set Y if and only if they both have exactly the same elements." A more precise statement, taking into account that we'd like to quantify over domains that we can define, is this:



1) $X=Y \leftrightarrow \forall x \in X\,(x\in Y) \wedge \forall y \in Y\,(y\in X).$



This is just defining an equality relation for sets, and reflexivity comes from the definition rather than having to be put forth as an axiom. For example, taking $Y=X$,



$\forall x \in X\,(x\in X) \wedge \forall x \in X\,(x\in X),$



is pretty clearly true.



We can say that reflexivity is a theorem here. It's one of the properties of an Equivalence Relation, which must obey reflexivity, symmetry, and transitivity. Our 'equals' relation on sets happily obeys these rules. I like to consider equality as a possible relation we can define about objects, with some properties that we want it to hold. If we were coming at things from the perspective of logic we might make these properties axioms and prove the existence of such relations as theorems. I'm not completely sure on this. But from the perspective of axiomatized set theory, the extensionality axiom implies the rest of the equality properties very concretely.
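As a side note, the same thing happens in proof assistants: in Lean, for instance, reflexivity of equality is not assumed as a standalone axiom but is provable directly by the canonical constructor `rfl`. A one-line illustration:

    -- `rfl` proves `X = X` directly from the definition of `=`;
    -- no separate reflexivity axiom is invoked.
    example {α : Type} (X : α) : X = X := rfl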

Sunday, 8 September 2013

linear algebra - Help me with this proof: Drop a printed map of the land on the land and there must be some common point.

Hi, I have a minor in math and this is not a homework problem - my prof mentioned it 5 years ago and I could not even begin to tackle it until I took a good intro to linear algebra (after work). Please try to adjust the answer to my level.



A map is a smaller version of the land: rotated and scaled down. The prerequisite is that the map (say of the USA) lands entirely inside the land (and not partly in Canada or in the ocean). Another prerequisite might be that the map has no holes (is that really necessary?) and perhaps that it must be a convex region (however, I doubt that this is needed). Please help me eliminate these doubts and prove that $p \in M$ and $p \in L$. Correct my notation if needed by editing this post.



Here is the notation that I came up with (let's see if LaTex likes me):



  • $L$ = land (region in $R^2$)

  • $M$ = map (region in $R^2$)

  • $T$ = transformation in $R^2$ (scalar times a rotation matrix)

  • $\vec{s}$ = shift vector (since the overall transformation is generally non-linear)

  • $vec{p}$ = "pivot" - the point that does not change.

Now, I can solve for p uniquely by picking a non-trivial triangle on the land denoted by vectors $\vec{l_1}, \vec{l_2}, \vec{l_3}$, locating the corresponding vectors $\vec{m_1}, \vec{m_2}, \vec{m_3}$ on the map and writing out:



$\vec{m_1} = T \vec{l_1} + \vec{s}$, $\vec{m_2} = T \vec{l_2} + \vec{s}$, $\vec{m_3} = T \vec{l_3} + \vec{s}$



I solve for T:
$T = [\,\vec{m_1} - \vec{m_2} \;\;\; \vec{m_1} - \vec{m_3}\,]\,[\,\vec{l_1} - \vec{l_2} \;\;\; \vec{l_1} - \vec{l_3}\,]^{-1}$
Because we picked a non-trivial triangle, its area will be non-zero and the matrix on the right will be invertible. So, we can always solve for T.
We can also solve for s: $\vec{s} = \vec{m_1} - T \vec{l_1}$, and finally for the pivot p: $\vec{p} = (I - T)^{-1} \vec{s}$. Since T is a scalar c times a rotation matrix, with $c \leq 1$, the only time $(I - T)$ does not have an inverse is when $T = I$ (details omitted).



So, it seems that I can solve for a unique p.
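As a sanity check, the computation is a few lines of numpy; the correspondence points below are made up for illustration:

    import numpy as np

    # Hypothetical correspondences: columns are l1,l2,l3 (land), m1,m2,m3 (map).
    l = np.array([[0.0, 10.0, 0.0],
                  [0.0,  0.0, 10.0]])
    m = np.array([[2.0, 2.5, 2.0],
                  [2.0, 2.0, 2.5]])

    # T = [m1-m2  m1-m3] [l1-l2  l1-l3]^{-1}
    L = np.column_stack((l[:, 0] - l[:, 1], l[:, 0] - l[:, 2]))
    M = np.column_stack((m[:, 0] - m[:, 1], m[:, 0] - m[:, 2]))
    T = M @ np.linalg.inv(L)

    s = m[:, 0] - T @ l[:, 0]                # shift vector
    p = np.linalg.solve(np.eye(2) - T, s)    # pivot: p = (I - T)^{-1} s
    print(p, "maps to", T @ p + s)           # p equals its own image

On the remaining questions: since the map is strictly smaller than the land, $c < 1$, so the transformation is a contraction, and iterating $x \mapsto Tx + \vec{s}$ converges to $\vec{p}$ from any starting point (this is the Banach fixed-point theorem). That argument needs no convexity or absence of holes: it only needs the map to lie inside the land, so the iteration stays in $L$, and then $\vec{p} \in M$ because $\vec{p}$ is the image of a point of $L$.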



  • Now, how do I prove that the pivot $\vec{p}$ will be inside both shapes/regions L and M?

  • Finally, what assumptions must I make about convexity of M, absence of holes, etc ?

P.S. Which undergraduate classes might have this as a homework problem?

Saturday, 7 September 2013

pr.probability - Difference between Beta Process and Dirichlet process

I'm trying to understand the definition of a Beta process, as given in the paper:
www.ece.duke.edu/~lcarin/Paisley_BP-FA_ICML.pdf



The problem is that from the definition it follows that every Dirichlet process is also a beta process, which seems, ahm, wrong. Can you help me figure out what I don't understand?



This is the definition from the paper:
"Let $H_0$ be a continuous probability measure and α a positive scalar. Then for all disjoint, infinitesimal partitions, $B_1 ,ldots, B_K$, the beta
process is generated as follows,
$H(B_k) sim Beta(alpha H_0 (B_k ), alpha(1 − H_0(B_k )))$



with $K \to \infty$ and $H_0(B_k) \to 0$ for $k = 1,\ldots,K$. This process is denoted $H \sim BP(\alpha H_0)$."



This is the definition for a Dirichlet Process (DP):



If $X \sim DP(\alpha H_0)$ where $\alpha$ is a scalar and $H_0$ is a probability distribution, then for every finite partition $A_1,\ldots,A_K$ it follows that
$(X(A_1),\ldots,X(A_K)) \sim \mathrm{DIR}(\alpha H_0(A_1), \ldots, \alpha H_0(A_K))$.



So let's assume that I have $X \sim DP(\alpha H_0)$. Given any partition $B_1, \ldots, B_K$, and any $k = 1, \ldots, K$, I can define a partition $A_1 = B_k$, $A_2 = \Omega - B_k$ and from the DP definition it follows that



$(X(A_1),X(A_2)) \sim \mathrm{DIR}(\alpha H_0(A_1), \alpha H_0(A_2))$
which is equivalent to saying that
$X(B_k) \sim \mathrm{Beta}(\alpha H_0(B_k), \alpha(1-H_0(B_k)))$



hence $X \sim BP(\alpha H_0)$. Where is my mistake?

abiogenesis - Is early life required for life?

To answer your question directly, it is quite unlikely that a planet would be habitable without already having life. It would have to be in near-perfect condition, which is statistically unlikely. For planets that already have life, the existing life provides robustness to small changes and makes habitability possible across a larger range of conditions.



I am assuming 'habitable' here means habitable for humans, trees, a whole ecosystem, etc. Remember that there are extremophiles (and certain other hardy species) which completely change the perspective on habitability, and are often the key to making those places habitable for other forms of life.



This is also true for parts of Earth which undergo a sudden change and are not 'habitable' for most forms of life until some hardy species takes over, changes the conditions, and consequently allows other species to survive. See, for example, ecological succession (https://en.wikipedia.org/wiki/Ecological_succession), esp. primary succession and pioneer species (follow the links within the article).



Also, this is a long process. In the case of the Earth, evolution also needed to happen, so elsewhere it probably wouldn't take as long as a few billion years; but even on Earth it takes a few years to make small disturbed microhabitats habitable again. The atmosphere and other factors (soil, moisture etc., for the plants) need to be made suitable for living. Check this interesting article on terraforming: https://microbewiki.kenyon.edu/index.php/Terraforming



Hope this was helpful. I don't really know if a carbon cycle is possible outside of living organisms in some way.

Friday, 6 September 2013

teaching - Where to buy premium white chalk in the U.S., like they have at RIMS?

Following on S. Okada's post, I recently ordered 12 boxes (864 pieces) for just under $25 per box, or about 3 sticks per dollar. Rakuten only ships within Japan, but Tenso will forward the package abroad for roughly twice the cost of the package itself. However, Tenso is running a promotion till 28 Feb 2012: the first 5000 yen is discounted to 1000 yen. If you only need a few boxes, jump on this; it's your chance. I went way over, so it didn't save me much.



Even though Rakuten won't ship the chalk abroad, their English web form requires a complete US address before a person gets to look at the order. This is easy: Give your US address, but in the comments field, give your Tenso address. This worked just fine for me.



My credit card company put a fraud alert on Rakuten's charge, which canceled the order. Somewhat reasonable if Rakuten doesn't ship to the US, where I am. I had to place the order again, after clearing the fraud alert. Better to warn your credit card company in advance.



My breakdown: 12 boxes were 7092 yen plus 300 yen shipping to Tenso, or 92.62 USD. It arrived within a day. Tenso to NY was 17300 yen shipping plus 2980 yen handling, minus my 4000 yen coupon, or 204.41 USD. If anyone can parse a "slow boat" shipping option that costs less, that would help. Otherwise, I have word of a new importer to the US that should have stock in May, also at $25 per box but with less adventure in ordering. I'll post again when that's official.

How accurate are renderings of something entering Earth's atmosphere?

In the movies you reference, I doubt that it was ever the intention of a movie maker to suggest that the craft was moving at a crawl when it began to enter the atmosphere. If the craft doesn't look like it is moving very fast, the idea is probably that the point of reference (the "camera") is moving at nearly the same speed as the craft, or perhaps that the craft is so huge, that the reference point is miles away. Everything is relative.



There is essentially no realistic scenario in which an object would enter the atmosphere at such a slow speed. Imagine, for example, that you have an object, like Felix Baumgartner, that starts approximately stationary in space above the Earth's atmosphere. Gravity will immediately begin pulling on that object, and because there is no atmosphere to slow the object down yet, it just keeps accelerating as it falls. Even as it enters the edge of the atmosphere, it doesn't slow down much, because there isn't much atmosphere yet. Felix was moving at more than 800 mph before he started to slow down.
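A back-of-envelope sketch of that acceleration, ignoring air drag and the slight variation of gravity with altitude (both of which matter in reality, which is why Felix's actual peak speed came out lower):

    import math

    g = 9.81  # m/s^2, treated as constant over these altitudes

    def dragfree_speed(drop_height_m):
        """Speed after falling from rest, with no air resistance."""
        return math.sqrt(2 * g * drop_height_m)

    for h_km in (39, 100, 400):
        v = dragfree_speed(h_km * 1000)
        print(f"fall from {h_km:>3} km: {v:6.0f} m/s = {v * 2.23694:6.0f} mph")

Even from Felix's 39 km this gives nearly 900 m/s (about 2000 mph); from orbital altitudes it is several times that, before even counting orbital speed.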



The International Space Station is another object that's sort of "stationary", in a manner of speaking. It's not going anywhere because it's in a stable orbit (sort of). But in order to stay up there, it's actually moving at more than 17,000 mph relative to the atmosphere. If something were "stationary" with respect to the sun, just sitting there in Earth's orbit as the Earth comes at it, the Earth's atmosphere would smack into it at 67,000 mph. That's how fast Earth orbits the sun. So there really isn't anything that would just gradually drift into the atmosphere at 1 to 10 mph.



If a craft were to use rockets to halt itself before entry, as you suggest, remember Felix again: it can become stationary with respect to the Earth, but then gravity begins its work, so if the craft wants to remain at a very slow speed through the entire descent, it has to keep firing those rockets all the way down. And certainly, nothing would burn up at that pace, other than a lot of rocket fuel. What causes the burning is not something special and hot about the atmosphere, but the fact that the object is smashing into the air (thin as it may be at that altitude) at thousands of miles per hour. It is even theoretically possible for meteors to enter at such speeds that atoms are literally forced into each other's nuclei, so that part of the fire generated would actually be a brief nuclear reaction.



Response to Question Edit:
(Thank you HDE. The initial question was very unclear.)
The speed at which friction completely stops is 0. Any movement at all, even in that 1 - 10 mph range, produces a small amount of heat from friction, even if the medium you are moving through is very thin air (though certainly nothing one would characterize as "burning"). You seem to be looking for the speed at which that friction would be low enough to not cause a problem, but there's no magic answer. It depends on the situation. How delecate is the craft? How much wear is acceptable? Essentially, this question becomes something very similar to asking "How fast can I drive with something sticking out the window?" You are trying to keep the entry speed low enough that the wind is not a problem for you as the density of that wind gradually increases from practically nothing to surface air density. If your craft has a heavy, durable heat sheild, properly oriented, then extraordinary speeds are still not a real problem. But if you are trying to create a sand mandala on the roof of your craft as you descend, then you are going to want to control that wind level very carefully.

Wednesday, 4 September 2013

planet - Why, in the Solar System, all the mass seems to be concentrated at the centre?

I feel that HDE gives a good start of an answer, but stops short of the important part. We have seen in HDE's answer the formation of a star at the center of the collapsing molecular cloud. When the star begins to fuse lighter elements, the protoplanetary disk has several forces acting on it:



  • The momentum of the particles in the disk.

  • The gravity of the star in the center, together with that of the disk particles on the far side (an inward pull).

  • The gravity of the particles of the protoplanetary disk farther out from the sun (an outward pull).

  • The radiation pressure of the new star.

Interestingly, the second and third forces pretty much cancel out while the disk is still uniform in distribution. But once large clumps of matter accumulate, those large masses (protoplanets) gravitationally pull at the other matter in inconsistent and sometimes violent ways, especially when multiple protoplanets align (once per orbit of the inner body).



Thus, the protoplanetary disk is gradually 'swept up' of all its mass: some is pulled into the sun by gravity and loss of momentum due to collisions, some is pushed out of the solar system by radiation, and whatever matter is not disturbed by one of those processes is then subject to being disturbed by the gravity of the protoplanets themselves. The protoplanets will, over time, either absorb that matter or gravitationally slingshot it out of the system or to its very edges.



In short, the protoplanetary disk within a few tens of AU of the star in the center is a chaotic mess of a place. Not much matter can form a stable orbit there.

limits and colimits - Coend computation continued

I agree with Reid's answer, but I want to add a bit more.



Putting Reid's calculation into a more general setting, if $A$ is any category then
$$
\int^{a \in A} \mathrm{hom}_A(a, a) = (\text{endomorphisms in } A)/{\sim}
$$
where $\sim$ is the (rather nontrivial) equivalence relation generated by $gh \sim hg$ whenever $g$ and $h$ are arrows for which these composites are defined.



You can see confirmation there that your instinct about traces was right. If we wanted to define a 'trace map' on the endomorphisms in $A$, it should presumably satisfy $\mathrm{tr}(gh) = \mathrm{tr}(hg)$, i.e. it should factor through $\int^a \mathrm{hom}(a, a)$.



In fact, Simon Willerton has done work on 2-traces in which exactly this coend appears. See for instance these slides, especially the last one.



You can see in that slide something about the dual formula, the end
$$
\int_{a \in A} \mathrm{hom}_A(a, a).
$$
By the "fundamental fact" I mentioned before, this is the set
$$
[A, A](\mathrm{id}, \mathrm{id}) = \mathrm{Nat}(\mathrm{id}, \mathrm{id})
$$
of natural transformations from the identity functor on $A$ to itself. Evidently this is a monoid, and it's known as the centre of $A$. For example, when $G$ is a group construed as a one-object category, it's the centre in the sense of group theory. So your set might, I suppose, be called the co-centre of $A$.

Tuesday, 3 September 2013

pr.probability - Conditional probabilities in Banach spaces

This is the infinite-dimensional sequel to my question, Conditional probabilities are measurable functions - when are they continuous?.



Let $\Omega = \Omega_1 \times \Omega_2$ be a probability space which is Banach, $\mathcal F$ the Borel $\sigma$-algebra on $\Omega$, and $\mathbb P$ a probability measure which need not be the product measure; e.g., $\Omega = C(U_1) \times C(U_2)$ for disjoint, compact $U_1, U_2 \subseteq \mathbb{R}$. Let $\mathcal F_2 = \sigma(\Omega_2^*)$, where $\Omega_2^*$ is the dual space to $\Omega_2$. In the case of continuous functions, this is the $\sigma$-algebra generated by the evaluation maps $\pi_x$ for $x \in U_2$. Note that these evaluations $\pi_x$ are random variables on $\Omega$.



Goal: I would like an explicit expression for conditional expectations with respect to $\mathcal F_2$. Namely, I want a "reasonable" linear operator $P : \Omega^* \to \Omega^*$ such that $$(*) \qquad \mathbb E(\pi_x | \mathcal F_2) = P\pi_x.$$ Ideally, "reasonable" will mean continuous.



I can do this in the case that $\Omega$ is a Gaussian Hilbert space. Decompose the covariance operator of $\mathbb P$ as $$K = \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix}.$$ Let $$P = \begin{pmatrix} 0 & 0 \\ K_{22}^{-1} K_{21} & I_2 \end{pmatrix},$$ where $I_2$ is the identity operator on $\Omega_2^*$. Then using the Gaussian structure, I can show that $(*)$ holds; using the technology in Anderson & Trapp's Shorted Operators, II I can show that $P$ is continuous.
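For intuition, the finite-dimensional analogue of that block operator is ordinary Gaussian conditioning, $\mathbb E[X_1 \mid X_2 = x_2] = K_{12} K_{22}^{-1} x_2$. A sketch with made-up data:

    import numpy as np

    rng = np.random.default_rng(0)

    # A random covariance on R^4, split into two 2-dimensional blocks
    # playing the roles of Omega_1 and Omega_2.
    A = rng.standard_normal((4, 4))
    K = A @ A.T
    K11, K12 = K[:2, :2], K[:2, 2:]
    K21, K22 = K[2:, :2], K[2:, 2:]

    # Conditional mean of the first block given the second.
    x2 = rng.standard_normal(2)
    print("E[X1 | X2 = x2] =", K12 @ np.linalg.solve(K22, x2))

    # Conditional covariance: the Schur complement K11 - K12 K22^{-1} K21,
    # matching the variance formula in the final note below.
    print(K11 - K12 @ np.linalg.solve(K22, K21))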



This is overkill! I think I can adapt my Gaussian calculation without too much trouble to the generic case (since it only deals with covariance operators). On the other hand, I don't know how to show that such an operator $P$ is continuous without relying on heavy-duty functional analysis. Surely this has been studied before, but I can't seem to find a good reference.



Note: Brownian motion is a very special case of this, where $\Omega_2 = C(\{0\}) = \mathbb{R}$, the starting value $B_0$. There, one skirts the issue of this conditioning by the Markov property: the "future" $\Omega_1 = C((0,\infty))$ is independent of the "present" $\Omega_2$ up to the starting value $B_0$.



Note: The variance of $\pi_x$ for $x \in U_1$ is the Schur complement $\pi_x^* (K_{11} - K_{12}^* K_{22}^{-1} K_{21})\pi_x$.

dg.differential geometry - How do I make the conceptual transition from multivariable calculus to differential forms?

I have struggled with this question myself, and I couldn't find a perfectly satisfactory answer. In the end, I decided that the definition of a differential form is a rather strange compromise between geometric intuition and algebraic simplicity, and that it cannot be motivated by either of these by itself. Here, by geometric intuition I mean the idea that "differential forms are things that can be integrated" (as in Bachmann's notes), and by algebraic simplicity I mean the idea that they are linear.



The two parts of the definition that make perfect geometric sense are the d operator and the wedge product. The operator d is simply the operator for which Stokes' theorem holds, namely: if you integrate d of an n-form over an (n+1)-dimensional manifold, you get the same thing as if you integrated the form over the n-dimensional boundary.
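In symbols, that is Stokes' theorem:
$$\int_{M} d\omega = \int_{\partial M} \omega,$$
for $\omega$ an $n$-form and $M$ an $(n+1)$-dimensional manifold with boundary $\partial M$.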



The wedge product is a bit harder to see geometrically, but it is in fact the proper analogy to the product measure. Here's how it works for one-forms. Suppose you have two one-forms a and b (on a vector space, for simplicity). Think of them as a way of measuring lengths, and suppose you want to measure area. Here's how you do it: pick a vector $\vec v$ such that $a(\vec v) \neq 0$ but $b(\vec v) = 0$, and a vector $\vec w$ s.t. $a(\vec w) = 0$ but $b(\vec w) \neq 0$. Declare the area of the parallelogram determined by $\vec v$ and $\vec w$ to be $a(\vec v) \cdot b(\vec w)$. By linearity, this will determine the area of any parallelogram. So, we get a two-form, which is in fact precisely $a \wedge b$.




Now, the part that makes no sense to me geometrically is why the hell differential forms have to be linear. This implies all kinds of things that seem counter-intuitive to me; for example there is always a direction in which a one-form is zero, and so for any one-form you can draw a curve whose "length" with respect to the form is zero. More generally, when I was learning about forms, I was used to measures as those things which we integrate, and I still see no geometric reason as to why measures (and, in particular, areas) are not forms.



However, this does make perfect sense algebraically: we like linear forms, they are simple. For example (according to Bachmann), their linearity is the thing that allows the differential operator d to be defined in such a way that Stokes' theorem holds. Ultimately, however, I think the justification for this are all the short and sweet formulas (e.g. Cartan's formula) that make all kinds of calculations easier, and all depend on this linearity. Also, the crucial magical fact that d-s, wedges, and inner products of differential forms all remain differential forms needs this linearity.



Of course, if we want them to be linear, they will be also signed, and so measures will not be differential forms. To me, this seems as a small sacrifice of geometry for the sake of algebra. Still, I don't believe it's possible to motivate differential forms by algebra alone. In particular, the only way I could explain to myself why take the "Alt" of a product of forms in the definition of the wedge product is the geometric explanation above.



So, I think the motivation and power behind differential forms is that, without wholly belonging to either the algebraic or geometric worlds, they serve as a nice bridge in between. One thing that made me happier about all this is that, once you accept their definition as a given and get used to it, most of the proofs (again, I'm thinking of Cartan's formula) can be understood with the geometric intuition.



Needless to say, if anybody can improve on any of the above, I'll be very grateful to them.



P.S. For the sake of completeness: I think that "inner products" make perfect algebraic sense, but are easy to see geometrically as well.

Monday, 2 September 2013

accretion discs - Generalised planets?

There is somewhat of an abstract way of generalising the notion of planets.



The standard definition of planets is, obviously: "planets are the objects formed from the residual material surrounding a newly formed main sequence star through a complex formation process".



Let me introduce a somewhat more general, by no means commonly known, definition: "generalised planets are bound macroscopic objects formed by some accretion process taking place near some gravitating body over many orbital periods".



Clearly, the latter definition is more general, as it doesn't specify the type of accretion event or the type of the central body. Now, for example, the accretion mechanism can be very different from that in protoplanetary discs: consider, for example, accretion onto a compact object (a neutron star or a black hole), accretion onto a supermassive black hole, accretion from the remnants of a binary neutron star merger, or any other possible accretion process.



As noted in some answers, some stars also form in accretion-involving processes in the vicinity of other stars. To avoid including stars in the 'generalised planet' category, the definition specifies that the objects should be formed over many orbital periods, implying that they should be sufficiently bound with respect to the primary.



The question: Are there generalised planets other than planets/exoplanets?

Sunday, 1 September 2013

soft question - Your favorite surprising connections in Mathematics

Here is one of my favorites, which I learned from A. G. Khovanskii: let $f$ be a univariate rational function with real coefficients. Then you can think of $f$ as inducing a continuous self-map of $\mathbb{RP}^1 \cong S^1$; in particular, it has a topological degree, say $[f]$, and if $f$ happens to be a polynomial, it is obvious that $[f]=0$ if $\deg(f)$ is even, and that $[f]=\pm 1$ if $\deg(f)$ is odd (depending on the sign of the leading coefficient).



If the decomposition of $f$ into a continued fraction is
$$ f=P_0+\cfrac{1}{P_1+\cfrac{1}{P_2+\ddots}}$$
then one can prove easily that $[f]$ is the (finite) sum $[f]=\sum_{i \geq 0} (-1)^i[P_i]$.
(Khovanskii himself taught this to high-schoolers in Moscow.)



The interesting connection, for me, is the following: for any real polynomial $P$, the topological degree of the fraction $P'/P$ is clearly the (negative of the) number of real roots of $P$. Thus, the computation formula above applied to $[P'/P]$ allows us to recover Sturm's theorem.
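A quick sanity check of this on a small example: take $P = x^2 - 1$, so $P'/P = 2x/(x^2-1)$, whose continued fraction is
$$\frac{2x}{x^2-1} = 0+\cfrac{1}{\dfrac{x}{2}+\cfrac{1}{-2x}}.$$
Here $P_0 = 0$, $P_1 = x/2$, $P_2 = -2x$, so $[P'/P] = [P_0]-[P_1]+[P_2] = 0-1-1 = -2$, which is indeed minus the number of real roots of $P$ (namely $x = \pm 1$).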



I don't know if it really qualifies as a new proof of the theorem, but it's definitely a different point of view on that proof.

ag.algebraic geometry - Nefness of $h-e$ in the blowup of $mathbb{P}^n$

As a complement to JVP's answer, here is a direct proof that $\tilde{H}\cdot C \geq 0$.



Note that nefness is numerically invariant. To check the nefness of $h-e$, we only need to show that for any irreducible curve $C$ in the blowing-up, the intersection number $(h-e)\cdot C$ is nonnegative. If $C$ is not contained in $e$, then the image of $C$, denoted by $D$, is still a curve. In this case, by the projection formula, $(h-e)\cdot C = H\cdot D \geq 0$, where $H$ is any hyperplane in $\mathbb{P}^n$. In the case $C$ is contained in $e$, $h\cdot C = 0$; however $-e\cdot C = -\deg(N_{e/X}|_C) \geq 1$, where $N_{e/X}$ is the normal bundle of the exceptional divisor in the blowing-up $X$. Therefore $h-e$ is nef.

Loss of atmosphere on Mars

The loss of the Martian atmosphere can be mostly attributed to the planet's low mass. The reason why Earth still has an atmosphere containing lighter elements is that with larger mass comes larger escape velocity: the speed at which an atom's kinetic energy overcomes the gravitational potential energy of its planet.



The distribution of speeds of most gasses can be described by the Maxwell–Boltzmann distribution.



[Figure: Maxwell–Boltzmann speed distributions for several gases at a fixed temperature]



This curve represents the probability of finding a particle with a given speed at a given temperature. Holding temperature constant, the above chart illustrates that lighter molecules will have higher probabilities of being found with higher speeds.
$$P(\epsilon) = \frac{2}{\sqrt{\pi}\,k_{B}T} \left( \frac{\epsilon}{k_{B}T} \right)^{1/2} \exp\left(-\frac{\epsilon}{k_{B}T}\right)$$
where $\epsilon$ is the energy of the particle, $\frac{p^{2}}{2m}$ in the absence of a gravitational field (see this lecture for information regarding how to include the gravitational term).



Integrated over time, the lighter gasses tend to exceed the escape velocity more often than their heavier counterparts. This is why the larger planets like Jupiter and Saturn still have atmospheres dominated by hydrogen and helium.
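For a rough sense of scale, here is a back-of-envelope comparison of escape velocities with rms thermal speeds (a common rule of thumb is that a planet loses a gas over geological time once its escape velocity is less than about 6 times that gas's rms speed):

    import math

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    kB = 1.381e-23  # Boltzmann constant, J/K
    NA = 6.022e23   # Avogadro's number, mol^-1

    def v_escape(mass_kg, radius_m):
        return math.sqrt(2 * G * mass_kg / radius_m)

    def v_rms(molar_mass_kg, T=250.0):
        # v_rms = sqrt(3 k T / m), with m the mass of one molecule.
        return math.sqrt(3 * kB * T * NA / molar_mass_kg)

    for name, M, R in [("Earth", 5.972e24, 6.371e6), ("Mars", 6.417e23, 3.390e6)]:
        print(f"{name}: v_esc = {v_escape(M, R) / 1000:.1f} km/s")
    for gas, mu in [("H2", 2.0e-3), ("O2", 32.0e-3), ("CO2", 44.0e-3)]:
        print(f"{gas}: v_rms(250 K) = {v_rms(mu) / 1000:.2f} km/s")

Earth (11.2 km/s) holds O2 and CO2 comfortably while losing H2; Mars (5.0 km/s) still clears the threshold for the heavier gases, but with far less margin overall.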

the sun - How do you define the diameter of the Sun

I thought I'd contribute an answer because there's a very recent paper on the subject:



Measuring the solar radius from space during the 2012 Venus Transit



It appeared in my RSS feeds this morning! A related writeup is online at the HMI website.



To answer the question: this measurement uses the transit of Venus to fit the limb-darkening law of the Sun. That is, the Sun is a bit fainter the further from the centre you look. As you reach the optically thinner layers near the "surface", the brightness falls off rapidly, towards zero in the vacuum of space. The inflection point of the curve (as a function of distance from the centre of the disk) is a reasonable estimate of the "radius". As pointed out elsewhere, the value changes depending on which wavelength you use, but only by a few hundred km, compared to the Sun's overall radius of about 700 000 km (actually more like 695 946 km), so the uncertainty is at or below the 0.1% level. Phil Plait wrote about a similar measurement (by the same team, I believe) that used the transits of Mercury in 2003 and 2006.
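As a toy illustration of the inflection-point idea (with a made-up limb profile rather than real transit data):

    import numpy as np

    # Brightness vs angular distance from disk centre, in arcseconds,
    # with an artificial smooth roll-off placed at the Sun's angular
    # radius of roughly 959.6 arcsec.
    r = np.linspace(950, 970, 2001)
    brightness = 0.5 * (1 - np.tanh((r - 959.6) / 1.5))

    # The inflection point is where the profile falls off fastest,
    # i.e. where the numerical derivative is most negative.
    dI = np.gradient(brightness, r)
    print("limb radius ~", r[np.argmin(dI)], "arcsec")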



Finally, the team also used the limb-darkening fit (I think) to measure how round the Sun is, i.e. the diameter from top-to-bottom versus left-to-right. Answer: the Sun is very, very round, with the radii differing by only a few parts per million.