Tuesday, 30 April 2013

if the big bang only expanded the universe when and how did it originate?

"Before" is not the right word. It suggests that there was an arrow of time which allowed one to trace events. But time was created only with the Big Bang; there was no "before".



We don't know where the matter came from, or what caused the Big Bang. One hypothesis suggests that it was caused by the collision of branes in other universes, but there's no way to verify this, so in a strict sense it isn't science. Many researchers think we'll never have an answer to this.

Monday, 29 April 2013

the sun - The formation of absorption lines in solar spectrum

It is said that absorption lines originate from regions higher in the photosphere where the gas is cooler.



The gas should be absorbing photons, then re-emitting them... absorb, re-emit, absorb, re-emit...



My question is: where are the photons produced by re-emission? Why are there absorption lines?



Although photons can be scattered out of our line of sight, photons at other places can also be scattered into our line of sight. Surely the total number of photons does not drop the further out you go? So while photons originally headed in the direction of an observer may be absorbed and re-emitted in a direction away from the observer, surely this will be made up for by a photon emitted from another location in the direction of the observer? Otherwise, wouldn't there be a location somewhere else where a bunch of extra scattered photons could be observed?



If we integrate over all the photons at the lower surface of the photosphere A and integrate over all the photons at the upper surface of the chromosphere C, surely we find the same number of photons? Otherwise, where have they gone? Where is the lost energy?



Are there fewer photons emitted from surface C because some have been emitted back down into the Sun? But that would have to reach an equilibrium, or there would be a cascading increase in the number of photons inside the Sun.
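One way to build intuition here, as a toy model of my own (not real solar radiative transfer): in a purely scattering layer, isotropic re-emission random-walks some photons back downward, so fewer line photons emerge from the top even though none are destroyed. On the real star, that energy comes out in other directions and, via collisions, partly as continuum photons.

```python
# Toy Monte Carlo: photons enter the bottom of a purely scattering slab of
# optical depth tau, moving straight up.  Each scattering re-emits
# isotropically, so some photons random-walk back out through the bottom.
import math, random

def escape_up_fraction(tau, n=20000, seed=42):
    rng = random.Random(seed)
    up = 0
    for _ in range(n):
        z, mu = 0.0, 1.0                         # optical-depth position, direction cosine
        while True:
            z += mu * (-math.log(rng.random()))  # exponential free path
            if z >= tau:
                up += 1                          # escaped through the top
                break
            if z <= 0:
                break                            # escaped back through the bottom
            mu = rng.uniform(-1.0, 1.0)          # isotropic re-emission
    return up / n

# An optically thick line layer passes fewer photons upward than a thin one:
print(escape_up_fraction(0.1) > escape_up_fraction(2.0))  # True
```

So at line wavelengths (large optical depth), fewer photons reach the observer from directly below, which is one ingredient in why the line looks dark against the continuum.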



Diagram showing photosphere, chromosphere, and photons

telescope - What range of exit pupils work for observing the full moon?

Light pollution does not matter for the Moon. Even transparency doesn't matter that much. What does matter is seeing, a.k.a. air turbulence.



It is very rare that an exit pupil smaller than 0.5 mm is useful for anything - perhaps for some tight double stars, but that's about it. So take that as a hard lower limit.



In terms of a "soft" limit, it depends. If seeing is excellent, if the optics are very good, if the instrument is in perfect collimation and at perfect thermal equilibrium, then you can push e.p. below 1 mm, especially for a small aperture like the range you mention.
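For concreteness, exit pupil is aperture divided by magnification (equivalently, eyepiece focal length divided by focal ratio). A minimal sketch with hypothetical scope numbers, not the instrument discussed in this answer:

```python
# Exit pupil = aperture / magnification
#            = eyepiece focal length / telescope focal ratio.
# The scope below is hypothetical: a 150 mm (6") f/8 with a 1200 mm
# focal length.
def exit_pupil_mm(aperture_mm, focal_length_mm, eyepiece_mm):
    magnification = focal_length_mm / eyepiece_mm
    return aperture_mm / magnification

print(exit_pupil_mm(150, 1200, 6))  # 0.75 -- 200x, in the sub-1 mm regime
print(exit_pupil_mm(150, 1200, 3))  # 0.375 -- below the 0.5 mm hard limit
```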



If seeing is less than perfect, if optics are so-so, if the instrument is miscollimated or too hot, then the minimum usable e.p. goes up. Each one of these factors influences the observation, so it's hard to pin a number.



Bottom line is, use what works for you.




claims that 1-2mm works best for the moon. Is that correct?




2 mm seems too big for a small 4...6" aperture when observing the Moon. It might be okay if you want to capture the whole disk, but it seems large if you're interested in small details.



Again, it's all relative.



I have a 6" reflector with a self-made primary mirror of excellent quality. I always take care to collimate the scope, and to keep it at thermal equilibrium. I live in California, so seeing is often good. It's not uncommon for me to use less than 1 mm exit pupil when observing the Moon. I use 0.84 mm as a starting point, sometimes pushing it down as low as 0.6 mm.



If I use an e.p. of 1.1 mm for the Moon that means seeing is pretty bad - but in those cases I'd rather do something else, instead of being frustrated by bad seeing. But at that e.p. in a wide eyepiece (82 degrees apparent field) you see the whole Moon at once, which is always nice when you do a public demo (sidewalk astronomy).



For much larger apertures, the ideal exit pupils tend to be somewhat larger even when observing the Moon.



To predict seeing, go on the Clear Sky Chart site, find a location near your home, and look at the third row in the chart (labeled Seeing); when it's dark blue, seeing is good.




For the largest practical e.p., don't worry about it. The Moon is so bright, losing light is not something you should pay any attention to at all.

Saturday, 27 April 2013

reference request - Truncated exact sequence of homotopy groups

This is a question about the name of a very useful lemma,
which permits one in particular to show that smooth birational complex projective
varieties have isomorphic fundamental groups.
If this lemma has no name, I would like at least to have a reference (if one exists).
The lemma can be seen as a
truncated version of the basic fact that if we have a locally trivial
fibration (say of finite-dimensional CW complexes) $F\to E\to B$, then
we get a long exact sequence



$\cdots\to \pi_i(F)\to \pi_i(E)\to \pi_i(B)\to \pi_{i-1}(F)\to\cdots$



Lemma. Let $E\to B$ be a surjective map of finite-dimensional CW complexes,
such that every fiber is connected, simply connected, and is a
deformation retract of a small neighbourhood.
Then $\pi_1(E)=\pi_1(B)$.



Question.
Do you know the name of such a lemma, or of some of its generalizations? Is there a reference for this?



The result about $\pi_1$ of birationally equivalent varieties follows,
since any birational transformation can be decomposed into blow-ups
and blow-downs along smooth submanifolds. And it is not hard
to check that the conditions of the lemma are satisfied for such
elementary blow-ups.

soft question - What programming languages do mathematicians use?

One language I still use is PostScript. I probably need to defend that.



  1. Its syntax is elegant. In fact, no language I've seen has more uniform syntax: a complete program is syntactically identical to almost any fragment of a program. There are no keywords and very few special cases.


  2. It can be a lot of fun, and you can make pretty pictures.


  3. It has very few data types, but some of the ones it does have are surprisingly useful. Dictionaries come to mind immediately. Also, "arrays" (which would be called "lists" in any other language) are extremely flexible. They automatically support comprehensions, not as a separate feature, but as an obvious consequence of the syntax. Functional programmers shouldn't be surprised that procedures are useful as a type; actually, due to the simplicity of the syntax and lack of keywords, any nontrivial program has to work with procedures as data.


Unfortunately, it has many drawbacks that prevent it from really being useful. Its handling of strings is abominable. Also, it has no facilities for user interaction. Its console I/O is crippled. Things like that could in principle be fixed by appropriate third-party packages, but unfortunately, to my knowledge, there are no third-party packages at all (at least for general programming). Finally, it can be very hard to debug; actually, it is more difficult to debug than any other language I know except assembly. All of those things combine to make it one of the most programmer-unfriendly languages out there. Nevertheless, some of my best work is implemented directly in PostScript, and I have done some real work in it. (Also, let's be fair: PostScript was never intended for general-purpose programming! Using a page-description language for any serious computation at all is some sort of achievement.)



(Language: PostScript. Mathematical interest: its syntax is simple enough to be interesting as a mathematical construction. It's easy to produce some mathematical illustrations, like many types of fractals.)



For real mathematical figures (such as for inclusion in papers), I use MetaPost. PostScript can be used for this purpose, but MetaPost is much better suited for this and is very TeX-friendly.



(Language: MetaPost. Mathematical interest: it's great for making mathematical figures suitable for inclusion in a LaTeX document.)



Another language that I use mostly for fun, not serious work, is x86 assembly language. In contrast to PostScript, it's an ugly language, but strangely, I think I use assembly for some of the same reasons that draw me to PostScript.



(Language: Assembly. Mathematical interest: its execution model is very simple, so expressing algorithms in it is an interesting challenge that mathematicians may enjoy.)



The rest of the languages I use need no introduction: C, C++, Python, Ruby, Java.



(Languages: C, C++, Python, Ruby, Java. Mathematical interest: none in particular, but they're useful in general programming, including mathematical programs.)



I used to use Octave, but apparently most of the world uses Matlab, and Octave has just enough incompatibilities with Matlab to make it annoying to try to use other people's code. Also, it seems to have pretty poor support for sparse matrix computations.



(Language: Octave. Mathematical interest: free approximate-clone of Matlab. It has simple syntax for matrix-centric computation.)



I used to use PHP a lot. Actually, PHP and assembly are sort of an odd couple. A while ago, for no good reason, I tried to come up with the fastest code to print out all the permutations of a string. My best solution (for strings of ~10 or more characters, IIRC) was a combination of PHP and x86 assembly. To be fair, the PHP part could have been done in another language, but PHP was almost the right tool for the job.



(Language: PHP. Mathematical interest: none in particular, but it's great for designing websites with server-side scripting, which is no less useful to mathematicians than it is to other programmers.)



I like Haskell, but I don't use it much.



(Language: Haskell. Mathematical interest: sigfpe said it.)



There are other languages I find interesting but never learned properly, like Lisp, Fortran, and Forth.



If anyone's looking for a recommendation, I don't recommend any of those. But learn all of them, and then go off into a dark corner of the universe and come back with the One Language that will rule us all.

Triple Stars v/s Ternary Stars

It is exactly the same as the difference between binary stars and double stars.



Trinary System



Three stars gravitationally bound to each other.



Here are some examples



Triple Star System



Three stars that appear close together on the sky but are not all gravitationally bound to one another.



Here are some examples.

Friday, 26 April 2013

mg.metric geometry - When shorter means smaller?

I will break the tradition of accumulating interesting comments and post a suggested partial solution (of something). The structure of MathOverflow is somehow not perfect for an incremental discussion (even though it is great in many ways).



The main idea so far for relevant contractive maps is folding the plane in half. So you could change the question and ask if any convex shape other than a circle has the property that it always fits inside of itself if you fold it once across a chord. [Edit 1: If you do ask, the answer is that one fold is not enough for a dented circle, as Martin points out in the comment.] A Reuleaux triangle does not have this property, but maybe it is interesting to check other regular Reuleaux polygons.



I think that no ellipse (other than a circle) has this property. You have to be a little careful because if you take an ellipse that is not round and not too thin, then if you fold it across its short semiaxis, the half-ellipse can fit inside of the original ellipse in non-standard ways.



If you fold the ellipse across its long semiaxis instead, then trivially the half ellipse only fits in the original ellipse in one way (up to symmetry). Suppose that you tilt this chord slightly, but keep it passing through the center, and then cut the ellipse in half. Then I think that this kind of half ellipse also fits in the original ellipse in one way. If that is correct, then if you fold the ellipse along this slightly tilted chord, the folded shape does not fit in the original ellipse. [Edit 2: Anton says that it is not true, and that this half-ellipse which is cut at a slight diagonal can be moved within the original ellipse. I do not know whether it can be moved far enough, but I will refrain from speculating.]



A similar trick works for any regular odd-sided polygon $P$. $P$ has a longest diagonal. Make a chord which is close to this diagonal and parallel to it, so that the region on one side that has the majority of vertices has less than half of the area. This subregion only fits in $P$ in one way, so if you fold along this chord the folded shape does not fit. [Edit 2: At least this case of the argument actually works.]



I conjecture that if $K$ is any convex shape whose longest chords are isolated, then either by tilting or offsetting a longest chord, you can make a fold that does not fit in $K$. [Edit 2: A foolish conjecture as long as the ellipsoid case is in doubt.]



On the other hand, a constant-width body has the opposite property. It has an entire circle of longest chords, in a natural sense a maximal family of them.

mass - What are the masses of the two stars (given the information provided)?

We can assume that the stars are equal in mass and that their orbits are circular.



The orbital speed is 80000 m/s, and at an orbital period of 10 months (or $2.628\times 10^{7}$ s) the length of the orbit is $2.1024\times 10^{12}$ m, or 14.05 AU. The radius of the orbit is therefore $14.05/\tau = 2.237$ AU (where $\tau = 2\pi$).



The version of Kepler's law given is $$T^2 = \frac{a^3}{m_1+m_2}$$ (with $T$ in years, $a$ in AU, and masses in solar masses).



Substituting $T^2 = (10/12)^2 = 0.6944$ (dividing by 12 to convert months to years) and $a^3 = 11.19$ gives $$m_1+m_2 = \frac{11.19}{0.6944} = 16.11\ \text{solar masses}.$$



Since $m_1=m_2$, the mass of each star is 8.06 solar masses, or $1.6\times 10^{31}$ kg.
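The arithmetic above can be sketched in Python (assumed constants, not given in the answer: 1 AU $= 1.496\times 10^{11}$ m, $\tau = 2\pi$):

```python
# Reproduce the calculation: circumference from speed x period, then the
# orbital radius in AU, then Kepler's law in solar units as given above.
import math

v = 8.0e4            # orbital speed, m/s
T_s = 2.628e7        # 10 months, in seconds
AU = 1.496e11        # metres per AU (assumed conversion factor)

circumference = v * T_s                   # ~2.1024e12 m
r = circumference / (2 * math.pi) / AU    # orbital radius in AU
T_yr = 10 / 12                            # period in years

total = r**3 / T_yr**2   # m1 + m2, in solar masses
each = total / 2
print(round(r, 3), round(total, 2), round(each, 2))  # 2.237 16.11 8.06
```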

ag.algebraic geometry - Preimage of distinguished open sets

If we have a morphism between two affine schemes $f: X \rightarrow Y$ with $X = $ Spec $A$, and $Y = $ Spec $B$, is it true that $f^{-1}(D(g)) = D(f'(g))$? (where $f'$ is the associated map on the structure sheaves) If so, is there a simple proof? Otherwise, is there any other way to characterize the preimages of distinguished open sets?

Thursday, 25 April 2013

ac.commutative algebra - Is there a slick proof of the classification of finitely generated abelian groups?

I'm not sure what ingredients you are allowing, but here is one proof sketch:



Let $A$ be our f.g. abelian group. Since $\mathbb Z$ is Noetherian, the torsion subgroup
$A_{tors}$ is also f.g., and the quotient $A/A_{tors}$ is torsion free and f.g.
(being a quotient of something f.g.). [As pointed out in a comment, we will later
show that $A_{tors}$ is a direct summand of $A$, and so the Noetherianness argument
is not actually needed.]



(1) If $A$ is f.g. and torsion free over $\mathbb Z$, it is free.



Proof: Induction on the dimension of $V := {\mathbb Q}\otimes_{\mathbb Z} A$
(which is fin. dimensional, since $A$ is f.g.).



If this equals $1$, then $A$ is a f.g. subgroup of $\mathbb Q$, and finding a common
denominator shows that it is cyclic. (This is the Euclidean algorithm.)



In general, choose a line $L$ in $V$. If $A \cap L = 0$, then $A$ embeds into
$V/L$, the dimension drops, and we are done by induction. (Of course, this actually
can't happen, but never mind; we don't need to prove that here.)



Otherwise, we have $0 \rightarrow A\cap L \rightarrow A \rightarrow B \rightarrow 0,$
and $B$ embeds into $V/L$, so is free by induction; $A\cap L$ is f.g. (by Noetherianness
of $\mathbb Z$, being a subgroup of $A$) and embeds into $L$, so is free by the dim. 1 case.
Freeness of $B$ makes this s.e.s. split, so $A = A\cap L \oplus B$ is free.



(2) In general, $A = A_{tors} \oplus \text{something free}.$



Proof: We have the s.e.s. $0 \rightarrow A_{tors} \rightarrow A \rightarrow
A/A_{tors} \rightarrow 0.$ Part (1) shows that $A/A_{tors}$ is free,
and then this freeness lets us split the s.e.s.



(3) Now suppose $A$ is torsion. Its Sylow subgroups are unique (by abelianness,
although there are many other ways to prove this too), and all have mutually
trivial intersections, so $A$ is isomorphic to their direct sum.
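As a toy illustration of step (3), my own example rather than part of the proof: for $A = {\mathbb Z}/12{\mathbb Z}$, the Sylow decomposition gives ${\mathbb Z}/12 \cong {\mathbb Z}/4 \oplus {\mathbb Z}/3$, and the isomorphism can be checked by brute force:

```python
# My own toy check (not part of the proof): for A = Z/12, the map
# x -> (x mod 4, x mod 3) realizes the Sylow decomposition
# Z/12 = Z/4 (+) Z/3.
phi = lambda x: (x % 4, x % 3)

# Bijective: the 12 images are pairwise distinct.
print(len({phi(x) for x in range(12)}) == 12)

# A homomorphism: phi((x + y) mod 12) is the componentwise sum.
print(all(phi((x + y) % 12) == ((phi(x)[0] + phi(y)[0]) % 4,
                                (phi(x)[1] + phi(y)[1]) % 3)
          for x in range(12) for y in range(12)))
```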



(4) We have now reduced to the case where $A$ is a $p$-power order abelian group.
Let $p^e$ be the exponent of $A$, so $A$ is a ${\mathbb Z}/p^e {\mathbb Z}$-module.
Choose an element $a \in A$ of order $p^e$. Then we have
${\mathbb Z}/p^e {\mathbb Z} \hookrightarrow A,$ an embedding of ${\mathbb Z}/p^e {\mathbb Z}$-modules. Since ${\mathbb Z}/p^e{\mathbb Z}$ is injective over itself, this splits.
(There are many elementary ways to prove this, or to alter the argument: e.g.
apply Pontrjagin duality, which for a group of exponent $p^e$ is just Homs to
${\mathbb Z}/p^e {\mathbb Z},$ to get a surjection from a ${\mathbb Z}/p^e {\mathbb Z}$-module to ${\mathbb Z}/p^e {\mathbb Z}$, which must then split, the latter
being free of rank one; now apply Pontrjagin duality again to get a splitting of the original
sequence.)



Continuing by induction on the order, we write $A$ as a sum of cyclic groups of
$p$-power order.



(5) We have now shown that any f.g. $A$ is a direct sum of a free group and
of cyclic groups of prime power order. It is easy to rearrange this information
to get the classification in terms of elementary divisors.



Comment: while this may not seem so slick, I think it has the merit that the techniques it uses are elementary versions of standard commutative algebra arguments for analyzing modules over any commutative Noetherian ring, namely various localization and devissage techniques.



E.g. the preceding argument extends immediately to the PID case. In step (1), one uses the
PID property to find a common denominator, rather than the Euclidean algorithm.



In step (3), one observes that $A_{tors}$, being finitely generated and torsion,
is annihilated by some non-zero ideal $I$ in the PID $R$, hence is a module over the
Artinian ring $R/I$, and so is the sum of its localizations $A_{\mathfrak p},$ where
$\mathfrak p$ ranges over the finitely many (non-zero, hence maximal)
prime ideals containing $I$.



EDIT: If one wants to work more in the spirit of the classification by elementary divisors,
and avoid working one prime at a time, one can combine steps (3), (4), and (5) as follows:



(3') Suppose $A$ is f.g. torsion. Let $e$ be its exponent. Then it is a ${\mathbb Z}/e{\mathbb Z}$-module, and contains an element of order $e$. Thus one has an embedding
${\mathbb Z}/e{\mathbb Z} \hookrightarrow A,$ which must split (either by the injectivity
argument of (4), applied now to ${\mathbb Z}/e{\mathbb Z}$, or the Pontrjagin duality
argument). Proceeding by induction, one writes $A = \oplus\, {\mathbb Z}/e_i{\mathbb Z},$
where $e_i \mid e_{i-1},$ as required.



EDIT: Suppose that one wants to prove directly that ${\mathbb Z}/e{\mathbb Z}$ is injective
as a module over itself (as Martin asks below): using a standard criterion for injectivity of modules over a commutative ring (Baer's criterion), one need just show that for any ideal $I$ of ${\mathbb Z}/e{\mathbb Z}$,
any map $I \hookrightarrow {\mathbb Z}/e{\mathbb Z}$
extends to a map ${\mathbb Z}/e{\mathbb Z} \rightarrow {\mathbb Z}/e{\mathbb Z}$.



This is easily done: $I$ is of the form $f\, {\mathbb Z}/e{\mathbb Z}$, for some
$f \mid e$. Equivalently, $I = ({\mathbb Z}/e{\mathbb Z})[e/f]$ (the $e/f$-torsion
submodule). The given map $I \rightarrow {\mathbb Z}/e{\mathbb Z}$ then necessarily lands in $({\mathbb Z}/e{\mathbb Z})[e/f] = I,$ and a map $I \rightarrow I$ can certainly
be extended to a map ${\mathbb Z}/e{\mathbb Z} \rightarrow {\mathbb Z}/e{\mathbb Z}$,
as required.

telescope - How do I find the RA of sunset and sunrise in a specific location?

There is a very nice web-based utility called Staralt that I frequently use before observing runs.



This has lots of observatories pre-programmed or you can put in your own latitude and longitude AND altitude (can you do that in Stellarium?).



To quote the website: "you can plot altitude against time for a particular night (Staralt), or plot the path of your objects across the sky for a particular night (Startrack), or plot how altitude changes over a year (Starobs), or get a table with the best observing date for each object (Starmult)."



You can also do handy things like produce plots, charts and tables, or show where the moon is with respect to your targets.

cv.complex variables - Does there exist a holomorphic function which takes given values on the positive integers?

Inspired of course by What's a natural candidate for an analytic function that interpolates the tower function?
I am minded to ask what looks to me like a more natural question: given a sequence $a_1,a_2,a_3,ldots$ of complex numbers, is there always a holomorphic function $f$ defined on the entire complex plane, with $f(n)=a_n$ for $n=1,2,3,ldots$? No idea what the answer is myself, but wouldn't surprise me if it were well-known and even easy.

gravity - Why doesn't the earth's surface collapse onto itself?

For the same (and more prosaic) reason that milk skin floats on boiling milk: because it is lighter than the layer below. During the formation of the planet, it went through a process called the Iron Catastrophe, where a large parcel of the denser materials like iron and nickel sank to the core, leaving the lighter materials close to the surface.



A nice graph with relative densities follows:



(graph: relative densities of Earth's layers, from core to crust)



Radioactive decay contributes to keeping our internal engine super-hot and well-oiled, allowing our metallic core to spin and generate our magnetosphere. Eventually we'll run out of heat, and the outer core will solidify and stop spinning.

Wednesday, 24 April 2013

gr.group theory - Automorphisms of category of groups


Possible Duplicate:
What are the auto-equivalences of the category of groups?




Does the category of groups have any nontrivial automorphisms? (an automorphism of a category being a functor from the category to itself with an inverse functor, where the composite both ways is the identity on objects and on hom-sets. By "nontrivial" I mean an automorphism that does not send every object to an isomorphic object).



Probably not, but I don't know how one would prove it. I think I have the beginning of a proof for the category of finite groups, as follows: first, it is clear that a group of prime order must go to a group of prime order, since these are the only groups that have exactly two subgroups (having two subgroups is invariant under category automorphisms). Next, we can use the number of embeddings of the group of order $p$ in the group of order $p^2$ to show that a group of one prime order cannot go to a group of another prime order.



To continue the proof for finite groups or all groups, I suspect we need to use something about the (local) finite presentability of groups, but I am not sure how.



What about other categories, such as abelian groups, commutative unital rings, and modules over a fixed commutative unital ring? The only kinds of nontrivial automorphisms that I know of are the "opposite operation" functor for monoids, noncommutative rings, and similar noncommutative structures.

Monday, 22 April 2013

at.algebraic topology - Construction of the Stiefel-Whitney and Chern Classes

This is not an answer to your question. Rather, it is a "less mysterious" version of the Milnor-Stasheff construction of the Stiefel-Whitney classes, which doesn't refer explicitly to Steenrod operations. (I think I learned about this from my thesis advisor when I was a lad ... it was so long ago ...)



Let $V\to X$ be a real vector bundle. Let $S^\infty=\bigcup S^n$, the infinite-dimensional sphere. Taking the product with $S^\infty$ gives a vector bundle $V\times S^\infty\to X\times S^\infty$. I produce a vector bundle $V'\to X\times RP^\infty$ by dividing out by an action of the cyclic group of order $2$ on both base and total space:



  • on the base $X\times S^\infty$, the involution is $(x,y)\mapsto (x,-y)$;

  • on the total space $V\times S^\infty$, the involution is $(v,y)\mapsto (-v,-y)$.

The Euler class $e(V')$ of $V'$ is an element of degree $n$ in $H^*(X\times RP^\infty; Z/2) = H^*(X;Z/2)[t]$. The following formula holds:
$$ e(V') = t^n + w_1(V)t^{n-1}+\cdots + w_n(V).$$
So if you have an Euler class, then you can use this as the definition of the Stiefel-Whitney classes. The mod-2 Euler class is fairly easy to define from Milnor-Stasheff's point of view: $e(V')$ is the pullback along the $0$-section of the orientation class in the cohomology of the Thom space of $V'$.



It's easy to check the axioms for this guy. It's certainly natural, since $V\mapsto V'$ and $e$ are functorial. The Whitney sum formula follows from $(V\oplus W)'\approx V'\oplus W'$ and the Whitney sum formula for the Euler class. If $R\to *$ is the trivial bundle, then $R'\to RP^\infty$ is the canonical line, so $e(R')=t$, and so $w_0(R)=1$ and $w_1(R)=0$. You can use this to show that $w_0(V)\in H^0X$ is equal to $1$ for any bundle over $X$, by pulling back $V$ over any point of $X$ (where it becomes trivial). If $L\to RP^\infty$ is the canonical line, then $L'\to RP^\infty\times RP^\infty$ is $L_1\otimes L_2$, the tensor product of the canonical line bundles over each factor. So $e(L')=s+t= 1\cdot t^1 + s\cdot t^0$, giving $w_0(L)=1$ and $w_1(L)=s$ (where $s\in H^1 RP^\infty$ is the generator).



Added later. I wrote the above while I was a bit feverish :). It didn't occur to me when describing it that it's a pretty standard way to construct characteristic classes; the variant which gives chern classes is probably more familiar.



I also said that it's a "version" of the Steenrod operation construction of SW classes, so let me try to explain that. I'll sketch a "direct" proof that the Steenrod operation definition of SW classes is equivalent to the one I gave above (i.e., without referring to the axioms that M-S give for SW classes).



Steenrod operations come from an "extended square" construction on cohomology classes (see my answer in Why does one think to Steenrod squares and powers?). If $X$ is a space, let $DX=(X\times X \times S^\infty)/(Z/2)$, where I divide by the involution $(x_1,x_2,y)\to (x_2,x_1,-y)$. The "extended square" is a function
$$P: H^n(X) \to H^{2n}(DX).$$
Cohomology is with mod-2 coefficients. If you restrict along the "diagonal" embedding $d: X\times RP^\infty \to DX$, you get Steenrod squares:
$$d^*(P(a)) = t^{n}Sq^0(a) + t^{n-1}Sq^1(a) + \cdots + Sq^n(a).$$
There's a relative version of this: if $V\to X$ is a vector bundle, so is $DV\to DX$; write $T(V)$ for the Thom space of $V$, and write $f: T(V) \to T(DV)$ for the map induced by the diagonal inclusion. If $u\in H^nT(V)$ is the orientation class, then
$$f^*(P(u))= t^{n}Sq^0(u)+t^{n-1}Sq^1(u)+\cdots +Sq^n(u).$$
According to Milnor-Stasheff, $Sq^i(u)=u\, w_i(V)$.



The neat fact is that $P(u)\in H^{2n}T(DV)$ has to be the orientation class $u'$ of $DV\to DX$! So as long as I can describe the orientation class, I don't need to know about Steenrod operations! Thus, $f^*(u')\in H^*TV[t]$ is the polynomial whose coefficients are the SW classes. To get the formula I gave originally, observe that $f^*(u')=u\, e(V')$; this is because the pullback of the bundle $TV\to TX$ along $d: X\to DX$ is the same as the bundle $V+V' \to X$.



Why is $P(u)$ the orientation class of $DV$? The orientation class of a bundle in ordinary cohomology mod-2 is the unique element which restricts to the fundamental class of the sphere when you restrict to each fiber, so you just have to check that $P(u)$ has this property. And this is pretty easy (the operation $P$ is natural, and it's easy to understand how $P$ works when you have a discrete space, or a bundle over a discrete space.)

knot theory - What is the Alexander polynomial of a point?

According to the Baez-Dolan cobordism hypothesis, an extended TQFT is determined by its value on a single point. This value is a fully dualizable object of a symmetric monoidal $n$-category (a fully dualizable object is a higher-categorical analogue of a finite-dimensional vector space). The Alexander polynomial is a quantum invariant, and comes from a TQFT.




How can an "Alexander polynomial" TQFT be put into an extended TQFT, and what is its value at a single point?


The question I just asked is closely related to this question. I also asked the question on the ldtopology blog here, and Theo Johnson-Freyd suggested that MO might be the place to ask it.



Briefly, I will summarize what an extended TQFT is. A TQFT is a symmetric monoidal functor Z:Cob(n)-> Vect(k) from the tensor category of $(n-1)$-dimensional manifolds and cobordisms between them to the tensor category of vector spaces over a field k. An extended TQFT is a symmetric monoidal functor Z:Cobk(n)->C from the n-category of cobordisms to a symmetric monoidal n-category C. I vote for the introduction to Lurie's expository account of his work proving the Baez-Dolan cobordism hypothesis as the best place to read about why extended TQFTs are natural objects, to understand their motivation, and to understand why people are so excited about them. An extended TQFT assigns a fully dualizable object to a point, and a higher "trace" on this object to closed n-dimensional manifolds.

banach spaces - Approximation Property

It seems to be a folk result that $\ell_\infty$ has the approximation property, even the bounded approximation property, and also, I think, even the so-called property $\pi$ (an approximation property) of Lindenstrauss.



This is alluded to in a few texts, but I cannot seem to find the proof, which is presumably obvious.



Does anyone have a reference or an easy solution?



Cheers,



R. Fry

Sunday, 21 April 2013

the moon - Which eyepieces can I use for the best viewing experience with my existing telescope?

Every scope has a minimum and maximum useful magnification (minimum on your scope is 18X and maximum is 152X - from specs on Telescope.com). Anything outside of those numbers will compromise your ability to see well through the scope. To figure out how best to make use of your scope within the restrictions of useful magnification, you need to start with how to calculate magnification from focal length.



The magnification is equal to the focal length of the telescope divided by the focal length of the eyepiece. For example, your 25mm eyepiece would allow you to get 28X magnification (700mm / 25mm = 28X). Using the same formula, your 10mm eyepiece would allow you to get 70X magnification. For an easy way to get close to 152X, you could purchase a simple 2X Barlow. (700mm / 10mm) * 2X = 140X. For now, start with your 25mm eyepiece and get used to the night sky through it. You can always step it up a bit with other eyepieces later.
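The arithmetic above can be sketched as:

```python
# Magnification = telescope focal length / eyepiece focal length,
# optionally multiplied by a Barlow factor.  The 700 mm focal length
# and eyepieces are the ones from the answer above.
def magnification(scope_fl_mm, eyepiece_fl_mm, barlow=1.0):
    return scope_fl_mm / eyepiece_fl_mm * barlow

print(magnification(700, 25))       # 28.0 with the 25 mm eyepiece
print(magnification(700, 10))       # 70.0 with the 10 mm eyepiece
print(magnification(700, 10, 2.0))  # 140.0 adding a 2x Barlow
```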



So you want to see Jupiter? I think you should just barely be able to make out the dark bands in the atmosphere around 100X (so maybe a 6mm or 7mm eyepiece for your scope). You will definitely be able to see the Galilean Moons though. And speaking of moons, you should get a moon filter to cut down on the brightness when looking at our Moon. I've found it's best to observe the edge of the Moon where light turns to dark. The shadows created along the craters and mountains are magnificent to see. Enjoy!

science based - Could a theoretical cube shaped planet have a moon?

A definite yes for small low-orbit and high-orbit satellites, as long as the cube does not rotate. Large moons far away should be fine.



The question was answered in the 2012 Physics International paper: "The Gravity Field of a Cube". (Available for free) The authors found stable orbits for a tiny moon or space probe to circle the cube.
The gravity field of a cube is only distorted close to the cube, and with a large moon you can presumably have a situation like the Pluto-Charon system, where the two bodies orbit a common center of mass, so cubeness shouldn't be a problem in that case. (Though I can't prove it :)
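To get a feel for the near-field distortion, here is a toy numerical sketch of my own (not from the paper): approximate the cube by a uniform grid of point masses and compare surface gravity at the centre of a face with gravity at a corner.

```python
# Toy check: approximate a uniform-density cube [-1, 1]^3 (G = 1, rho = 1)
# by an n^3 grid of point-mass cells and sum their contributions.
import math

def gravity_mag(px, py, pz, n=12):
    gx = gy = gz = 0.0
    dm = (2.0 / n) ** 3                       # mass of one cell
    for i in range(n):
        for j in range(n):
            for k in range(n):
                cx = -1 + (i + 0.5) * 2 / n   # cell centre coordinates
                cy = -1 + (j + 0.5) * 2 / n
                cz = -1 + (k + 0.5) * 2 / n
                dx, dy, dz = px - cx, py - cy, pz - cz
                r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                gx -= dm * dx / r3
                gy -= dm * dy / r3
                gz -= dm * dz / r3
    return math.sqrt(gx * gx + gy * gy + gz * gz)

# Surface gravity is strongest at the centre of a face and weakest at a
# corner -- which is why the corners act like enormous "mountains".
print(gravity_mag(0, 0, 1.01) > gravity_mag(1.01, 1.01, 1.01))  # True
```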



The authors also provide us with great background material for writing a sci-fi story, such as how a lake on a cube will look, and what to do if you colonize the world:






From the paper:



Consider now a hypothetical cube of 12,000×12,000×12,000 km³, approximately the size of the Earth, with the same volume of water and atmosphere as found on the Earth. Then we would approximately half-fill each face with water and have an atmosphere approximately 100 km thick, similar to what is assumed for the Earth's atmosphere before reaching space. In this case the corners and the edges of the cube would be like vast mountain ranges several thousand km high, with their tips extending out into free space. It would therefore be very difficult to cross these mountain ranges, and hence we would have six nearly independent habitable zones, one on each face.



There would presumably be permanent snow on the sides of these vast mountain ranges and people would live around the edges of the oceans on each face in a fairly narrow habitable zone only about 100 km wide as the cube faces rise rapidly through the atmosphere.
Unfortunately, climbing the approximately 3000 km high corners does not result in an improved view, because the surface is still flat in any observed direction. However, the corners, being in free space, would be very suitable for launching satellites. One would also have approximately sqrt(2) × 6000 km of downhill ski run from each corner down to the centre of each face.



In order to have a day night cycle we would also need the cube to be rotating. The sun would rise almost instantaneously over the face of a cube however, so that each face would need to be a single time zone and thus the cube as a whole would require four separate time zones, assuming the planet was rotating about the centre of an upper and lower face. The north and south faces in this case would be permanently frozen as they would receive no sunlight except that striking the oceans extending away from the surface of the cube, so there might be a permanent pool of liquid water at the two poles. Launching low orbit satellites around this cube needs special care in order to avoid certain orbital resonances that would create significant variations in the orbit.

Saturday, 20 April 2013

cosmology - Is the local group bound to the Virgo cluster?

It would seem so. The local group is part of the Virgo cluster and as such is considered to be gravitationally bound. Although the Virgo cluster and the local group are currently moving apart, the mass of the Virgo cluster will likely slow and reverse the recession over time, with the local group ultimately merging with the cluster.



References:
http://astronomy.swin.edu.au/cosmos/V/Virgo+Cluster
http://heasarc.nasa.gov/docs/cosmic/local_supercluster_info.html

the moon - Are lunar occultations visible to the naked eye?

The Moon occults a lot of stars. But are these events visible to the naked eye? Won't they be drowned out by the glare of the crescent, even at its thinnest just before sunrise? Did ancient astronomers actually observe them, or did they deduce them theoretically from orbital calculations? (I suppose an occultation during an eclipse is both invisible and unlikely, right?)



Venus, the brightest planet, stays near the Sun and therefore, like the new Moon, near the horizon. But could one actually see even Venus disappear behind the dark limb of the Moon? A popular medieval Islamic symbol suggests to me that they were well aware of this phenomenon. Both the Moon and Venus are still symbolically important in their culture. But was the occultation directly visible or inferred?



Photo (non-naked eye) of the Moon occulting Aldebaran



Ancient symbol for a lunar occultation (please disregard whatever it is interpreted to stand for today)

Thursday, 18 April 2013

ag.algebraic geometry - Vector Bundles on the Moduli Stack of Elliptic Curves

As is well known, there is a classification of line bundles on the moduli stack of elliptic curves over a nearly arbitrary base scheme in the paper The Picard group of $M_{1,1}$ by Fulton and Olsson: every line bundle is isomorphic to a tensor power of the line bundle of differentials $\omega$, and $\omega^{12}$ is trivial.



I am now interested in a similar classification scheme for higher-dimensional vector bundles (i.e. étale-locally free quasi-coherent sheaves of finite rank). I am especially interested in the prime 3, so you may assume that 2 is inverted, or even that we work over $\mathbb{Z}_{(3)}$. I found really very little in the literature on these questions. I know only of two strategies to approach the topic:



1) I think I can prove that every vector bundle $E$ on the moduli stack over $\mathbb{Z}$ localized at $p$ for $p>2$ is an extension of the form $0\to L\to E\to F\to 0$, where $L$ is a line bundle and $F$ a vector bundle of one dimension smaller than $E$ (this may hold also for $p=2$, but I haven't checked). In a paper of Tilman Bauer (Computation of the homotopy of the spectrum tmf), Ext groups of the so-called Weierstraß Hopf algebroid are computed, which should amount to a computation of the Ext groups of the line bundles on the moduli stack of elliptic curves if one inverts $\Delta$. It follows then that every vector bundle on the moduli stack of elliptic curves over $\mathbb{Z}_{(p)}$ is isomorphic to a sum of line bundles for $p>3$, if I have not made a mistake. But for $p=3$, there are many non-trivial Ext groups and I did not manage to see which of the occurring vector bundles are isomorphic.



2) One can try to find explicit examples of non-trivial higher-dimensional vector bundles. A candidate was suggested to me by M. Rapoport: for every elliptic curve $E$ over a base scheme $S$ we have a universal extension of $E$ by a vector bundle. Take the Lie algebra of this extension and we get a canonical vector bundle over $S$. As explained in the book Universal Extensions and One Dimensional Crystalline Cohomology by Mazur and Messing, this is isomorphic to the de Rham cohomology of $E$. This vector bundle is an extension of $\omega$ and $\omega^{-1}$ and lies in a non-trivial Ext group. But I don't know how to show that this bundle is non-trivial.



I should add that I am more a topologist than an algebraic geometer and do not really stand on firm ground in this topic. I would be thankful for any comment on the two strategies, or on anything else concerning a possible classification scheme.

hubble telescope - Is the Tombaugh Regio on Pluto visible from Earth?

Unfortunately, making maps at this level of detail from the ground is not possible with current telescopes. There are two problems: first, it takes a lot of magnification to resolve Pluto, and even more to resolve surface features. Second, from the ground you need more than just magnification: you need extremely stable skies, or atmospheric turbulence will wash out the surface features. Getting just one super-sharp image is not enough; you need several to ensure that you have a high enough signal-to-noise ratio to believe the surface features are real.
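To put numbers on the first problem, here is a back-of-the-envelope sketch (my own; the Earth-Pluto distance is a rough typical value I picked): Pluto's disk subtends only about a tenth of an arcsecond, roughly ten times smaller than the blurring from typical atmospheric seeing.

```python
import math

D_PLUTO_KM = 2376.6   # Pluto's diameter in km
DIST_AU = 33.0        # rough typical Earth-Pluto distance (assumption)
AU_KM = 1.496e8       # kilometres per astronomical unit

theta_rad = D_PLUTO_KM / (DIST_AU * AU_KM)     # small-angle approximation
theta_arcsec = math.degrees(theta_rad) * 3600
print(round(theta_arcsec, 3))  # ~0.099 arcsec, vs ~1 arcsec typical seeing
```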



This is why Marc Buie used the High Resolution Channel on the Advanced Camera for Surveys on HST to create the image you put in your question. Unfortunately, even that level of resolution is not possible anymore, as the HRC is no longer functional. These kinds of maps can be very hard to make.



There have been other maps of Pluto, both from the ground and using HST. Here is a compilation of several, made using ground-based photometry, the Faint Object Camera on HST, and the ACS HRC. As you can see, you can make a map from ground-based photometry, but it has very poor resolution compared to what HST can do.



Other Pluto Maps



There have been other HST press releases about those other images.





There is a good blog post about how they match up from the Museum of Applied Arts and Sciences at the Sydney Observatory. The bottom line is that they match up pretty well, but not perfectly. This is expected, as we expect volatile transport on the surface, so the surface features will change. As you can see in the image below, there were a lot of changes from the map made in 1994 to the one made in 2003. So one would expect changes going out to 2015.



New Horizons and Hubble Images



So, yes your intuition is valid, but Pluto's surface is dynamic so it will be changing on you!

computational complexity - BPP being equal to #P under Oracle

First, let's be slightly pedantic and not make statements like P = #P, which cannot possibly be true just because P is a set of decision problems and #P is not. To get a decision version of #P, one can use PP, or something like P^#P.



About your question, BPP^NP is contained in P^PP and P^#P by Toda's theorem. On the other hand, if P^#P were contained in BPP^NP, it would imply that PH is contained in BPP^NP, which would collapse the polynomial hierarchy to the third (or second?) level, which is considered unlikely.



In conclusion, P^#P is considered to be more powerful than NP, BPP, BPP^NP, and even NP^(NP^NP).

Wednesday, 17 April 2013

ca.analysis and odes - When are some products of gamma functions algebraic numbers?

Taking cyclic subgroups seems promising for obtaining identities similar to the ones of Nijenhuis, and so I have looked at products of $\Gamma\left(\dfrac{a^k}{N}\right)/\sqrt{2\pi}$ with $k$ running over a certain range. The factor $\sqrt{2\pi}$ is motivated by the fact that



  • in Nijenhuis' formula, as well as in the multiplication formula and even in the reflection formula, a factor $\sqrt{\pi}=\Gamma(\frac12)$ occurs with the same multiplicity as the gamma factors (so my products can be considered in fact as "well-poised" gamma quotients),

  • without including $\sqrt{2}$, those products, whenever they are integers, have a relatively big 2-valuation (see the factor $2^{b(A)}$ in Nijenhuis' formula $$\prod_{x\in A}\Gamma\left(\frac{x}{2n}\right)=2^{b(A)}\sqrt{\pi}^{\,\nu(n)},$$ where $A$ is the subgroup of the multiplicative group $\mathbb{Z}_{2n}^*$ generated by $n + 2$, or any one of its cosets, $\nu(n)$ its order and $b(A)$ the number of elements in $A$ that are larger than $n$).

For brevity, define for coprime integers $a, N$ $$g(a,N):=\prod_{k=1}^{\operatorname{ind}_a(N)} \frac{\Gamma\left(\dfrac{a^k \bmod N}{N}\right)}{\sqrt{2\pi}}.$$ Thus the product is taken over the subgroup $\langle a\rangle$ of $\mathbb{Z}_N^*$ generated by $a$, with all the arguments of the gamma factors between $0$ and $1$. For an integer $b$ coprime to $N$, define similarly the product over the corresponding coset $b\langle a\rangle=\langle a\rangle b$ as $$g_b(a,N):=\prod_{k=1}^{\operatorname{ind}_a(N)} \frac{\Gamma\left(\dfrac{a^k b \bmod N}{N}\right)}{\sqrt{2\pi}}.$$



With this notation, the initial identity reads $g(9,14)=\sqrt{2}$, and we further have the somewhat complementary one $g_3(9,14)=1/\sqrt{2}$.
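Both identities are easy to check numerically. The following sketch is my own code (with my own naming); it forms the product over the cyclic subgroup $\langle a\rangle$, or over its coset $b\langle a\rangle$:

```python
import math

def g(a, N, b=1):
    """Product of Gamma((b*a^k mod N)/N)/sqrt(2*pi) over the cyclic
    subgroup generated by a in (Z/NZ)^*, translated by the coset rep b."""
    prod, x = 1.0, a % N
    while True:
        prod *= math.gamma((x * b % N) / N) / math.sqrt(2 * math.pi)
        if x == 1:            # we have run through the whole subgroup <a>
            return prod
        x = x * a % N

print(g(9, 14))      # ~1.41421...  = sqrt(2)
print(g(9, 14, 3))   # ~0.70710...  = 1/sqrt(2)
```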




Once you know where to (ask a computer to) search, it is easy to find, in a few minutes, dozens of those products that appear numerically to be algebraic.




I have excluded the self-complementary groups, i.e. the groups for which $\langle a\rangle=\langle N-a\rangle$, because for those we already know that $g(a,N)$ is algebraic by the reflection formula. Likewise I have excluded subgroups whose members form essentially an arithmetic sequence (more precisely, a set $\{\lambda s+t\}\cap \mathbb{Z}_N^*$ for given $s,t$), as those can be handled by the multiplication formula and yield again algebraic products. Call the remaining subgroups and the associated gamma products non-trivial.



A systematic search (with reasonably high numerical precision) shows that of the non-trivial algebraic gamma products, most are of the form $q^{u/v}$ with integers $u,v$. Generally, $u/v$ is positive and $q$ a prime dividing $N$. But there are exceptions like $g(24,203)=7^{-1/2}$ or $g(103,420)=(3^3\cdot5^3\cdot7)^{1/6}$.



For $N\le300$, there are $106$ non-trivial algebraic gamma products, $88$ of which can be written in the form $q^{u/v}$ (with $v=1$ for 36 of them). The $18$ remaining ones occur for $N= 60,105,120,140,156,180,220,231,255,285,300$ (note that all these $N$'s have at least three prime divisors) and have minimal polynomials of degrees $2,4,6$ or $8$. The latter holds for all $N\le 1000$.
Moreover, so far all of them can be written with radicals, e.g. $g(103,105)=\sqrt[3]{3(1701+166\sqrt{105})}$ or $g(41,156)=2\sqrt[4]{13}(2\sqrt{3}+\sqrt{13})$ or $g(83,120)=\sqrt{3(\sqrt{3}-1)(\sqrt{30}-5)}$.
A few of these identities, e.g. the one for $g(83,120)$, can be derived by using the standard formulas (thus in the way the OP asks next to the end). But I doubt this is possible where e.g. $\sqrt{13}$ occurs.



You can get more similar identities by taking the cosets of the same groups. I haven't looked systematically at them, but conjecturally, if $g(a,N)$ is algebraic, so is $g_b(a,N)$. Note that generally, $g_b(a,N)/g(a,N)$ does not seem to be algebraic.



The distribution of the non-trivial subgroups seems to be at least as irregular as the distribution of the primes, and not even very much correlated with the structure of $\mathbb{Z}_N^*$.

set theory - How much choice is needed to show that formally real fields can be ordered?

Background: a field is formally real if -1 is not a sum of squares of elements in that field. An ordering on a field is a linear ordering which is (in exactly the sense that you would guess if you haven't seen this before) compatible with the field operations.



It is immediate to see that a field which can be ordered is formally real. The converse is a famous result of Artin-Schreier. (For a graceful exposition, see Jacobson's Basic Algebra. For a not particularly graceful exposition which is freely available online, see http://math.uga.edu/~pete/realspectrum.pdf.)



The proof is neither long nor difficult, but it appeals to Zorn's Lemma. One suspects that the reliance on the Axiom of Choice is crucial, because a field which is formally real can have many different orderings (loc. cit. gives a brief introduction to the real spectrum of a field, the set of all orderings endowed with a certain topology making it a compact, totally disconnected topological space).



Can someone give a reference or an argument that AC is required in the technical sense (i.e., there are models of ZF in which it is false)? Does assuming that formally real fields can be ordered recover some weak version of AC, e.g. that any Boolean algebra has a prime ideal? (Or, what seems less likely to me, is it equivalent to AC?)

Tuesday, 16 April 2013

What is the acceleration of the stars' speed in the Universe? Positive or negative?

This is the classic misconception about the expansion of the Universe: there is no center of the Universe. The whole Universe expands as one, not outward from a single point.



Knowing this, it makes no sense to speak of the direction of an object's motion due to the expansion of the Universe. You can, however, compare the directions of two objects relative to each other.



Therefore the stars' directions of motion depend on where and how they were created, and also on their interactions with other stars and with their home galaxies.

the sun - How far is the Earth/Sun above/below the galactic plane, and is it heading toward/away from it?

Humphreys & Larsen (1995) suggest, using star count information, a distance of $20.5 \pm 3.5$ pc above the Galactic plane; consistent with, but more precise than, the Bahcall paper referred to by Schleis.



Joshi (2007) is more guarded, investigating some systematic uncertainties in the estimation techniques and ends up with distances between 13 and 28 pc above the plane.
This paper gives an excellent review of the topic in its first couple of pages.



The Sun moves at about 15-20 km/s with respect to a local standard of rest defined by the general motion of stars in our vicinity around the Galaxy. In three dimensions, this "peculiar velocity" is $U=10.00 \pm 0.36$ km/s (radially inwards), $V=5.25 \pm 0.62$ km/s (in the direction of Galactic rotation) and $W=7.17 \pm 0.38$ km/s (up and out of the plane) (Dehnen & Binney 1998).



The Sun executes oscillations around its mean orbit in the Galaxy, periodically crossing the Galactic plane. I borrowed this illustration (not to scale!) from http://www.visioninconsciousness.org/Science_B08.htm to show this oscillatory motion.
As the Sun is currently above the plane and moving upwards, and each cycle takes about 70 million years with an amplitude of 100 pc (Matese et al. 1995), it will be roughly 30 million years before we cross the plane again.
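The "roughly 30 million years" figure follows from treating the vertical motion as simple harmonic with the quoted amplitude and period. A quick sketch of my own, taking the current height from the Humphreys & Larsen value above:

```python
import math

A = 100.0    # amplitude of vertical oscillation, pc (Matese et al. 1995)
P = 70.0     # full oscillation period, Myr
z0 = 20.5    # current height above the plane, pc (Humphreys & Larsen 1995)

omega = 2 * math.pi / P
phase = math.asin(z0 / A)                   # current phase, moving upward
t_next_crossing = (math.pi - phase) / omega
print(round(t_next_crossing, 1))            # ~32.7 Myr until the next z=0 crossing
```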



Sun's motion around the Galaxy

Monday, 15 April 2013

How can I calculate what star was above my head when I was born?

Almost everybody knows the concept of the Zodiac. But there are other systems. And, AFAIK, even the Zodiac classification is out of date nowadays, since people's Zodiac signs were assigned thousands of years ago, when the stars were in slightly different positions.



I don't really care about this stuff. But I want to know for sure: given the exact birth time and location, what star was over my head if I looked straight up?



How can I calculate what star was above my head within a reasonable margin of error?



P.S. I don't mean any kind of software that does all the work. I mean the algorithm, with the calculations done on paper, plus some related knowledge base of all known stars and their trajectories.
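For reference, the core of such an algorithm can be sketched in a few lines (my own sketch, using the standard approximate GMST polynomial and ignoring precession and nutation): the zenith's declination equals the observer's latitude, and its right ascension equals the local sidereal time.

```python
def zenith_radec(jd_ut, lat_deg, lon_east_deg):
    """Approximate (RA, Dec) in degrees of the zenith at Julian Date jd_ut
    for an observer at the given latitude and east longitude."""
    d = jd_ut - 2451545.0                    # days since the J2000.0 epoch
    gmst = (280.46061837 + 360.98564736629 * d) % 360.0  # Greenwich sidereal time, deg
    ra = (gmst + lon_east_deg) % 360.0       # local sidereal time = zenith RA
    dec = lat_deg                            # zenith Dec = observer latitude
    return ra, dec

# At the J2000.0 epoch in Greenwich, the zenith RA is just the GMST constant
print(zenith_radec(2451545.0, 51.48, 0.0))
```

The resulting (RA, Dec) can then be compared against a star catalogue, after correcting the catalogued positions for proper motion.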

nt.number theory - An S-unit equation, with S an infinite and sparse set of primes.

Say we have three infinite sequences $\{a_i\},\{b_i\},\{c_i\}$ of natural numbers satisfying the equations $$a_1+b_1=c_1,\dots, a_n+b_n=c_n,\dots$$
Assume further that $\gcd(a_i,b_i,c_i)=1$ for each $i$ and that $(a_i,b_i,c_i)\neq (a_j,b_j,c_j)$ for all $i\neq j$. Now let's define $S$ as the set of primes $p$ which divide $a_ib_ic_i$ for at least one $i$. From the S-unit theorem we know that $S$ has to be infinite.



Now the question is: Can $S$ be sparse? This can be taken to mean Dirichlet density zero, for example.



I haven't thought much about this, but there are reasons to believe the answer is yes: indeed, if there are infinitely many Mersenne primes $q=2^p-1$, then the equations $1+q=2^p$ give such a sparse $S$. However, I am looking for an unconditional result.
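The Mersenne example is easy to make concrete (a small brute-force sketch of my own): each Mersenne prime $q=2^p-1$ yields the triple $(1, q, 2^p)$, and the resulting $S$ consists of $2$ together with the Mersenne primes themselves.

```python
def isprime(n):
    """Trial-division primality test, fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Triples (a, b, c) = (1, 2^p - 1, 2^p) for Mersenne-prime exponents p
triples = [(1, 2**p - 1, 2**p) for p in range(2, 20) if isprime(2**p - 1)]
S = sorted({2} | {b for _, b, _ in triples})

print(S)  # [2, 3, 7, 31, 127, 8191, 131071, 524287]
```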

How is light created? - Physics


How is light created at the atomic level?




Atoms are composed of nuclei (agglomerates of protons and neutrons) with electrons in orbitals around them; as many electrons as protons, so that atoms are normally neutral.



What keeps atoms stable is that the orbitals exist at specific energy levels, given by the quantum-mechanical solution of the potential problem of one charge in the potential well of another charge, for example the hydrogen atom. At the micro level, the charges do not orbit around their common center of mass but are described by probability distributions that give the probability of being found at a specific (x,y,z) point in space.



The carrier of the electromagnetic interaction is the photon, an elementary particle which obeys the quantum mechanical form of the Maxwell equations. It has spin 1 and zero mass, and it carries electromagnetic energy E = h*nu, where h is the Planck constant and nu the frequency of the wave that can be built up from an ensemble of such photons.



The electrons of an atom usually sit in the lowest available orbitals. Above the filled energy levels there exist orbitals that the potential predicts but that are unfilled. A disturbance of an electron in its orbital, by thermal interactions between atoms for example, can kick the electron up to such a higher orbital. When it falls back to the ground state it was in before the disturbance, electromagnetic energy is radiated in the form of a single-frequency photon whose energy is the difference between the two orbital energy levels.
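As a concrete worked example (hydrogen numbers; the helper function is my own): the n=3 to n=2 transition in hydrogen emits a photon whose energy is the difference of the two levels, giving the red H-alpha line.

```python
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.842      # h*c expressed in eV*nm

def hydrogen_line(n_hi, n_lo):
    """Photon energy (eV) and wavelength (nm) for a hydrogen n_hi -> n_lo jump."""
    e_photon = RYDBERG_EV * (1 / n_lo**2 - 1 / n_hi**2)
    return e_photon, HC_EV_NM / e_photon

e, lam = hydrogen_line(3, 2)
print(round(e, 3), round(lam, 1))  # ~1.89 eV, ~656 nm (H-alpha)
```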



This is the radiation that a mass of atoms at a certain temperature emits; in an ensemble it is called black-body radiation, as with the radiation of the Sun.



An ensemble of photons, zillions of them, builds up the classical electromagnetic wave. This statement can be shown mathematically, but it is not a simple formula. If you are interested, read this blog entry by Motl.

Sunday, 14 April 2013

Is the group von Neumann algebra construction functorial?

Well, for $s\in G$ let $\lambda(s)$ be the left-translation operator by $s$; all such operators are in the group von Neumann algebra. I guess that the hoped-for homomorphism $F:NG \rightarrow NH$ should satisfy $F(\lambda(s)) = \lambda(f(s))$ for $s\in G$, and we should have that $F$ is an (ultraweakly) continuous $*$-homomorphism. In particular, $F$ is contractive.



Then $F$ need not exist. Let $G=\mathbb{Z}$ and $H=\mathbb{Z}/n\mathbb{Z}$. Then $NH = \mathbb{C}H$, so we have a trace on $NH$ which sends $\lambda(0)$ to $1$. So if we apply $F$, and then take the trace, we should get an ultraweakly continuous functional $\phi$ on $NG$ which satisfies $\phi(\lambda(ns)) = 1$ for all $s\in\mathbb{Z}$.



But this can't happen: maybe the easiest way to see this is via the Fourier transform. Then $NG \cong L^\infty(\mathbb{T})$ and $\phi$ induces $h\in L^1(\mathbb{T})$ which satisfies $\int h(\theta) e^{ins\theta}\, d\theta = 1$ for all $s\in\mathbb{Z}$. This violates the Riemann-Lebesgue lemma.



On the other hand, if $G \subseteq H$ is an inclusion (of discrete groups, to avoid topology) then we do get an inclusion $NG \rightarrow NH$. Here's a construction. Find an index set $I$ and $(h_i)$ in $H$ such that $H$ is the disjoint union of $\{Gh_i\}$. Then define $V:\ell^2(H)\rightarrow \ell^2(G)\otimes \ell^2(I)$ by $V(\delta_h) = \delta_g\otimes\delta_i$ if $h=gh_i$ (so defined on point masses, and extended by linearity). So $V$ is unitary, and $\theta:x\mapsto V^*(x\otimes 1)V$ is a normal $*$-homomorphism $NG\rightarrow B(\ell^2(H))$. Then, for $r\in G$, $V^*(\lambda(r)\otimes 1)V(\delta_h) = V^*(\delta_{rg}\otimes\delta_i) = \delta_{rh}$ as $rg\in G$. So $\theta$ maps into $NH$, and does what we want.



Surely there is some general result, but I'm not sure of it...

soft question - The problem of infinity

This question does not have a mathematical answer.



However, different approaches to formalizing the concept of infinity have been considered in the philosophy of mathematics for a very long time (since Aristotle, at least), and have resulted in quite a bit of interesting mathematics proper. Most people here are likely familiar with the modern Cantorian concept of infinity (which is what undergirds modern set theory), and so I will describe a somewhat different mathematical conception of infinity.



So, one way of interpreting constructive mathematics is by means of "realizability interpretations". Here, we take the view that a proposition is true when it is possible to give evidence for its truth, and then inductively for each proposition we give conditions for evidence:



  • $\top$ has the string $()$ for its justifying evidence

  • $A \land B$ is justified when we can give a string $(p, q)$, where $p$ is evidence of $A$ and $q$ is evidence of $B$

  • $\bot$ has no evidence

  • $A \vee B$ is justified when we can give a string $(i, p)$, where $i$ is either 0 or 1. If $i$ is 0, then $p$ is evidence of $A$, and if $i$ is 1 it is evidence of $B$

  • $A \implies B$ is justified by a computer program $c$, if $c$ computes evidence for $B$ as an output whenever it is given evidence for $A$ as an input.

  • $\forall x.\ A(x)$ is justified by a computer program $c$, if $c$ computes evidence for $A(n)$ as an output whenever it is given a numeral $n$ as an input.
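The propositional clauses can be made concrete with a small evidence-checker sketch (my own encoding: propositions as tagged tuples; implication and quantification are deliberately omitted, since checking them would require deciding totality of programs, which is exactly the obstacle discussed next).

```python
def check(prop, ev):
    """Return True if `ev` is evidence for `prop`.
    Propositions: ("top",), ("bot",), ("and", A, B), ("or", A, B)."""
    tag = prop[0]
    if tag == "top":
        return ev == ()
    if tag == "bot":
        return False                  # nothing is evidence for falsity
    if tag == "and":
        p, q = ev
        return check(prop[1], p) and check(prop[2], q)
    if tag == "or":
        i, p = ev                     # i selects the disjunct
        return i in (0, 1) and check(prop[1 + i], p)
    raise ValueError(f"unknown proposition {tag!r}")

# (top and (bot or top)), with evidence ((), (1, ()))
print(check(("and", ("top",), ("or", ("bot",), ("top",))), ((), (1, ()))))
```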

Now, note that in the cases of implication and quantification, we ask for a computer program which is total on its inputs. So the question arises: how can we tell whether or not a given program is a legitimate realizer for a proposition? The Halting Theorem ensures that we cannot accept arbitrary programs and decide after the fact whether or not they are evidence. So we must circumscribe what we will accept to some class of total functions.



So we can decide that certain patterns of recursive program definition (for example, primitive recursion) are acceptable forms of looping, which we believe will always lead to halting programs. These patterns correspond to induction principles. Proof-theoretically stronger theories correspond to logics whose realizability interpretation allows more generous recursion schemes as realizers. In this setup, large infinite sets correspond to very strong induction principles. So this gives a way of understanding Cantorian infinities without having to posit the actual existence of sets with huge numbers of elements.



This should illustrate that how we read mathematical statements shapes what ontological commitments we make regarding them, and so you can't answer physical questions only through pure mathematics. That is to say: physicists can't get out of doing experiments. But funny readings of mathematics may help you interpret those experiments!

Friday, 12 April 2013

nt.number theory - Smallest integer not divisible by integers in a finite set

Hello all, if $a_1,a_2,\ldots,a_t$ are $t$ integers $\geq 2$, the set $G(a_1,a_2,\ldots,a_t)=\lbrace N \geq 1 \mid$ in any sequence of $N$ consecutive integers there is at least one not divisible by any of $a_1,a_2,\ldots,a_t\rbrace$ is nonempty (it contains $a_1a_2\cdots a_t$), so it has a minimal element, which we denote by $g(a_1,a_2,\ldots,a_t)$.
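For small inputs one can compute $g$ by brute force (my own throwaway code): the divisibility pattern is periodic with period dividing $a_1\cdots a_t$, so it suffices to test windows starting within one period.

```python
from math import prod

def g(divisors):
    """Least N such that every run of N consecutive integers contains
    one divisible by none of `divisors` (brute force over one period)."""
    period = prod(divisors)  # the true period is lcm(divisors), which divides this
    for N in range(1, period + 1):
        if all(any(all(x % a for a in divisors) for x in range(s, s + N))
               for s in range(1, period + 1)):
            return N

print(g([2]), g([2, 3]))  # 2 4, consistent with taking gamma(2) = 4
```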



Question 1: Is there a uniform bound $\gamma(t)$, depending only on $t$, such that $\gamma(t) \geq g(a_1,a_2,\ldots,a_t)$ for any $a_1,a_2,\ldots,a_t$? For example, we may take $\gamma(2)=4$.



Question 2: If $\gamma$ is well-defined, are any asymptotics known for $\gamma(t)$?

Thursday, 11 April 2013

ct.category theory - Is the ultraproduct concept fundamentally category-theoretic?

The notion of an ultrapower, and more generally reduced powers and their generalizations, is essentially category-theoretic. More specifically, reduced powers are essentially pro-sets. This answer is part of my own research, but these results are not ready to publish yet. Although these results relate the ultrapower construction to categories, I do not see how they could generalize to relate ultraproducts to categories.



In the other answers to this question, people have explained how ultraproducts are direct limits. It turns out that reduced powers are directed limits as well. Furthermore, I will show that, as categories, the category of generalized reduced powers is equivalent to the category of inverse systems of sets. By the category of generalized reduced powers, I mean the category of most model-theoretic generalizations of the ultrapower and reduced power constructions, such as extenders, iterated ultrapowers, limit ultrapowers [3], Boolean ultrapowers [4], and their reduced power analogues such as limit reduced powers.



If $\mathcal{C}$ is a category, then the category $\mathbf{Pro}(\mathcal{C})$ is essentially the category of inverse systems over the category $\mathcal{C}$, and it should be thought of as the category of inverse limits from the category $\mathcal{C}$. I claim that the category $\mathbf{Pro}(\mathbf{Set})$ of pro-sets is equivalent to a full subcategory of pro-filters. Furthermore, these pro-filters are the things that we want to construct reduced powers. In fact, we obtain a three-way duality between the category of pro-sets, the full subcategory of pro-filters where the transitional mappings are epimorphisms, and categories of reduced powers of structures.



$\large\mathbf{Categories}$
In this section, I will first define the categories and then state the equivalences between them.



Let $A$ be a fixed infinite set. Let $\Omega(A)$ be the algebra with universe $A$ and where every operation is a fundamental operation. Then every object in $V(\Omega(A))$ is isomorphic to a limit reduced power of $\Omega(A)$, and any elementary extension of $\Omega(A)$ is isomorphic to a limit ultrapower of $\Omega(A)$ (see [3] for information on limit ultrapowers). Also, one may represent algebras in $V(\Omega(A))$ as limit reduced powers in such a way that the homomorphisms between algebras in $V(\Omega(A))$ are induced by mappings between the underlying sets of the limit reduced powers. Since every algebra in $V(\Omega(A))$ is isomorphic to a limit reduced power of $\Omega(A)$, one should think of the algebras in $V(\Omega(A))$ as limit reduced powers.



A small nonempty category $D$ is said to be a cofiltrant category if whenever $e_{1},e_{2}\in D$, there is some $d\in D$ and morphisms $\ell_{1}:d\rightarrow e_{1},\ell_{2}:d\rightarrow e_{2}$, and if $\ell_{1},\ell_{2}:d\rightarrow e$ are morphisms, then there is some object $c$ and a morphism $m:c\rightarrow d$ with $\ell_{1}m=\ell_{2}m$. Every downward directed set is a cofiltrant category, and the notion of a cofiltrant category is the categorization of the notion of a downward directed set.



If $\mathcal{C}$ is a category, then $\mathbf{Pro}(\mathcal{C})$ is the category of all functors $F:\mathcal{D}\rightarrow\mathcal{C}$ for some cofiltrant category $\mathcal{D}$. One should think of the category $\mathbf{Pro}(\mathcal{C})$ as the category of all inverse limits in $\mathcal{C}$, since the notion of a cofiltrant category is the categorization of the notion of a downward directed set.



If $F:\mathcal{D}\rightarrow\mathcal{C},G:\mathcal{E}\rightarrow\mathcal{C}$ are objects in $\mathbf{Pro}(\mathcal{C})$, then define the set of morphisms by
$$\mathrm{Hom}(F,G)=\varprojlim_{e\in\mathcal{E}} \varinjlim_{d\in\mathcal{D}} \mathrm{Hom}(F(d),G(e)).$$
The transitional mappings in the direct and inverse limits are the canonical ones, and composition of morphisms is defined in the natural way.



The following categories are the objects of study in the papers [1] and [2].



We shall now construct a category $\mathfrak{F}$. The objects in $\mathfrak{F}$ are pairs
$(X,\mathcal{F})$ where $X$ is a set and $\mathcal{F}$ is a filter on $X$. If $(X,\mathcal{F}),(Y,\mathcal{G})\in\mathfrak{F}$, then a function $f:X\rightarrow Y$ is a morphism from $(X,\mathcal{F})$ to $(Y,\mathcal{G})$ if $f^{-1}[R]\in\mathcal{F}$ whenever $R\in\mathcal{G}$. It is easy to show that $f$ is a morphism from $(X,\mathcal{F})$ to $(Y,\mathcal{G})$ if and only if whenever $R\subseteq X$ is non-negligible (i.e. $R^{c}\notin\mathcal{F}$), the image $f[R]$ is non-negligible. The intuition behind defining the category $\mathfrak{F}$ this way is that we do not want to map non-negligible sets to negligible sets, since that is like a function mapping a non-empty set to an empty set.



Let $\mathfrak{G}$ be the quotient category of $\mathfrak{F}$ where we identify two morphisms if they differ on a negligible set. In other words, the objects in $\mathfrak{G}$ and $\mathfrak{F}$ are the same. If $(X,\mathcal{F}),(Y,\mathcal{G})$ are objects in $\mathfrak{F}$, and $f,g\in\mathrm{Hom}_{\mathfrak{F}}[(X,\mathcal{F}),(Y,\mathcal{G})]$, then $f\simeq g$ iff $\{x\in X\mid f(x)=g(x)\}\in\mathcal{F}$. Then $\mathrm{Hom}_{\mathfrak{G}}[(X,\mathcal{F}),(Y,\mathcal{G})]=\mathrm{Hom}_{\mathfrak{F}}[(X,\mathcal{F}),(Y,\mathcal{G})]/\simeq$. Composition in $\mathfrak{G}$ is defined in the obvious manner.



While the categories $\mathfrak{F}$ and $\mathfrak{G}$ were both studied in [1] and [2], the category $\mathfrak{G}$ is more fundamental than $\mathfrak{F}$ and deserves to be called the category of filters, while $\mathfrak{F}$ does not seem to be very useful.



Morphisms in $\mathfrak{G}$ induce homomorphisms between reduced powers. If $\mathcal{A}$ is an algebraic structure and $[f]:(X,\mathcal{F})\rightarrow(Y,\mathcal{G})$ is a morphism in $\mathfrak{G}$, then we define a homomorphism $[f]^{\bullet}:\mathcal{A}^{Y}/\mathcal{G}\rightarrow\mathcal{A}^{X}/\mathcal{F}$ by letting $[f]^{\bullet}[\ell]=[\ell\circ f]$.




$\mathbf{Proposition}$ Let $[f]\in \mathrm{Hom}_{\mathfrak{G}}[(X,\mathcal{F}),(Y,\mathcal{G})]$. The
following are equivalent.

  1. $[f]$ is an epimorphism in $\mathfrak{G}$.

  2. $\mathcal{G}=\{S\subseteq Y\mid f^{-1}[S]\in\mathcal{F}\}$

  3. If $R\in\mathcal{F}$, then $f[R]\in\mathcal{G}$.

  4. The canonical mapping $[f]^{\bullet}:A^{Y}/\mathcal{G}\rightarrow A^{X}/\mathcal{F}$ is injective for each set $A$.




Let $\mathbf{PF}$ be the full subcategory of $\mathbf{Pro}(\mathfrak{G})$ whose objects consist of inverse systems $((X_{d},\mathcal{F}_{d})_{d\in D},(\ell_{d_{1},d_{2}})_{d_{1}\leq d_{2}})$ over directed sets $D$ such that each $\ell_{d_{1},d_{2}}$ is an epimorphism in $\mathfrak{G}$.



$mathbf{Humor}:$ I call the objects in $mathbf{PF}$ pro-filters. That is, professional filters. The pro-filters play football in the National Filter League (NFL).



If $\kappa$ is a cardinal, then let $\mathbf{PF}_{\kappa}$ denote the full subcategory of $\mathbf{PF}$ consisting of inverse systems $(X_{d},\mathcal{F}_{d})_{d\in D}$ such that $|X_{d}|<\kappa$ for each $d\in D$. Let $\mathbf{Set}_{\kappa}$ denote the category of sets of cardinality less than $\kappa$. Let $\mathbf{U}_{\kappa}$ be the subcategory of $\mathbf{PF}_{\kappa}$ consisting of all inverse systems $(X_{d},\mathcal{F}_{d})_{d\in D}$ where each $\mathcal{F}_{d}$ is an ultrafilter.




$\mathbf{Theorem}$ Let $A$ be an infinite set.



  1. The category $\mathbf{PF}$ is covariantly equivalent to the category $\mathbf{Pro}(\mathbf{Set})$.


  2. The functor defined by $(X_{d},\mathcal{F}_{d})_{d\in D}\mapsto\varinjlim_{d\in D}\Omega(A)^{X_{d}}/\mathcal{F}_{d}$ is a contravariant equivalence between the category $\mathbf{PF}_{|A|^{+}}$ and the category $V(\Omega(A))$. Furthermore, this functor restricts to an equivalence between the category $\mathbf{Pro}(\mathbf{U}_{|A|^{+}})$ and the elementary extensions of $\Omega(A)$.


  3. The category $\mathbf{Pro}(\mathbf{Set}_{|A|^{+}})$ is contravariantly equivalent to the category $V(\Omega(A))$.




The equivalence between the categories $\mathbf{PF}$ and $\mathbf{Pro}(\mathbf{Set})$ can be described explicitly. If $((X_{d},\mathcal{F}_{d})_{d\in D},(\ell_{d_{1},d_{2}})_{d_{1}\leq d_{2}})\in\mathbf{PF}$, then add a transition mapping $f:X_{d}\rightarrow X_{e}$ whenever $f\in\mathrm{Hom}_{\mathfrak{F}}((X_{d},\mathcal{F}_{d}),(X_{e},\mathcal{F}_{e}))$ and $[f]=\ell_{d,e}$.



If $F:\mathcal{D}\rightarrow\mathbf{Set}$ is a pro-set, then for each $d\in\mathcal{D}$, the collection $\{\mathrm{Im}(F(f))\mid f:e\rightarrow d\textrm{ for some }e\in\mathcal{D}\}$ is a filterbase that generates a filter on $X_{d}:=F(d)$ which we shall denote by $\mathcal{F}_{d}$.
Partially order $\mathcal{D}$ so that $d\leq e$ iff there is a morphism from $d$ to $e$. If $d\leq e$, then let $\ell_{d,e}:(X_{d},\mathcal{F}_{d})\rightarrow(X_{e},\mathcal{F}_{e})$ be the morphism where
$\ell_{d,e}=[F(f)]$ for some $f:d\rightarrow e$. It is easy to show that the morphism $\ell_{d,e}$ does not depend on the choice of $f$, and that $((X_{d},\mathcal{F}_{d}),(\ell_{d,e})_{d\leq e})\in\mathbf{PF}$.




$\mathbf{Corollary}$ If $|A|<|B|$, then the category $V(\Omega(A))$ is
equivalent to a full subcategory of $V(\Omega(B))$.



$\mathbf{Corollary}$ If $(X_{d},\mathcal{F}_{d})_{d\in D},(Y_{e},\mathcal{G}_{e})_{e\in E}\in\mathbf{PF}_{|A|^{+}}$, and the algebras
$\varinjlim\Omega(A)^{X_{d}}/\mathcal{F}_{d}$ and
$\varinjlim\Omega(A)^{Y_{e}}/\mathcal{G}_{e}$ are
isomorphic, then
$\varinjlim\mathcal{A}^{X_{d}}/\mathcal{F}_{d}$ and
$\varinjlim\mathcal{A}^{Y_{e}}/\mathcal{G}_{e}$ are
isomorphic for each structure $\mathcal{A}$.




The above result still holds when one replaces the direct limit of reduced powers with other reduced power and ultrapower constructions.



$\large\textbf{An Application}$



We shall now give an application showing that moving between algebras and these categories can be useful. The following result can be proved using the duality between pro-filters and algebras (the proof also uses a version of the Feferman-Vaught theorem, and the Keisler-Shelah isomorphism theorem or Frayne's theorem).




$\mathbf{Theorem}$ Let $\mathbf{2}$ denote the two element Boolean
algebra. Let
$\mathcal{A}\mapsto\mathbf{R}(\mathcal{A}),\mathcal{A}\mapsto\mathbf{S}(\mathcal{A})$
be two distinct reduced power constructions (such as limit reduced
powers, Boolean powers, etc.). If $\mathbf{R}(\mathbf{2})$ and
$\mathbf{S}(\mathbf{2})$ are elementarily equivalent, then there is a
sequence of ultrafilters $\mathcal{U}_{n}$ such that for every
structure $\mathcal{A}$, we have
$$\varinjlim_{n\rightarrow\infty}(\mathbf{R}(\mathcal{A}))^{\mathcal{U}_{0}\cdots\mathcal{U}_{n}}\simeq\varinjlim_{n\rightarrow\infty}(\mathbf{S}(\mathcal{A}))^{\mathcal{U}_{0}\cdots\mathcal{U}_{n}}.$$




We take note that in the above result we cannot replace the direct limit of ultrapowers with a single ultrapower. For example, suppose $\mathcal{U}$ is a non-principal ultrafilter, and
$\mathbf{R}(\mathcal{A})=\mathcal{A}$ and $\mathbf{S}(\mathcal{A})=\mathcal{A}^{\mathcal{U}}$ for all structures $\mathcal{A}$. Then for each non-principal ultrafilter $\mathcal{V}$, we have $\mathcal{V}<_{RK}\mathcal{U}\cdot\mathcal{V}$. In particular, the ultrafilters
$\mathcal{V}$ and $\mathcal{U}\cdot\mathcal{V}$ are not Rudin-Keisler equivalent, so there is some structure $\mathcal{A}$ where the structures $\mathcal{A}^{\mathcal{V}}$ and $\mathcal{A}^{\mathcal{U}\cdot\mathcal{V}}\simeq(\mathcal{A}^{\mathcal{U}})^{\mathcal{V}}$ are not isomorphic.



I must also mention that the elementary classes of Boolean algebras have a particularly nice and simple classification. There is a countable set of Boolean algebra invariants, called the elementary invariants, and two Boolean algebras are elementarily equivalent if and only if they have the same elementary invariants. In particular, to check that $\mathbf{R}(\mathbf{2})$ and $\mathbf{S}(\mathbf{2})$ are elementarily equivalent, it suffices to show that these Boolean algebras have the same elementary invariants. The reader is referred to [5] or [6, Vol. 1] for more information on these elementary invariants.



Oh the joy of cats!!!



[1] Blass, Andreas. Two closed categories of filters. Fund. Math. 94 (1977), no. 2, 129–143.



[2] Koubek, Václav; Reiterman, Jan. On the category of filters. Comment. Math. Univ. Carolinae 11 (1970), 19–29.



[3] Keisler, H. Jerome. Limit ultrapowers. Trans. Amer. Math. Soc. 107 (1963), 382–408.



[4] Mansfield, Richard. The theory of Boolean ultrapowers. Ann. Math. Logic 2 (1970/71), no. 3, 297–323.



[5] Chang, Chen Chung, and H. Jerome Keisler. Model Theory. Amsterdam: North-Holland Pub., 1973.



[6] Monk, J. Donald, Robert Bonnet, and Sabine Koppelberg. Handbook of Boolean Algebras. Amsterdam: North-Holland, 1989.

Wednesday, 10 April 2013

cosmology - Do the E and B modes of the CMB polarization have anything to do with electric and magnetic fields?

In this context, the $\kappa$ you are referring to is the dimensionless (projected) matter density field. It is gravitational lensing jargon, and is usually just referred to as the 'convergence' field.



What you have written there (that a vector field can be split up into a divergence-less component and a curl-less component) is the Helmholtz decomposition, and it holds for essentially any sufficiently smooth vector field. @chris is correct that the E and B modes refer not to the electromagnetism you are probably used to hearing these terms used for, but to the polarization states of the cosmic microwave background due to quantum fluctuations in the early universe. They are called 'E' and 'B' only because they are reminiscent of electric and magnetic field lines.
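This split is easy to see in a toy flat-sky numpy sketch (my own illustration, nothing to do with the actual CMB pipelines, which work with spin-2 fields on the sphere): in Fourier space, the component of a vector field along $\hat{k}$ is the curl-free "E-like" part and the component across $\hat{k}$ is the divergence-free "B-like" part, so a pure gradient field has no B part at all.

```python
import numpy as np

N = 64
k = np.fft.fftfreq(N) * N                   # integer wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                              # avoid 0/0 at the mean mode

# band-limited random scalar potential phi (in Fourier space)
rng = np.random.default_rng(0)
phik = np.fft.fft2(rng.standard_normal((N, N)))
phik[(np.abs(kx) > N // 4) | (np.abs(ky) > N // 4)] = 0.0

# a pure gradient field v = grad(phi); in Fourier space v_k = i k phi_k
vxk, vyk = 1j * kx * phik, 1j * ky * phik

# E: component of v_k along k-hat (curl-free); B: component across it
E = (kx * vxk + ky * vyk) / np.sqrt(k2)
B = (kx * vyk - ky * vxk) / np.sqrt(k2)

# B vanishes to rounding error, while E carries all the power;
# the real-space field could be recovered with np.fft.ifft2 if desired
print(np.abs(E).max(), np.abs(B).max())
```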



The big find was the measurement of B-modes. As a reminder, here is what these E- and B-mode patterns in the CMB would look like:
[Figure: E-mode (reflection-symmetric) and B-mode (handed) polarization patterns]



The big thing here is that E-modes are symmetric upon reflection, whereas B-modes are clearly not ($B>0$ maps into $B<0$). In order to produce B-mode polarization, you need two things:



1) Thomson scattering of photons by free electrons (present at the surface of last scattering as well as at reionization).



2) A temperature quadrupole.



The exciting thing about the second item is that there are only a few things which can produce just such a quadrupole moment, and one of them is gravitational waves! A gravitational wave physically warps space-time as it travels, alternately squeezing and stretching the plasma it passes through, causing these types of distortions:



[Figures: distortion patterns induced by a passing gravitational wave]



This is the source of the temperature quadrupole moments. Sides which are squeezed are hotter and sides which are stretched are cooler.



Gravitational lensing can also produce small amounts of B-mode-like features in the CMB polarization, but I believe they occur on much smaller scales and can therefore be ruled out as the cause of the BICEP2 findings. Lensing tends to stretch things out perpendicular to the radial vector (in the $\hat{\theta}$ direction):
[Figure: weak lensing distortion of background images]



The reason why this matters is that lensing provides a mechanism for turning E-mode polarization (take the $E<0$ mode, for example) into a B-mode type polarization. Most plots of the BICEP2 results show the effect of lensing at much higher multipole moments (i.e., smaller physical scales), so I believe they have a good way of differentiating between the two types of secondary CMB anisotropies.



[Figure: BICEP2 results]

Tuesday, 9 April 2013

dg.differential geometry - Kahler differentials and Ordinary Differentials

There is a discussion of this issue at the $n$-category cafe. I'd encourage people who were interested in this question to head over there and see if they can lend some insight.




Here is a sketch of a proof that $d(e^x)\neq e^x\,dx$ in the Kahler differentials of $C^{\infty}(\mathbb{R})$. More generally, we should be able to show that, if $f$ and $g$ are $C^{\infty}$ functions with no polynomial relation between them, then $df$ and $dg$ are algebraically independent, but I haven't thought through every detail.



Choose any sequence of points $x_1, x_2, \ldots$ in $\mathbb{R}$ tending to $\infty$. Inside the ring $\prod_{i=0}^{\infty}\mathbb{R}$, let $X$ and $e^X$ be the sequences $(x_i)$ and $(e^{x_i})$. Choose a nonprincipal ultrafilter on the index set and let $K$ be the corresponding quotient of $\prod_{i=0}^{\infty}\mathbb{R}$.



$K$ is a field. Within $K$, the elements $X$ and $e^X$ do not obey any polynomial relation with real coefficients. (Because, for any nonzero polynomial $f$, $f(x,e^x)$ only has finitely many zeroes.) Choose a transcendence basis $\{z_a\}$ for $K$ over $\mathbb{R}$ containing $X$ and $e^X$ (possible since they are algebraically independent) and let $L$ be the field $\mathbb{R}(z_a)$.



Any function $\{z_a\}\to L$ extends to a unique derivation $L\to L$ that is trivial on $\mathbb{R}$. In particular, we can find $D:L\to L$ so that $D(X)=0$ and $D(e^X)=1$. Since $K/L$ is algebraic and we are in characteristic zero, $D$ extends to a unique derivation $K\to K$. Taking the composition $C^{\infty}(\mathbb{R})\to K\to K$, we have a derivation $C^{\infty}(\mathbb{R})\to K$ with $D(X)=0$ and $D(e^X)=1$. By the universal property of the Kahler differentials, this derivation factors through the Kahler differentials. So there is a quotient of the Kahler differentials where $dx$ becomes $0$ and $d(e^x)$ does not, so $dx$ does not divide $d(e^x)$.



I'm traveling and can't provide references for most of the facts I am using about derivations of fields, but I think this is all in the appendix to Eisenbud's Commutative Algebra.

Reps of groups and reps of algebras

Let us consider for simplicity the complex semi-simple case. As you mention, every irreducible representation $\rho$ of a semi-simple finite dimensional Lie algebra $g$ integrates to a representation of the corresponding simply connected Lie group $G$. But even if we suppose $\ker\rho=0$, the representation of $G$ may well have a kernel, a finite central subgroup $H$ of $G$. So we obtain representations of $G/H'$ where $H'$ is a finite central subgroup contained in $H$, but not of other groups with Lie algebra $g$.



So one could ask: given a Lie group $G'$ with Lie algebra $g$, when does a representation $\rho$ of $g$ with highest weight $\Lambda$ integrate to a representation of $G'$? The answer: if and only if $\Lambda$ belongs to the character lattice of a maximal torus $T$ of $G'$. (Recall that the character lattice of $T$ is the discrete additive subgroup of the dual of the Lie algebra of $T$ spanned by the differentials of the homomorphisms $T\to\mathbf{C}^{\ast}$.)



One can also ask a similar question: given a representation $\rho$ of $g$ with $\ker\rho=0$ and highest weight $\Lambda$, how does one compute the kernel of the resulting representation $R$ of the simply connected group $G$? The answer is as follows. The dual of the Lie algebra $t$ of $T$ contains the weight lattice $P$ and the root lattice $Q$. Recall that the weight lattice is the lattice spanned by the fundamental weights and the root lattice is spanned by the roots, i.e., the weights of the adjoint representation. We have $Q\subset P$. Consider the dual lattices $Q^{\vee}\subset P^{\vee}$; these are formed by all elements $a$ of the Lie algebra of $T$ such that $l(a)\in\mathbf{Z}$ for all $l\in P$, resp. all $l\in Q$.



Inside $P^vee$ there is a sublattice formed by all $ain t$ such that $Lambda(a)inmathbf{Z}$; it contains $Q^vee$ and the quotient by $Q^vee$ is naturally identified with the kernel of $R$ (by exponentiating). There is a similar result when $rho$ is reducible -- in that case we just consider the sublattice formed by all elements of the tangent space such that all weights that take integral values on them.



Example: when $G=SL_2(\mathbf{C})$, we can identify $Q$ with the sublattice of $\mathbf{R}$ spanned by $1$, and $P$ will be spanned by $\frac{1}{2}$. Then the representations with integral highest weight will have kernel $\mathbf{Z}/2$ and the representations with half-integral highest weight will have trivial kernel.



Hopefully one of these two questions also covers the questions of the posting.

ra.rings and algebras - Universal functors according to Cohn.

I have the 1965 edition of Cohn's book, which apparently differs from the one you cite, since what you refer to as Proposition 1.1 is Theorem 1.2 in my copy. Nevertheless, I hope the following comments may be useful.



First, about the mysterious "morphisms" between objects that live in different categories, Cohn writes (near the top of page 110 in my copy) "we generally refer to the elements of $F(A,a)$ as the admissible morphisms of the representation." I believe this explains the anomaly that bothered you.



Let me also mention that this overloading of the word "morphism" is not as silly as it might at first appear. There is a natural way to "glue together" two categories $\mathcal{L}$ and $\mathcal{K}$ using a functor $F$ as in your question. Start with the disjoint union of $\mathcal{L}$ and $\mathcal{K}$, and then add Cohn's admissible morphisms as actual morphisms from an object of $\mathcal{L}$ to one of $\mathcal{K}$. Of course, having added these morphisms, you need to define compositions involving them. But $F$ tells you how to do that: If $\rho:A\to a$ is one of the new morphisms, you want to pre-compose it with morphisms $\lambda:B\to A$ of $\mathcal{L}$ and to post-compose it with morphisms $\kappa:a\to b$ of $\mathcal{K}$. For the former, act on $\rho$ by $F(\lambda,1_a)$; for the latter, use $F(1_A,\kappa)$. The functoriality of $F$ ensures that this definition of composition for the new morphisms, together with the pre-existing composition laws of $\mathcal{L}$ and $\mathcal{K}$, satisfies the associative law.



I conjecture that this construction is the "join" mentioned in Herman Stel's answer, but I haven't checked "Higher Topos Theory" to make sure. I do, however, assure you that this construction was the mental picture of $F$ that came naturally to my mind when I started reading your question.

Reference for Learning Global Class Field Theory Using the Original Analytic Proofs?

The nicest source that I know is Hasse's Zahlbericht (originally published as Bericht über neuere Untersuchungen und Probleme aus der Theorie der algebraischen Zahlkörper in 1930 and republished in 1965), the first part of which gives the ideal-theoretic approach with Takagi's proofs, and the second part of which gives Artin's reciprocity law along with many charming explicit reciprocity laws. The original papers of Takagi (available in his collected works) are also readable. Herbrand wrote a nice summary of the state of the art in 1935 (Le développement moderne de la théorie des corps algébriques; corps de classes et lois de reciprocite (Mem. Sci. Math. 75) Paris: Gauthier-Villars. 72 p. (1935)), which is available in most university libraries.



Lang in his Algebraic Number Theory (originally based on lectures of E. Artin), in fact, follows the original proofs fairly closely, although he uses ideles and the Herbrand quotient to streamline the proofs. One can get a sense for the sort of simplification that ideles afford by reading Franz Lemmermeyer's article on the development of genus theory (in The Shaping of Arithmetic after C.F. Gauss's Disquisitiones Arithmeticae, Springer (2007), or at Lemmermeyer's own web page). In that article Lemmermeyer discusses the history of the famous first and second inequalities from their original appearance (with algebraic proofs that, in a sense, reappeared in Chevalley's treatment of class field theory) in Gauss's Disquisitiones through their 19th-century Dirichlet-style analytic treatment and to the form used in the class field theory of the 1920's. Since Lemmermeyer (following the historical sources) works with full rings of integers (without any idelic techniques), there are many tricky local terms in the formulas. Perhaps that is what you're hoping to see!

Monday, 8 April 2013

ca.analysis and odes - Is there a "Bezout's theorem" for analytic curves?

I am sure that the theorem you want is true, but I am missing one technical reference.



Claimed Theorem: Let $f(x,y)$ and $g(x,y)$ be entire functions on $\mathbb{C}^2$. Then either $\{f=0\}$ and $\{g=0\}$ have a one dimensional overlap, or $\{f=g=0\}$ is discrete.



"Proof": If $\{f=g=0\}$ is not discrete, then it has an accumulation point $z$. Suppose $\{f=0\}$ is smooth at $z$. Then, by the implicit function theorem, we can locally parameterize $\{f=0\}$ as $\{(a(t),b(t)):t\in D\}$ where $D$ is a small disc and $a$ and $b$ are analytic functions on $D$, with $(a(0),b(0))=z$. Then the function $g(a(t),b(t))$ has infinitely many zeroes with an accumulation point at $t=0$, so it must be identically zero. Thus, $g$ vanishes on an open subset of $\{f=0\}$. QED



Now, the gap in the above is that $z$ might not be a smooth point.



What I am quite certain is true, but I don't know a reference for, is that we have resolution of singularities for analytic germs the same way we do for polynomials. So, for any nonzero analytic function $f(x,y)$ and any point $z$, there is a neighborhood of $z$ where we can factor $f$ as $f_1 f_2\cdots f_r$, each an analytic function, such that each $\{f_i=0\}$, near $z$, can be parameterized as $\{(a_i(t),b_i(t)):t\in D\}$. One can probably prove this by brute force, but I'm sure one of our readers knows a reference.

jupiter - How is the diameter of a gas giant calculated?

I won't argue with the Wikipedia definition (although the NASA Jupiter fact sheet lists it as the radius at 1 bar), but just to point out that the scale height of the atmosphere of Jupiter is given by $h\sim kT/mg$, where $T$ is the temperature, $m$ is the mean molecular mass and $g$ is the local gravity. Putting in some numbers: $g\simeq 24.8\ \mathrm{m\,s^{-2}}$, $m\simeq 2.2\,m_u$, $T\simeq 165\ \mathrm{K}$ gives $h\simeq 25\ \mathrm{km}$. Thus the pressure changes over a distance that is extremely small compared with the actual radius of a gas giant, and very small compared to the difference between, for instance, the polar and equatorial radii of a gas giant (like Jupiter) with significant rotation.
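The arithmetic above can be reproduced in a few lines (the Jupiter numbers are the ones quoted in the text; the constants are standard):

```python
# Back-of-envelope check of the isothermal scale height h = k T / (m g)
# with the Jupiter-like numbers quoted above.
k_B = 1.380649e-23       # Boltzmann constant, J/K
m_u = 1.66053906660e-27  # atomic mass unit, kg
T = 165.0                # temperature near the 1-bar level, K
m = 2.2 * m_u            # mean molecular mass (mostly H2, some He)
g = 24.8                 # local gravity, m/s^2

h = k_B * T / (m * g)
print(h / 1e3)           # scale height in km, roughly 25
```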



So, whether you give a planet's radius at 1 bar or 10 bar isn't going to make a lot of difference ($<100$ km).



However, your comment about whether the definition is analogous to the photosphere of a star does raise an interesting question. The effective opacity radius of a gas giant can vary with the wavelength of light. It is this property that can be used in conjunction with transiting exoplanetary systems to learn something about their atmospheres. For instance, it takes less physical depth of gas to block wavelengths corresponding to the strong 589 nm sodium doublet, so the exoplanet is opaque at larger radii at this wavelength and thus has a (slightly) deeper transit. This provides the opportunity for transit transmission spectroscopy, e.g. http://adsabs.harvard.edu/abs/2012MNRAS.426.1663S



Another example is that at UV and X-ray wavelengths there have been observations suggesting that hot Jupiters have very extended (and opaque to high energy photons) atmospheres and may be being photoevaporated. e.g. http://adsabs.harvard.edu/abs/2013ApJ...773...62P



EDIT: So this raises an important distinction. A definition based on the radius at a particular pressure is mostly used by theorists and solar system scientists. However, we still don't know what the atmospheric structures of exoplanets are and we can't measure their pressures. The radii quoted for exoplanets are more akin to how the radius of a star is measured: based on its opacity at the wavelengths being observed.

geology - Radioactive decay is the main source of the earth's interior heat: fact or theory?

There are really only six ways you could heat the interior of a planet: gravitational collapse and compression, radioactive decay, fusion, crystallization, a very large impactor, and tidal flexing.



Fusion is out because Earth would be a rock-star. (Been waiting to use that one.)



Of the remaining possibilities, radioactive decay (chiefly of uranium, thorium, and potassium-40) explains the majority of Earth's current temperature gradient.



http://en.wikipedia.org/wiki/Geothermal_gradient

ca.analysis and odes - Space of rapidly decreasing functions

It's (still) not completely clear what space you are working in since you haven't clarified the confusions from Yemon Choi and fedja's comments. But it may help to note an analogous problem that is well-defined and where there is a clear strategy. That strategy may adapt to your situation (whatever that is).



This analogous situation is that of periodic functions using the usual basis of periodic exponentials, $e^{2\pi inx}$. The key here is:




$g$ is $C^\infty$ if and only if the Fourier coefficients $c_n(g)=\int_0^1 g(x)e^{-2\pi inx}\,dx$ are rapidly decreasing.




Then for $g$ in $L^2$, $c_n(g)$ is square summable and so $c_n(g)e^{-nt}$ for $t>0$ is rapidly decreasing (that is, $\lim_{n\to\infty}n^k c_n(g)e^{-nt}=0$ for each $k$) since exponentials beat polynomials.
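The smoothness-decay equivalence is easy to see numerically (a sketch of mine, using the DFT as a stand-in for the Fourier coefficients): for a smooth periodic $g$, the computed $c_n$ drop below double-precision noise within a couple of dozen modes.

```python
import numpy as np

N = 256
x = np.arange(N) / N
g = np.exp(np.sin(2 * np.pi * x))       # smooth (indeed analytic), 1-periodic

# c_n = int_0^1 g(x) e^{-2 pi i n x} dx, approximated by DFT / N
c = np.fft.fft(g) / N
freq = np.fft.fftfreq(N, d=1.0 / N)     # the integer n attached to each entry of c

# c_0 is the mean of g; by |n| ~ 20 the true coefficients are already far
# below machine precision, so the computed ones sit at the numerical floor
tail = np.abs(c[np.abs(freq) >= 20]).max()
print(abs(c[0]), tail)
```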



So I would try to find out a characterisation of the Schwartz space of rapidly decreasing functions in terms of the coefficients of their expansions with respect to the Hermite polynomials (suitably weighted). I would expect such a characterisation to be well-known, but I don't know it off the top of my head.

tag removed - Interpolation by a function whose second derivative is bounded

It isn't sufficient. Suppose that the elements of $X$ are
$$x_1 < x_2 < \cdots < x_n,$$
and suppose for simplicity that $\epsilon = 1$. Then the values $f(x_k)$ carry the same information as the integrals
$$\langle p, f'' \rangle = \int_{x_1}^{x_n} p(x) f''(x)\, dx,$$
where $p(x)$ is a continuous, piecewise-linear function with $p(x_1) = p(x_n) = 0$. You can convert between such an integral and linear combinations of values of $f(x)$ by integrating by parts twice. For any such test function $p$, the inequality
$$|\langle p, f'' \rangle| \le \int_{x_1}^{x_n} |p(x)|\, dx$$
holds. Moreover, many of these inequalities are logically independent. More precisely, the extremal choices of $p(x)$ are those for which the $p(x_k)$ alternate in sign for $i \le k \le j$, and $p(x_k) = 0$ when $k < i$ or $k > j$. The idea in showing that such a $p$ is extremal is to make a dual choice of $f''$ that switches between $1$ and $-1$ when $p$ crosses the $x$ axis. (I apologize for skipping some of the logic in the extremality calculation, but I think that this is correct.) The resulting set of values for $f(\vec{x})$ is not a polytope, but a certain convex region with both flat and curved boundary.



For example, consider your statistic,
$$f_k = \frac{(x_k-x_{k-1})f(x_{k+1})-(x_{k+1}-x_{k-1})f(x_k)+(x_{k+1}-x_k)f(x_{k-1})}{(x_{k+1}-x_k)(x_{k+1}-x_{k-1})(x_k-x_{k-1})},$$
for consecutive triples of points, and suppose that $n=4$. Then the pair $(f_2,f_3)$ must lie inside the square $[-\frac12,\frac12]^2$; you identified two other pairs of inequalities, but they do not clip the square further. What actually happens is that two of the corners of the square are rounded by parabolas. The supporting lines of the parabolas come from the test function $p$ that is a piecewise-linear interpolation of $(0,-1,a,0)$ with $a > 0$.
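The statistic $f_k$ is the second divided difference $f[x_{k-1},x_k,x_{k+1}]$, which equals $f''(\xi)/2$ for some $\xi\in(x_{k-1},x_{k+1})$; so $|f''|\le 1$ forces $|f_k|\le\frac12$. Here is a quick numerical spot-check of that bound (my own, purely illustrative), using $f=\sin$, for which $|f''|\le 1$:

```python
import math
import random

def second_divided_difference(f, a, b, c):
    # f[a, b, c]; equals f''(xi)/2 for some xi in (a, c)
    return (f(a) / ((a - b) * (a - c))
            + f(b) / ((b - a) * (b - c))
            + f(c) / ((c - a) * (c - b)))

random.seed(0)
worst = 0.0
for _ in range(1000):
    a, b, c = sorted(random.uniform(0.0, 10.0) for _ in range(3))
    if b - a < 1e-3 or c - b < 1e-3:
        continue        # keep the points separated for numerical stability
    worst = max(worst, abs(second_divided_difference(math.sin, a, b, c)))

print(worst)            # stays below 1/2, as predicted
```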

Sunday, 7 April 2013

earth - Can the next possible ice age be projected based upon the Milankovitch cycle?

This paper seems to discuss what you've asked: http://www.sciencemag.org/content/297/5585/1287.full. The next ice age is expected to begin about 50,000 years from now. However, I think it's still debatable how accurate predictions based on Milankovitch cycles are, and how much of an effect humans are likely to have on this part of the Earth system.

Saturday, 6 April 2013

ac.commutative algebra - Can a non-surjective polynomial map from an infinite field to itself miss only finitely many points?

You mentioned that for finite fields, such $f$ exist. Under the following assumption, I believe there is an infinite field over which such an $f$ exists. (My field theory is very weak, so I'm not sure how obviously correct or obviously incorrect this assumption is.)



There is a sequence of finite fields $F_q$ with appropriate polynomials $f_q$ such that



i) $q_{1}<q_{2}$ implies $F_{q_{1}}$ has fewer elements than $F_{q_{2}}$;



ii) $\deg(f_{q})\leq n$ for a fixed $n\in\mathbb{N}$ (uniformly bounded);



iii) the number of points missed by $f_q$ is uniformly bounded above by some $m\in\mathbb{N}$.
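One can at least probe assumptions (i)-(iii) experimentally. The sketch below (my own; it only gathers data and proves nothing about the assumption) searches small prime fields for the non-surjective polynomial map of degree at most $n$ that misses the fewest values:

```python
from itertools import product

# For each small prime p, find the minimum number of values missed by a
# non-surjective polynomial map F_p -> F_p of degree at most n.
def missed(coeffs, p):
    image = {sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
             for x in range(p)}
    return p - len(image)

n = 3                       # bound on the degree, as in (ii)
results = {}
for p in [3, 5, 7]:
    best = None             # fewest missed values among non-surjective maps
    for coeffs in product(range(p), repeat=n + 1):
        m = missed(coeffs, p)
        if m > 0 and (best is None or m < best):
            best = m
    results[p] = best
print(results)              # e.g. x^2 over F_3 misses exactly one value
```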



Under these assumptions, construct an infinite field as follows:



Let $F$ be the set of the usual field axioms (expressed in first order logic).



Let $\psi$ be the first order statement "there are coefficients $a_{0}$ through $a_{n}$ and there are points $y_{1}$ through $y_{m}$ such that for any $x$ we have $a_{n}x^{n}+\cdots+a_{1}x+a_{0}\neq y_{k}$ for every $k$, and for all $w$ not equal to any of $y_{1}$ through $y_{m}$ there is an $x$ such that $a_{n}x^{n}+\cdots+a_{0}=w$".



More colloquially, $\psi$ says "the polynomial $f(x)=a_{n}x^{n}+\cdots+a_{0}$ misses $y_1$ through $y_m$ but nothing else".



(One can, e.g., set $a_{n}=0$ or $y_{1}=y_{2}$ if, for a given finite field, the degree is smaller or $f_q$ misses fewer points.)



Let $\phi_k$ be the first order statement "there are at least $k$ elements" (i.e., there exist $x_{1}$ through $x_{k}$ which are pairwise distinct).



Finally, set $T=F\cup\{\psi\}\cup\{\phi_{n}\mid n\in\mathbb{N}\}$.



A model of $T$ is simply a set with interpretations for everything such that all the statements of $T$ are satisfied. In other words, a model is a field (because it satisfies $F$) which is infinite (because it simultaneously satisfies all of the $\phi_n$) and which has a polynomial like you want (because of $\psi$).



Gödel's completeness theorem says that $T$ has a model iff $T$ is consistent. The compactness theorem for first order logic says that $T$ is consistent iff every finite subset of $T$ is consistent.



Hence, by applying Gödel's completeness theorem again, we need only show that every finite subset of $T$ has a model.



Choosing a finite $T_{0}\subseteq T$, we may, without loss of generality, enlarge it by including $F$ and $\psi$, because a model of $T_{0}\cup F\cup\{\psi\}$ will be a model of $T_{0}$.



Now, since $T_{0}$ is finite, there is a largest $N$ such that $\phi_{N}$ is in $T_{0}$. Because of this, a model of $T_{0}$ is simply a finite field of cardinality at least $N$ together with a polynomial $f$ of the required kind (with the bounds on $\deg(f)$ and on the number of points missed in the image). But the existence of such a field was precisely the assumption made at the top of the post.



Now, hopefully someone can shed some light as to whether or not the assumption is true.