Saturday, 31 May 2014

arithmetic geometry - p-divisible groups of superspecial abelian varieties

Let $p$ be a prime and $F$ be an algebraic closure of the field with $p$ elements. I will consider abelian varieties over $F$ up to prime-to-$p$ isogeny. Principal polarizations will be $\mathbb{Q}$-homogeneous principal polarizations in this category.



Let $A$ be a superspecial abelian variety over $F$, of dimension $g>1$ and endowed with a principal polarization $L$.
Let $f\colon A(p)\rightarrow A(p)$ be an automorphism of the $p$-divisible group of $A$, preserving the polarization form of $A(p)$ (which I still call $L$) up to a unit of $\mathbb{Z}_p$. It is known (I think) that one can always find an abelian variety $A'$ over $F$, a principal polarization $L'$ of $A'$, and a quasi-isogeny of principally polarized abelian varieties $f_{ab}\colon (A,L)\rightarrow(A',L')$ such that the $p$-divisible group of $(A',L')$ is $(A(p),L)$, and such that $f_{ab}$ induces the original map $f$ on $p$-divisible groups. (If someone has a reference for this, it would be helpful.)



I have two questions:



1) Is it true in general that $f_{ab}$ is an isomorphism in the category of abelian varieties up to prime-to-$p$ isogeny? It seems to me that $f_{ab}$ need not be an iso, even though $A'$ and $A$ are isomorphic.



2) Is it necessarily true that the pair $(A,L)$ is isomorphic to the pair $(A',L')$ in the category of homogeneously principally polarized abelian varieties up to prime-to-$p$ isogeny? I think the answer is still no.



Thanks

galaxy - Milky way: How do we know its appearance?

The Milky Way is a young galaxy, filled with neutral hydrogen along its arms.
This hydrogen can undergo a state transition. For any individual atom this is a rare event, but that is compensated by the sheer amount of hydrogen: even though the transition is rare, the combined signal is strong because an enormous number of atoms can exhibit it.
The basics of this transition are explained by quantum physics through a change between energy levels of the hydrogen atom, and can be summarized in the following picture:



(image: diagram of the hyperfine energy levels of the hydrogen atom)



The electron in the atom drops from the level in which its spin is parallel to that of the nuclear proton to the lower level, in which the spin is antiparallel.
When this happens, the atom emits a photon, known as the 21 cm radiation, or hydrogen line.
This line falls into the radio domain, which can be observed by radio telescopes.
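As a quick sanity check (my own illustration, not part of the original answer), the "21 cm" figure follows directly from the measured hyperfine transition frequency:

```python
# Recover the 21 cm wavelength and photon energy from the hyperfine
# (spin-flip) frequency of neutral hydrogen.
C = 299_792_458.0            # speed of light, m/s
F_HI = 1_420_405_751.768     # H I hyperfine frequency, Hz
H_PLANCK = 6.626_070_15e-34  # Planck constant, J*s
EV = 1.602_176_634e-19       # one electronvolt, J

wavelength_m = C / F_HI                    # ~0.211 m: the "21 cm line"
photon_energy_eV = H_PLANCK * F_HI / EV    # ~5.9e-6 eV, a tiny energy gap
```

The micro-eV photon energy is also why the transition is so rare for a single atom: the energy gap between the two spin states is minuscule.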



Since this radiation comes from the rotating arms, the Doppler shift of the line can be used to trace the distance of the emitting region.
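A minimal sketch of that step (mine, with an invented observed frequency): the shift of the line gives a line-of-sight velocity via the non-relativistic Doppler formula, which combined with a rotation model of the Galaxy locates the emitting gas.

```python
# v = c * (f_rest - f_obs) / f_rest   (positive = receding)
C_KM_S = 299_792.458     # speed of light, km/s
F_REST = 1420.405752     # rest frequency of the H I line, MHz

def radial_velocity_km_s(f_obs_mhz: float) -> float:
    """Line-of-sight velocity implied by an observed H I frequency."""
    return C_KM_S * (F_REST - f_obs_mhz) / F_REST

# Invented example: gas observed at 1420.0 MHz is receding at ~86 km/s.
v = radial_velocity_km_s(1420.0)
```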



And voilà, you have your map.



EDIT: Oops, I missed the previous links where the answer was already given.

terminology - Name of area close to Local Bubble?

I wonder if the area I am looking for is GSH 238+00+09.



In the color image quoted in my question, the galactic core lies beyond the top center. This would be 0° galactic longitude. Going counterclockwise, 238° is somewhere around Betelgeuse.



Here is a "[p]rojection of the Local Cavity onto the Galactic plane":



Figure 9 from Lallement et al. (2003)






But if you try to superimpose this image on the other one (I matched the positions of the Sun, the Ophiuchus clouds, and the Pleiades bubble; the Lupus cloud is a bit off), it seems as if the area marked in green were actually part of our Local Bubble! And GSH 238+00+09 then appears to be identical to the Orion-Eridanus superbubble:



Superimposed images



I'm sure I didn't match this correctly. Loop III, especially, would be in totally the wrong place. But I found an article that used the same black-and-white image to illustrate possible locations of Loop III, and the second alternative to the one shown here was in the upper right-hand corner, where it would then overlap with "Loop II" of the color image. But probably we should not worry too much about Loops II and III. The creator of the 3D Galaxy Map writes: "I haven't added Loop II, III or IV because I couldn't find any accurate information about their size and location." I'm sure he has been looking more thoroughly than I have, so I feel the mismatch of Loop III is not a problem.



But then the color image is from 1994, and understanding and cartography of our Galactic neighborhood have advanced quite a bit in the years between the two publications, so maybe the placement of the Local Bubble in the color map is only a rough equivalent, and the mismatch between the two images is not my fault but a result of scientific progress.



Here is another image (rotated 180° in relation to the color image from my question) of local loops, bubbles, and superbubbles, where it becomes clear that GSH 238+00+09 (termed "New Superbubble" (NSB) in this image) is not identical to the Orion-Eridanus-Superbubble:



Figure 8 from Heiles et al. (1998)






If you compare the scales of this last and my first color image, you will note that the color image falls completely inside the circle denoting the Local Bubble, supporting the thought that the green area lies within it (and not within GSH 238+00+09 or the Orion-Eridanus-Bubble). But this second image also supports the insight that the color image might not be that accurate, and that maybe the area whose name I am looking for does not exist in that shape at all.



In the following image (which shows Loop I as the Scorpius-Centaurus Shells and the Orion-Eridanus Loop as the Orion Shell) the red blot immediately rimward (lower right) of the Sun is the Taurus Dark Cloud. This is not from a scientific but a popular publication (just as my color image), so we should not expect it to be totally accurate, but it still supports the notion that the green area might be part of the Local Bubble.



Unnumbered 3rd figure from Frisch (2000)






Here is a final image, apparently a different rendering of the image from Lallement et al. (2003), the first image given in this answer. In this image the location of some important stars is marked. Most notably ε CMa, the scientific designation of the star marked with its Arabic name Adhara in the color image from my question, where it lies far out in the green area, still lies within the Local Bubble in this image, close to the edge.



Figure 3 from Sfeir et al. (1999)






Here I overlaid the previous image and the image from my question. I matched the Sun and Adhara. Other objects are still a bit off, but not much, and we get a good idea of where the Local Bubble ends.



Second overlay image





This thought is supported by the abstract of Heiles (1998), where it says: "[GSH 238+00+09] is at least partially responsible for the apparent extreme elongation of the Local Bubble toward l ~ 230° ..." (my emphasis).



I'll leave my first thoughts and the badly matched image in this answer, to document how I came to the final conclusion.



I'd welcome any thoughts and corrections of this attempt at an answer.







Friday, 30 May 2014

Minimally 2-vertex-connected graphs?

A class of "minimally 2-vertex-connected graphs" - that is, 2-vertex-connected graphs which have the property that removing any one vertex (and all incident edges) renders the graph no longer 2-connected - have come up in my research.



Dirac wrote a paper on "minimally 2-connected graphs" (G. A. Dirac, Minimally 2-connected graphs, J. Reine Angew. Math. 228 (1967), 204-216), which gives quite a detailed description of the structure of such graphs. However, in his sense, minimal 2-connectivity means that deleting any EDGE leaves a graph which is not 2-connected, which is not equivalent to the vertex-deletion property. Does anyone know anything about graphs with the latter property?



In the hope of stimulating some discussion, here is a wildly speculative and vague conjecture: The only graphs satisfying this property are simple cycles, and certain cycles with chords.
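For anyone who wants to experiment with the conjecture, here is a quick brute-force checker for the vertex-deletion property (my own sketch, standard library only). A graph is 2-vertex-connected iff it is connected, has at least 3 vertices, and has no cut vertex; it is "minimally" so in the question's sense if deleting any single vertex destroys 2-connectivity.

```python
from itertools import combinations

def is_connected(vertices, edges):
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v); adj[v].add(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return seen == set(vertices)

def is_2_connected(vertices, edges):
    if len(vertices) < 3 or not is_connected(vertices, edges):
        return False
    # no cut vertex: removing any single vertex leaves the rest connected
    return all(
        is_connected(vertices - {v}, [e for e in edges if v not in e])
        for v in vertices
    )

def is_minimally_2_vertex_connected(vertices, edges):
    if not is_2_connected(vertices, edges):
        return False
    # deleting any one vertex must destroy 2-connectivity
    return all(
        not is_2_connected(vertices - {v}, [e for e in edges if v not in e])
        for v in vertices
    )

# A 5-cycle qualifies (deleting any vertex leaves a path), while K4 does not
# (deleting any vertex leaves a triangle, which is still 2-connected).
C5 = ({0, 1, 2, 3, 4}, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
K4 = ({0, 1, 2, 3}, list(combinations(range(4), 2)))
```

This at least confirms the easy half of the conjecture on small examples: cycles always qualify, complete graphs never do.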

observation - When do Mercury/Venus reach greatest elevation at sunset/twilight for a given location?

On what day does Mercury reach its greatest elevation (in degrees from
the horizon) at sunset for a given location?



The obvious answer is the day of Mercury's greatest elongation from
the Sun, but, since the ecliptic is slanted with respect to the
horizon, I'm not convinced this is correct.



In other words, on the day after greatest elongation, Mercury's total
angular distance from the Sun will be smaller, but its vertical
distance in elevation (for a given location) at sunset might be
higher.



Same question for Venus, and for when the Sun is 6 degrees below the
horizon (i.e., civil twilight), and for sunrise/dawn.



I'm guessing the date might vary based on position (mostly latitude) since the ecliptic's slant varies at different locations.



I googled and found nothing. My (preliminary and possibly wrong)
experiments with Stellarium show that Mercury's elevation at sunset IS
higher 1-3 days after its greatest elongation, but by less than half a
degree.



So, it's possible that the date of greatest elongation is a close
enough approximation.

Is CMB Polarization simply the temperature gradient of the CMB?

Polarization of an electromagnetic wave refers to the extent to which the electric field (which is always perpendicular to the direction of wave motion) oscillates in one particular direction, rather than randomly in the plane perpendicular to the wave motion.



Complete linear polarization means the electric field oscillates to and fro along one particular axis.



Polarization of the CMB is produced by Thomson scattering from free electrons just before they combine with protons and the Universe becomes transparent to the CMB radiation.
The polarization occurs because the radiation field seen by the electrons is not uniform in space (not isotropic); it varies according to the direction the radiation is coming from. These anisotropies are the fingerprints of what went on in the early Universe and of the values of the cosmological parameters.



There is an excellent primer on this topic - http://background.uchicago.edu/~whu/intermediate/polar.html



The CMB is a blackbody spectrum at a temperature of 2.7 K. Differences in this temperature are seen on a variety of scales at levels of 30-60 microkelvin ($\mu$K). These are the "ripples" first seen by COBE, then by WMAP and Planck with increasing precision and sensitivity to smaller angular scales.
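A quick arithmetic check (mine): ripples of tens of microkelvin on a 2.7 K background are fractional fluctuations of a few parts in 100,000, which is why the CMB looks so astonishingly smooth at first glance.

```python
T_CMB = 2.725        # mean CMB temperature, K
ripple_K = 50e-6     # a typical ~50 microkelvin fluctuation

fractional = ripple_K / T_CMB   # ~2e-5, i.e. a few parts in 100,000
```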



Whereas expressing these spatial variations as differences in temperature seems straightforward, it is not so obvious why you would express polarization signals in the same terms.



In brief, what is going on here is that measuring a polarization signal actually means finding the difference between the intensities in two perpendicular polarization directions. Because these intensities are measured as changes of temperature in the bolometer arrays used as detectors, this is how the polarization results are expressed.
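A minimal sketch of that bookkeeping (mine, with invented numbers): each channel reports brightness as a temperature, and the linear-polarization (Stokes Q and U) signals are temperature differences between perpendicular directions.

```python
def stokes_q(t_x_uK: float, t_y_uK: float) -> float:
    """Q = difference of intensities along 0 and 90 degrees (microkelvin)."""
    return t_x_uK - t_y_uK

def stokes_u(t_45_uK: float, t_135_uK: float) -> float:
    """U = difference of intensities along 45 and 135 degrees (microkelvin)."""
    return t_45_uK - t_135_uK

# Invented example: a pixel whose two perpendicular channels differ by 3 uK
# is reported as a 3 uK polarization signal, an order of magnitude below
# the ~30-60 uK temperature anisotropies.
q = stokes_q(41.0, 38.0)
```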



The polarization anisotropies are expected to be an order of magnitude smaller than the CMB temperature variations. That's because repeated scattering from free electrons dilutes the signal, so only light that has been scattered once contributes.

Thursday, 29 May 2014

spectra - Invisible in visible light but visible in non-visible light?

While individual atoms or molecules can emit energy at discrete or narrow ranges of wavelengths, once you have larger chunks of matter (and they are above absolute zero) they will emit, at least, a continuous black body spectrum. While black body radiation can be observed at all wavelengths, far away from the peak wavelength it may be below our ability to detect its emission.
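To put a number on "far from the peak" (my own illustration): Wien's displacement law gives the peak of the blackbody spectrum, lambda_peak = b / T. A person at body temperature peaks deep in the infrared, i.e. dark to the eye but bright to a thermal camera, while the Sun peaks in the visible.

```python
WIEN_B = 2.897771955e-3   # Wien displacement constant, m*K

def peak_wavelength_m(temperature_K: float) -> float:
    """Wavelength at which a blackbody at this temperature emits most."""
    return WIEN_B / temperature_K

human = peak_wavelength_m(310.0)    # ~9.3 micrometres, mid-infrared
sun = peak_wavelength_m(5772.0)     # ~500 nanometres, visible light
```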

Wednesday, 28 May 2014

ag.algebraic geometry - Indexing the line bundles over a Grassmannian.

If your question is about complex line bundles up to algebraic isomorphism, such bundles are classified by [the group of divisors](http://en.wikipedia.org/wiki/Divisor_(algebraic_geometry)) for any algebraic variety. There is also only one definition of the group of divisors for smooth varieties, as in your case.



It's not hard to see that the group of divisors of $\mathbb{P}^n$ is generated by a hyperplane $H$. The key observation is that a variety defined by a homogeneous polynomial of degree $n$ is rationally equivalent to $n$ copies of $H$ (because you can deform the coefficients of the polynomial so that it becomes just $x_1^n = 0$).



Therefore the complex line bundles on any $\mathbb{P}^n$ are classified by the integers: $\mathcal{O}, \mathcal{O}(1), \mathcal{O}(2)$, etc.




Update: see also the answer by Evgeny who explains the same things as me, but better.




Grassmannians have a cell decomposition into Schubert cells where each cell is just an affine space $\mathbb{A}^l$. This particularly simple structure means that divisors up to rational equivalence will be the same as algebraic cycles up to rational equivalence, and the same as topological $H^2$. Moreover, it will be generated by cells in complex codimension one.



Now it's an exercise to find out from the definition of Schubert cells that there's only 1 cell with this property. To see it, fix a flag $(F_i)$, $F_i\subseteq V$. Then the cell is given by the conditions
$$\dim\,(V\cap F_{n-k-1+i}) = i \quad \text{for } 0\le i\le k$$ for subspaces $V$ of dimension $k$.



Therefore the bundles on $Gr(n, k)$ are also numbered by integers.
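For the record, the upshot can be written as (my notation; that the generator is the Plücker line bundle is a standard fact not spelled out above):

$$\operatorname{Pic}(Gr(n, k)) \cong \mathbb{Z}\cdot[\mathcal{O}(1)],$$

where $\mathcal{O}(1)$ is the restriction of the hyperplane bundle along the Plücker embedding $Gr(n, k)\hookrightarrow \mathbb{P}(\Lambda^k V)$.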

universe - Why would it matter to us if space is infinite versus being finite?

More specifically, what are some of the differences for contemporary sciences and humanity if there are multiple universes versus just one?



I ask because I'm writing a story and I want to put into perspective some of the implications such a difference would make on the sciences and on humanity. How would such a paradigm shift change the way reality is perceived? What would some of the implications be for the average person?



For example, obviously the way the big bang is interpreted would be vastly different. For example, if there are multiple universes, then maybe the background radiation so often cited as primary evidence for the big bang would be reinterpreted or, for example, maybe the big bang wouldn't be perceived as the starting point of "everything" anymore but just a gigantic explosion of sorts that has to happen every so often in certain pockets of space-time.



Maybe it would also change other theories of gravity that are otherwise thought to be very well understood, for example. Maybe it would change our understanding of time, I don't know. Like I said, I'm writing a story wherein I'd like to be able to put into perspective some of the "real-world" effects any such "series of discoveries" would have if they led to the strengthening of a multiverse hypothesis, such that it became an accepted theory. I'm going to need for my reader to appreciate why we would want to know such, aside from merely knowing it.



Thank you.

Tuesday, 27 May 2014

at.algebraic topology - motivation of surgery

This question has already been answered, but there's a tiny piece of intuition which I'd like to make explicit:



If you're thinking about a manifold in the PL world, surgery might look a bit arbitrary: why cut out and glue in those pieces and not others? Surgery's natural setting is the smooth world, where you're equipping a manifold with a Morse function $f\colon M\to \mathbb{R}$, and using information about critical points of $f$ to encode $M$.



It's actually a bit more involved than you might expect, but when you pass a critical point of $f$ you add a handle to $M$, and the boundary changes by surgery. For details, see the answers to this question.



So really, surgery isn't an a priori construction which somebody pulled from a hat; it is rather an operation which stems naturally and inevitably from Morse theory.

What is a White Hole?

Well, according to the article "Have we seen a white hole?" (Byrd, 2011), there are claims that scientists (specifically Retter and Heller, in "The Revival of White Holes as Small Bangs") may have detected a white hole; however, this claim seems to be shrouded in controversy, as this observation may actually have a more stellar explanation.



However, the article by Retter and Heller offers a hypothesis as to what white holes are; from the article linked before:




We thus suggest that the emergence of a white hole, which we name a ‘Small Bang’, is spontaneous – all the matter is ejected at a single pulse. Unlike black holes, white holes cannot be continuously observed rather their effect can only be detected around the event itself.




Which, if the authors are correct, would explain why we don't see/detect them. White holes (unlike black holes) are unique because they supposedly spew out matter rather than extract it.



One major caveat is that it is not clear where this matter that spews out comes from.

Monday, 26 May 2014

co.combinatorics - Is Soergel's proof of Kazhdan-Lusztig positivity for Weyl groups independent of other proofs?

Perhaps I can supplement Jim's answer a little.



In the paper "Kazhdan-Lusztig-Polynome und unzerlegbare Bimoduln uber Polynomringen" Soergel shows that there are certain graded indecomposable bimodules over a polynomial ring (now known as Soergel bimodules) which categorify the Hecke algebra. (Note that these are not projective!!)



That is, the indecomposable objects are classified (up to shifts and isomorphism) by the Weyl group, and one has an isomorphism between the Hecke algebra and the split Grothendieck group of the category of Soergel bimodules.



As a consequence, one obtains a basis for the Hecke algebra which is positive in the standard basis (as follows from the construction of the isomorphism of the Grothendieck group with the Hecke algebra) and has positive structure constants (because it is a categorification).



Soergel conjectures that this basis is in fact the Kazhdan-Lusztig basis, which would imply positivity in general.



Up until now there are only two cases in which one can verify Soergel's conjecture:



  • when one has some sort of geometry (in which case one can show that the indecomposable Soergel bimodules are the equivariant intersection cohomology of Schubert varieties, and then use old results of Kazhdan and Lusztig). This shows Soergel's conjecture for Coxeter groups associated to Kac-Moody groups (in particular finite and affine Weyl groups).

  • when the combinatorics is very simple (i.e. for dihedral groups (Soergel) or universal Coxeter groups (Fiebig, Libedinsky)).

Hence, up until now there are no examples where Soergel's conjecture has yielded positivity when it was not known by other means. Also note that in the vast majority of cases, the proof using Soergel bimodules is strictly more complicated than the geometry proof, as one needs an extra step to get from geometry to Soergel bimodules.



Soergel's conjecture would however have more far reaching consequences than a proof that Kazhdan-Lusztig polynomials have positive coefficients. For example it provides a natural "geometry" for arbitrary Coxeter groups. For example, generalising some sort of Soergel bimodules to complex reflection groups would yield a natural setting for the study of "spetses" (unipotent characters associated to complex reflection groups).



One should also note that Dyer has developed a very similar conjectural world associating commutative algebra categories to Coxeter groups. He instead considers modules over the dual nil Hecke ring (which is the analogue of the cohomology of a flag variety), and has many nice results and conjectures. (Much of his work considers more general orders than the Bruhat order, and so will probably come in handy soon ...!)



While I am at it I should mention Peter Fiebig's theory of Braden-MacPherson sheaves on moment graphs. This is (in a sense made precise in one of Peter's papers) a local version of Soergel bimodules, and hence many questions become more natural on the moment graph.



Finally, one should mention the recent work of Elias-Khovanov and Libedinsky, which give generators and relations for the monoidal category of Soergel bimodules for certain Coxeter groups. (Elias-Khovanov in type A and Libedinsky in right-angled type.) These are very interesting results, but it is unclear to what extent they can be used to attack Soergel's conjecture.

Sunday, 25 May 2014

gn.general topology - Euler characteristics and operator indices as exponents for Laurent polynomials

This question is rather vague. Are there any natural situations which involve Laurent polynomials of the form
$$\sum_i q^{a_i}\in\mathbb{Z}[q,q^{-1}]$$
where the $a_i$'s are either Euler characteristics of some spaces (possibly all subspaces of one fixed space), or more generally, indices of some elliptic operators? I've stumbled across such a beast, but am unsure how to interpret it. I was thinking at first that it was an element of $K_{S^1}(\mathrm{pt})$ or something, but in that case the exponents are telling us about which $S^1$ representations show up in the appropriate bundles, not the indices of the operators, right? Is there some relation with the index? (Please tell me if I'm talking nonsense! I don't really know this K-theory stuff).



Maybe the answer I'm looking for doesn't involve K-theory, anyway. Does anyone have any ideas? I'd love to hear about any and everything!

set theory - Examples of inductive proofs that can be generalized by transfinite induction

0) In Kaplansky's Set Theory and Metric Spaces, he says something like "90% of the time Zorn's Lemma is more natural than transfinite induction. Here is an example of the other 10%." [I don't remember what he proves using transfinite induction -- something about infinite abelian groups, perhaps?]



In my adult mathematical life I have used transfinite induction twice.



1) In



http://math.uga.edu/~pete/ellipticded.pdf



I prove that every abelian group is, up to isomorphism, the Picard group of a ring of functions on some elliptic curve over some field (in particular, of some Dedekind domain). The proof uses transfinite induction. Bjorn Poonen explained to me how to remove transfinite induction from the argument. It seems that Bjorn's modified proof is the first proof of Claborn's theorem (every abelian group is, up to isomorphism, the Picard group of some Dedekind domain) that does not use the Axiom of Choice!



2) To show that if $L/K$ is a purely transcendental extension and $| |$ is a non-Archimedean norm on $K$, then it extends to (at least) one non-Archimedean norm $| |$
on $L$. This was an exercise in a course I am teaching this semester:



http://math.uga.edu/~pete/MATH8410Ex2.3Pete.pdf



I wrote it up mostly because I wanted to give a worked example of a proof by transfinite induction. Later, one of the students gave a proof using Zorn's Lemma that I thought was faster and simpler.



Thus in my experience, transfinite induction proofs are few, far between, and can probably be recast in other terms. Nevertheless sometimes transfinite induction is what one thinks to do first, and anyway it's fun to prove something in this way every once in a while.

moonlanding - Apollo Moon Landing

No, there's not an ounce (gram?) of truth to the claim that the Apollo moon landings were faked. The guys on Mythbusters did a good job of busting several of the so-called problems. The light on the astronaut descending the ladder on the lunar lander was light reflected from the white space suit of the astronaut snapping the picture; the shadows that appear to indicate different light sources are actually a matter of topography; the missing stars are because of basic photography (the foreground is so bright, the lens has to be stopped down to take the picture, thus not allowing the light from the stars to register on the film); and it is not actually possible to duplicate the 1/6th gravity by simply slowing down the playback speed. Oh... and let's not forget the hundreds of pounds of lunar rocks returned from several missions. Here's a link to Mythbusters: http://www.discovery.com/tv-shows/mythbusters/mythbusters-database/apollo-moon-landing-pictures-fake/

Saturday, 24 May 2014

cosmology - Can matter leave the cosmic horizon?

A little bit of caution is needed when approaching this subject, as there are several surfaces in cosmological solutions that can be interpreted (or misinterpreted in some cases) as horizons. Not all of them appear in all cosmological solutions, and in some cosmological solutions some of these surfaces are only present for part of the evolution of the solution.



Suppose you mean the particle horizon, which is the limit of the observable Universe. The cosmological redshift at the particle horizon is always infinite, when there is a particle horizon. No matter can ever leave the particle horizon (regardless of the cosmological model, as long as there is a particle horizon), as it expands radially outwards at a local speed of $c$ relative to any matter on the horizon (of course to us it expands even faster than $c$, as it is also receding due to expansion).
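For scale, here is a numerical sketch of where the particle horizon sits today (my own, assuming a flat LambdaCDM model with round-number parameters, which the answer above does not specify): the comoving particle horizon is $D = (c/H_0)\int_0^1 da/(a^2 E(a))$ with $E(a)=\sqrt{\Omega_m/a^3+\Omega_\Lambda}$.

```python
from math import sqrt

C_KM_S, H0 = 299_792.458, 70.0     # km/s and km/s/Mpc (assumed values)
OMEGA_M, OMEGA_L = 0.3, 0.7        # assumed density parameters

def E(a: float) -> float:
    return sqrt(OMEGA_M / a**3 + OMEGA_L)

def particle_horizon_mpc(steps: int = 200_000) -> float:
    # Midpoint rule on a in (0, 1]; the integrand behaves like a^(-1/2)
    # near a = 0, so the integral converges despite the apparent singularity.
    da = 1.0 / steps
    return (C_KM_S / H0) * sum(
        da / ((i + 0.5) * da) ** 2 / E((i + 0.5) * da) for i in range(steps)
    )

# Roughly 14,000 Mpc comoving, i.e. the familiar ~46 billion light years.
```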



An ever-expanding Universe with a cosmological constant approaches a state similar to de Sitter space at late times; however, de Sitter space itself is not a realistic cosmological solution, as it is devoid of matter. De Sitter space does not have a particle horizon, and for that matter it doesn't have any singularities either.



In de Sitter space the cosmic event horizon, which is the surface beyond which events happening now cannot affect us in the future provided we stay at the same spot (this should not be confused with the particle horizon), sits at a constant distance from the observer that is inversely proportional to the square root of the cosmological constant.
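Concretely (my own sketch, with an assumed observed value of the cosmological constant): the de Sitter event horizon sits at $r = \sqrt{3/\Lambda}$, hence the inverse square-root dependence.

```python
from math import sqrt

LAMBDA = 1.1e-52     # cosmological constant, 1/m^2 (approximate assumed value)
LY_M = 9.4607e15     # one light year in metres

r_horizon_m = sqrt(3.0 / LAMBDA)           # ~1.7e26 m
r_horizon_Gly = r_horizon_m / LY_M / 1e9   # ~17 billion light years
```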



In answer to your question then: de Sitter space is symmetric in time and space, which actually gives rise to seemingly different physical interpretations of it. However, the black hole information paradox is specifically the loss of unitarity in the time evolution of a black hole, and you have not justified why you think an equivalent thing should happen in de Sitter space. The loss of unitarity actually arises because the event horizon disappears, but (naively at least) the information behind it also disappears. In de Sitter space the event horizon remains, and even if it were to disappear by the cosmological constant miraculously going to zero, we would still find the information behind it, so there is no analog of the information paradox in de Sitter space.



To take another example, consider Rindler coordinates in Minkowski spacetime: there is a horizon (the Rindler horizon) and there is radiation (Unruh radiation), but there is no information paradox, as we know that unitarity is preserved in Minkowski spacetime, and when the Rindler horizon is made to disappear by selecting a different coordinate system, all the information that was behind the horizon is still there.

Friday, 23 May 2014

galaxy - Fate of the Spiral Arms of the Milky Way and Andromeda

It indeed appears the Andromeda galaxy (M31) and the Milky Way (MW) are en route to a collision. This will lead to a merger of the two galaxies, forming an elliptical galaxy. The flattened disc structure of M31 and the MW comes about because their gas dissipated energy but conserved angular momentum: a disc is the minimum-energy state at given angular momentum. The gas in the disc then forms stars, which are more or less on the same orbits as the gas, i.e. circling the respective galaxies. A disc galaxy is dynamically cold, i.e. the mean orbital speed of the stars around the galaxy is much larger than their random motions.



The merger destroys all these structures and replaces them with a much smoother and dynamically hot structure, in which the density is roughly constant on (concentric) triaxial ellipsoids. There are several types of stellar orbits in such galaxies, but most important are the so-called box orbits, in which the stars oscillate in a large box-like volume and have no preferred mean direction of motion. In the early phases, the elliptical galaxy will have some temporary structures (so-called shells), which are remnants of the merger process itself.



The gas in M31 and the MW will most likely be swept into the inner parts of the elliptical, where it may form many new stars and contribute to the feeding of the AGN which emerges from the coalescence of the two supermassive black holes of M31 and the MW. But energy fed back from the newly formed stars via supernovae and stellar winds, as well as from the AGN, will drive the remaining gas out of the elliptical galaxy, leaving it "red and dead", i.e. without star formation and young blue stars.



The merger of M31 and the MW will hardly increase the already negligible rate of stellar collisions. Given the vastness of galaxies compared to the size of stars, such collisions are extremely unlikely. The only places where such collisions occur are the very dense cores of globular clusters.

matter - How much larger would a star have to be to cause thermonuclear reactions if it was made out of mostly rock like Earth, instead of gases?

It really depends what you mean by "rock". At the temperatures and pressures at the cores of stars (and at which nuclear fusion reactions are possible), "rocks" as I suspect you are thinking of, do not exist.



Thermonuclear reactions do not occur because the gas is "flammable", they occur because the kinetic energies of the nuclei in the gas (at these temperatures, atoms are fully ionised) are sufficient to get them close enough together for nuclear fusion to take place.



"Rocks" are made of atoms of silicon and oxygen (for example) in the form if silicates. But these are dissociated at fairly low temperatures compared to the centres of stars. Oxygen and Silicon thermonuclear fusion ignition requires temperatures in excess of $10^9$ K, and these temperatures are only reached late in the lives of stars of mass $>8 M_{odot}$.



It is possible to have stars with solid cores. This is thought to be the fate of white dwarf stars as they cool. Most white dwarfs are made of a mixture of carbon and oxygen and this "crystallises" once the core of the white dwarf cools below about a few million degrees.



Ordinarily, the cores of white dwarfs are inert as far as nuclear reactions are concerned, because the temperatures are too cool. However, if the white dwarf is massive enough (or mass is added to it), then the central densities climb, and for white dwarfs of mass $sim 1.38M_{odot}$ it is thought that the densities become high enough ($sim 10^{13}$ kg/m$^3$) to start nuclear fusion of carbon via the zero point energy oscillations in the crystalline material (so-called pyconuclear reactions). Such reactions in degenerate matter are highly explosive and might result in the detonation of the whole star in a type Ia supernova.

Thursday, 22 May 2014

Will an observer falling into a black hole be able to witness any future events in the universe outside the black hole?


The normal presentation of these gravitational time dilation effects
can lead one to a mistaken conclusion. It is true that if an observer
(A) is stationary near the event horizon of a black hole, and a second
observer (B) is stationary at great distance from the event horizon,
then B will see A's clock to be ticking slow, and A will see B's clock
to be ticking fast. But if A falls down toward the event horizon
(eventually crossing it) while B remains stationary, then what each
sees is not as straightforward as the above situation suggests. As B
sees things: A falls toward the event horizon, photons from A take
longer and longer to climb out of the "gravtiational well" leading to
the apparent slowing down of A's clock as seen by B, and when A is at
the horizon, any photon emitted by A's clock takes (formally) an
infinite time to get out to B. Imagine that each person's clock emits
one photon for each tick of the clock, to make it easy to think about.
Thus, A appears to freeze, as seen by B, just as you say. However, A
has crossed the event horizon! It is only an illusion (literally an
"optical" illusion) that makes B think A never crosses the horizon.



As A sees things: A falls, and crosses the horizon (in perhaps a very
short time). A sees B's clock emitting photons, but A is rushing away
from B, and so never gets to collect more than a finite number of
those photons before crossing the event horizon. (If you wish, you can
think of this as due to a cancellation of the gravitational time
dilation by a doppler effect --- due to the motion of A away from B).
After crossing the event horizon, the photons coming in from above are
not easily sorted out by origin, so A cannot figure out how B's clock
continued to tick.



A finite number of photons were emitted by A before A crossed the
horizon, and a finite number of photons were emitted by B (and
collected by A) before A crossed the horizon.



You might ask: what if A were to be lowered ever so slowly toward the
event horizon? Yes, then the Doppler effect would not come into play,
UNTIL, at some practical limit, A got too close to the horizon and
would not be able to keep from falling in. Then A would only see a
finite total of photons from B (but now a larger number, covering
more of B's time). Of course, if A "hung on" long enough before
actually falling in, then A might see the future course of the
universe.



Bottom line: simply falling into a black hole won't give you a view of
the entire future of the universe. Black holes can exist without being
part of the final big crunch, and matter can fall into black holes.



For a very nice discussion of black holes for non-scientists, see Kip
Thorne's book: Black Holes and Time Warps.




From this source

solar system - Retrograde motion and Kuiper Belt Objects

As seen from Earth, planets such as Mars and Jupiter exhibit retrograde motion when they are near opposition (from Earth). I am wondering how this effect extends to very distant objects, such as those in the Kuiper Belt.



Since objects in the Kuiper Belt orbit the Sun at a velocity much slower than Earth's (or Mars's or Jupiter's, for that matter), because of Kepler's third law, I'd imagine that retrograde motion for these objects is more extreme, in the sense that these objects are seen to be in retrograde motion for a longer time and over a larger portion of the sky (in an angular sense) than would be seen for any of the planets.



In an extreme case, is it possible that a very distant Kuiper Belt Object (KBO) would never be observed in prograde motion? I imagine that if you were to observe a KBO that was in the direction of Earth's motion then there would be no contribution from the Earth's velocity to the observation, and you would be able to see the object in prograde motion, no matter how slowly it was travelling. However, such an object would likely be obscured by sunlight, since objects that lie in the direction of Earth's motion lie above the day/night interface on the Earth. My question is: given real-life observational constraints (like sunlight), how far from the Sun would an object have to be to never be seen in prograde motion? You can be mathematical.
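A quick numerical sketch (my own, not part of the question) of how the fraction of time spent in retrograde grows with distance, assuming circular, coplanar orbits and ignoring observational constraints such as sunlight:

```python
import numpy as np

def retrograde_fraction(a, n_steps=200000):
    """Fraction of time a circular, coplanar orbit of radius a (AU)
    appears in retrograde motion as seen from Earth (a_earth = 1 AU)."""
    # Kepler's third law: angular speed scales as a^(-3/2) (Earth's = 2*pi/yr)
    w_e, w_o = 2 * np.pi, 2 * np.pi * a ** -1.5
    t = np.linspace(0, 500, n_steps)              # years: many synodic periods
    xe, ye = np.cos(w_e * t), np.sin(w_e * t)
    xo, yo = a * np.cos(w_o * t), a * np.sin(w_o * t)
    lam = np.unwrap(np.arctan2(yo - ye, xo - xe))  # geocentric longitude
    return np.mean(np.diff(lam) < 0)               # retrograde = longitude decreasing

for a in (1.5, 5.2, 30.0, 45.0):                   # Mars, Jupiter, Neptune, a KBO
    print(a, retrograde_fraction(a))
```

The fraction climbs toward (but never reaches) one half as the object recedes, consistent with the intuition above: a very distant object's apparent motion is dominated by Earth's parallax, which is prograde and retrograde in roughly equal measure.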

Wednesday, 21 May 2014

orbit - Is there any point on earth where the moon stays below the horizon for an extended period of time?

Depends on the interpretation of your question...
The best places not to observe the Moon are the north and south poles. At the north pole you will only be able to see objects above the celestial equator. The Moon orbits the Earth in one month, and its orbit is inclined with respect to the celestial equator. This inclination is almost the same as the inclination of the ecliptic (the path of the Sun) with respect to the celestial equator. The ecliptic crosses the equator at two opposite points on the celestial sphere. This means that for about half its orbit the Sun, and likewise the Moon (whose orbit lies near the ecliptic), will be above the celestial equator and therefore visible from the north pole.



That being said, the Moon does not follow the ecliptic precisely: the Moon's orbit is inclined to the ecliptic by about 5°. The inclination of the ecliptic is about 23°, so under the least favourable circumstances the Moon's maximum altitude above the horizon at the north pole during a month will be only about 18° (23° − 5°). The Moon will then be above the horizon at the north pole for about 13.6 days (edit: see comments below).
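The arithmetic behind these figures can be sketched as follows (a rough calculation with round-number inclinations; at the pole, an object's altitude simply equals its declination):

```python
# At the pole, altitude above the horizon equals declination.  The Moon's
# monthly maximum declination lies between (obliquity - lunar inclination)
# and (obliquity + lunar inclination), depending on where its nodes sit.
obliquity = 23.4       # tilt of the ecliptic vs the celestial equator, degrees
lunar_incl = 5.1       # inclination of the Moon's orbit vs the ecliptic, degrees

best_case = obliquity + lunar_incl    # nodes aligned favourably, ~28.5 deg
worst_case = obliquity - lunar_incl   # nodes aligned unfavourably, ~18.3 deg
print(best_case, worst_case)
```

The 18° in the text corresponds to the unfavourable alignment; in the favourable one the Moon can climb to roughly 28° above the polar horizon.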



So if we interpret your question as: Is there a place where the Moon will be below the horizon for a long period of time (> 1 month), then the answer is NO.



But if this happens near June, then the Sun will also be above the horizon at the north pole (for six months), and as the Moon will be close to the Sun (as it follows more or less the ecliptic), it will be very hard to see the Moon during that time. So if you specifically ask whether the Moon will not be visible for an extended period of time, then the answer is YES.



And there are of course also places with perpetual cloud cover ;-)

Tuesday, 20 May 2014

at.algebraic topology - Motivation for algebraic K-theory?

Algebraic K-theory originated in classical material connecting class groups, unit groups and determinants, Brauer groups, and related things for rings of integers, fields, etc., and it includes a lot of local-to-global principles. But that's the original motivation and not the way the work in the field is currently going - from your question it seems like you're asking about a motivation for "higher" algebraic K-theory.



From the perspective of homotopy theory, algebraic K-theory has a certain universality. A category with a symmetric monoidal structure has a classifying space, or nerve, that precisely inherits a "coherent" multiplication (an $E_\infty$-space structure, to be exact), and such an object has a naturally associated group completion. This is the K-theory object of the category, and K-theory is in some sense the universal functor that takes a category with a symmetric monoidal structure and turns it into an additive structure. The K-theory of the category of finite sets captures stable homotopy groups of spheres. The K-theory of the category of vector spaces (with appropriately topologized spaces of endomorphisms) captures complex or real topological K-theory. The K-theory of certain categories associated to manifolds yields very sensitive information about differentiable structures.



One perspective on rings is that you should study them via their module categories, and algebraic K-theory is a universal thing that does this. The Q-construction and Waldhausen's S.-construction are souped up to include extra structure like universally turning a family of maps into equivalences, or universally splitting certain notions of exact sequence. But these are extra.



It's also applicable to dg-rings or structured ring spectra, and is one of the few ways we have to extract arithmetic data out of some of those.



And yes, it's very hard to compute, in some sense because it is universal. But it generalizes a lot of the phenomena that were useful in extracting arithmetic information from rings in the lower algebraic K-groups and so I think it's generally accepted as the "right" generalization.



This is all vague stuff but I hope I can at least make you feel that some of us study it not just because "it's there".

rt.representation theory - ADE type Dynkin diagrams

You might also take a look at Slodowy, P. (1983), Platonic Solids, Kleinian Singularities and Lie Groups, Lecture Notes in Mathematics, No. 1008, pp. 102–138,



Arnold's "Trinities" paper "Polymathematics: is mathematics a single science or a set of arts?",
easily found on the net



and



Chapoton's own trinities page (also found on the net).



The last two focus on the "E" part of ADE, but give long lists of intriguing parallels.

star - Where to get Polaris Right ascension value from

There are several published star catalogs that list right ascensions for stars, along with a lot of other almanac-type data. Wikipedia's a good resource for common stars (e.g. Polaris).



Right ascensions change over time, as stars move through space (and as the Earth's axis precesses). Star catalogs will have an epoch associated with them, which is the reference date for which the listed coordinates are valid.

Monday, 19 May 2014

ag.algebraic geometry - Possible formal smoothness mistake in EGA

EGA IV 17.1.6(i) states that formal smoothness is a source-local property. In other words, a map $X\to Y$ of schemes is formally smooth if and only if there is an open cover $U_i$ ($i\in I$) of $X$ such that each restriction $U_i\to Y$ is formally smooth.



It seems however that there is a gap in the proof. The problem is in the third paragraph on page 59 (Pub IHES v 32). The reference to (16.5.17) does not give the conclusion they need. Corollary (16.5.18) does give this conclusion, but it requires a finite presentation assumption. (So, everything is OK for smoothness instead of formal smoothness.)



Question 1: Can someone give a counterexample or a complete proof of 17.1.6(i)? (My bet would be that there's a counterexample.)



I think the right way of fixing this is to change the definition of formal smoothness. Recall that a map $X\to Y$ is said to be formally smooth if for any closed immersion $X'\to Y'$ of affine schemes defined by a square-zero ideal and for maps $X'\to X$ and $Y'\to Y$ making the induced square commute, there is a map $Y'\to X$ commuting with all the other maps in the diagram. I think a better definition would be to require only that the map $Y'\to X$ exists locally on $Y'$.



If I'm not mistaken, this definition has the following advantages over the old one: a) The definition of smoothness (= formally smooth and locally of finite presentation) would remain unchanged. b) It would make formal smoothness a source-local property. (Or if there is no counterexample to 17.1.6(i), then the argument with the new definition would be much easier than an argument like the one in EGA, in that it would not depend on the facts in scheme theory that the sheaf of lifts $Y'\to X$ is a torsor under a sheaf of derivations and that therefore, since $Y'$ is affine, there is always a global section.) In particular, it would probably be better suited for maps of general sheaves of sets on the big Zariski topology, rather than just schemes. c) It's a general rule of thumb that, in sheaf theory, it's easier to work with local existential quantifiers than global ones.



Question 2: Does anyone know of any reason why this new definition would be bad?

Saturday, 17 May 2014

pr.probability - Concentration of measure for gaussian inner products

If you have 2 standard Gaussians in $\mathbb{R}^n$, their inner product is the sum of $n$ i.i.d. variables, with their common distribution fixed (and having finite moments), so you will get convergence to the appropriate Gaussian distribution in line with the central limit theorem, with exponential bounds coming from Hoeffding's inequality, say. Do you need tight bounds, or are asymptotics enough?
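A quick simulation illustrating this CLT behaviour (a sketch of my own; the number of trials and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_products(n, trials=20000):
    """Sample <X, Y> for independent standard Gaussians X, Y in R^n."""
    x = rng.standard_normal((trials, n))
    y = rng.standard_normal((trials, n))
    return (x * y).sum(axis=1)

# Each coordinate product X_i * Y_i has mean 0 and variance 1, so by the
# CLT the inner product is approximately N(0, n) for large n.
for n in (10, 100, 1000):
    s = inner_products(n)
    print(n, s.mean(), s.var())   # empirical variance should be close to n
```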

gravity - Spacetime curvature illustration accuracy

Newtonian gravity of a point source can be described by a potential $\Phi = -\mu/r$. If we suppress one spatial dimension and use it to graph the value of this potential instead, we get something that looks very close to this illustration, and is indeed infinitely deep at the centre--at least, in the idealization of a point mass. And farther away from the centre, it goes flat, just as many illustrations like this have it.



Illustrations like this are fairly common, and I'm guessing that they're ultimately inspired by the Newtonian potential, because they have almost nothing to do with the spacetime curvature.



Here's an isometric embedding of the Schwarzschild geometry at an instant of Schwarzschild time, again with one dimension suppressed:



[Figure: isometric embedding of the Schwarzschild spatial geometry, the Flamm paraboloid, with the horizon as a red circle]



Above the horizon (red circle), the surface is a piece of a paraboloid (the Flamm paraboloid). Unlike the potential, it does not go flat at large distances.



Being isometric means that it correctly represents the spatial distances above the horizon at an instant of Schwarzschild time. Below the horizon, the embedding isn't technically accurate because the Schwarzschild radial coordinate does not represent space there, but time. Although if we pretend it's spacelike below the horizon, that would be the correct embedding. Picture the below-horizon part as having one-directional flow into the singularity.



Since we've only represented space and not time, the embedding is not enough to reconstruct the trajectories of particles in this spacetime. Still, it is a more accurate representation of a part of the spacetime curvature of point-source--specifically the spatial part.





The velocity of the object from this perspective, would seem to increase, until a point - where the velocity in x,y coordinates starts to decrease due to most of the motion happening "down" the time dimension. Is this also correct? Would a photon seem to slow down when moving down the well, if seen from above?




The above is an embedding of a slice of spatial geometry, and is not a gravity well. The mathematical form of the paraboloid above the horizon is best described in cylindrical coordinates as
$$r = 2M + \frac{z^2}{8M}\text{.}$$
Here the vertical $z$ coordinate doesn't mean anything physically. It's purely an artifact of creating a surface of the same intrinsic curvature in Euclidean $3$-space as the $2$-dimensional spatial slice of Schwarzschild geometry.
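For concreteness, here is a small sketch of the paraboloid formula above, inverted to give the embedding height $z(r)$ (geometric units with $G=c=1$; the value of `M` is an arbitrary choice of mine):

```python
import numpy as np

M = 1.0                        # geometric units, G = c = 1; horizon at r = 2M

def flamm_z(r):
    """Height of the Flamm paraboloid: z(r) = sqrt(8M(r - 2M)),
    the inverse of r = 2M + z^2/(8M) from the text."""
    return np.sqrt(8 * M * (r - 2 * M))

r = np.linspace(2 * M, 10 * M, 5)
print(np.round(flamm_z(r), 3))

# sanity check: plugging z back into r = 2M + z^2/(8M) recovers r
assert np.allclose(2 * M + flamm_z(r) ** 2 / (8 * M), r)
```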



For the Schwarzschild spacetime, radial freefall is actually exactly Newtonian in the Schwarzschild radial coordinate and proper time, i.e. time experienced by the freefalling object, rather than Schwarzschild time. So the Newtonian gravity well isn't actually a bad picture for the physics--it's just not the geometry, and so is not a good representation of how any part of spacetime is curved. For non-radial orbits, the effective potential is somewhat different from the Newtonian one, but ignoring the effects of angular momentum gets us the Newtonian form.



In Schwarzschild time, yes, a photon (or anything else) does slow down as it gets near the horizon. In fact, in Schwarzschild time it never reaches the horizon, which is one indication that the Schwarzschild coordinates are badly behaved at the horizon. The coordinate acceleration actually becomes repulsive close to the horizon--and for a fast enough infalling object, is always repulsive. This can be understood as the particle moving to places with more and more gravitational time dilation. In the proper time of any infalling observer, however, close to the horizon the acceleration is always attractive.

Thursday, 15 May 2014

quantum groups - An inner product that makes the R-matrix unitary

I'm pretty late to the party here, and Ben, it seems that you already have a satisfactory answer to your question, but I thought for the sake of completeness I would just post this in case anybody stumbles across this question and is still curious.



The key observation is that the eigenvalues of the braiding induced by the $R$-matrix are of the form $\pm q^{t}$, where $2t$ is a rational number in the image of the bilinear pairing of the weight lattice with itself. This follows from a result of Drinfeld which says that the square of the braiding acts as $\Delta(C^{-1})C \otimes C$, where $C$ is the quantum Casimir element. $C$ acts as the scalar $q^{(\lambda, \lambda + 2\rho)}$ in the irrep $V_{\lambda}$, and the statement about the eigenvalues of the braiding follows from this.



If you want the braiding to be unitary, and you want that statement to have some meaning in the category of representations of $U_q(\mathfrak{g})$, then you need a $*$-structure on the quantum group. There are three cases to consider: $q \in \mathbb{R}$; $q \in i\mathbb{R}$ and $\mathfrak{g}=\mathfrak{sp}(2n,\mathbb{C})$; and $|q|=1$.



The first two cases are ruled out because the eigenvalues of the braiding will not have modulus one, and hence the braiding cannot be unitary. This leaves only the case $|q|=1$.



If you want this inner product to be natural/compatible with the quantum group infrastructure, then it should probably be invariant under the action of $U_q(\mathfrak{g})$ in the sense that $\langle av, w \rangle = \langle v, a^*w\rangle$ for $a \in U_q(\mathfrak{g})$ and $v,w \in V \otimes V$. Of course, a $*$-structure on $U_q(\mathfrak{g})$ needs to be specified in order to make sense of this. For $|q|=1$ the standard choice is $K_i^* = K_i$, $E_i^*=-E_i$, $F_i^*=-F_i$, although this can also be modified by a diagram automorphism of $\mathfrak{g}$.



To find an invariant inner product on $V \otimes V$, one way is to take the tensor product of an invariant inner product on $V$ with itself (if $V$ is an irrep then this is unique up to a scalar multiple, if it exists). However, after a brief glance in Chari-Pressley, I'm not sure if the finite-dimensional irreps have an invariant inner product for the case $|q|=1$. I didn't do any further checking, but I would be interested to know if there are invariant inner products for irreps in that case.



Also, this doesn't rule out the possibility of inner products on $V \otimes V$ that do not come from inner products on $V$.



Summary of answer: the $R$-matrix cannot be made unitary in any inner product for $q \in \mathbb{R}$ or $q \in i\mathbb{R}$. For $|q|=1$ the situation is unclear to me.

Algebraic Geometry versus Complex Geometry

I am fairly certain that there is no complex analytic proof of the following theorem (but I would love to be proven wrong!). This is not strictly speaking an answer to the question, because the available proof is not exactly algebraic either; rather, it uses $p$-adic (analytic) methods.




Theorem. (Batyrev) Let $X$ and $Y$ be birational Calabi–Yau varieties (that is, smooth projective over $\mathbb{C}$ with $\Omega^n \cong \mathcal{O}$). Then $H^i(X,\mathbb{C}) \cong H^i(Y,\mathbb{C})$.




The same methods were later refined to prove the following theorem:




Theorem. (Ito) Let $X$ and $Y$ be birational smooth minimal models (that is, smooth projective over $\mathbb{C}$ with $\Omega^n$ nef). Then $h^{p,q}(X) = h^{p,q}(Y)$ for all $p,q$.




Again, the proof goes through $p$-adic analytic methods, this time combined with $p$-adic Hodge theory (which I think counts as an algebraic method).



References.



V. V. Batyrev, Birational Calabi–Yau $n$-folds have equal Betti numbers. arXiv:alg-geom/9710020



T. Ito, Birational smooth minimal models have equal Hodge numbers in all dimensions. arXiv:math/0209269

ag.algebraic geometry - Quasi-projective orbifolds and algebraic line bundles

The notion of quasi-projective orbifold is generally accepted to contain at least the following: let $X$ be a (simply-connected) complex manifold, $G$ a group acting on $X$ by biholomorphisms and containing a finite-index subgroup $G' \subset G$ acting freely on $X$ so that $X/G'$ is a quasi-projective variety. Then the orbifold $X/\!/G$ is quasi-projective.



If $G$ does not act effectively on $X$ (with $H \subset G$ the finite normal subgroup of elements which act trivially everywhere), it is "unlikely" that one can find a finite-index $G'$ acting freely. Suppose that $X/\!/(G/H)$ is a quasi-projective variety in the above sense: should $X/\!/G$ be considered quasi-projective?



Question 1: If $G$ does not act effectively on $X$, can $X/\!/G$ ever be considered a quasi-projective orbifold in any sense?



In the above case, an algebraic line bundle on $X/\!/G$ can be taken to be a $G$-equivariant holomorphic line bundle on $X$ such that the induced line bundle on $X/G'$ is algebraic.



Question 2: What should one take as the definition of an algebraic line bundle if $X/\!/G$ is not effective?

ag.algebraic geometry - What is etale descent?

Let $K/k$ be a finite separable extension (not necessarily galois) and $Y$ a quasi-projective variety over $K$.



The functor $k\text{-}\mathrm{Alg} \to \mathrm{Sets}: A \mapsto Y(A\otimes_k K)$ is representable by a quasi-projective $k$-scheme $Y_0=R_{K/k}(Y)$.
We have a functorial adjunction isomorphism
$\mathrm{Hom}_{k\text{-schemes}}(X,R_{K/k}(Y))=\mathrm{Hom}_{K\text{-schemes}}(X\otimes_k K,Y)$



and the $k$-scheme $Y_0=R_{K/k}(Y)$
is said to be obtained from the $K$-scheme $Y$ by Weil descent.
For example, if you quite modestly take $X=\operatorname{Spec}(k)$, you get
$(R_{K/k}(Y))(k)=Y_0(k)=Y(K)$, a formula that Buzzard quite rightfully mentions.
If $Y=G$ is an algebraic group over $K$, its Weil restriction $R_{K/k}(G)$ will be an algebraic group over $k$.



As the name says this is due (in a different language) to André Weil: The field of definition of a variety. Amer. J. Math. 78 (1956), 509–524.



Chapter 16 of Milne's online Algebraic Geometry book is a masterful exposition of descent theory, which will give you many properties of $(R_{K/k}(Y))(k)$ (with proofs), and the only reasonable thing for me to do is stop here and refer you to his wonderful notes.

pr.probability - Sampling from Sine Kernel and Airy Kernel

For a general algorithm for simulating points from a determinantal process, see Algorithm 18 in the paper "Determinantal Processes and Independence" by Hough, Krishnapur, Peres and Virag:



arXiv link



This algorithm was actually implemented by some physicists at Princeton (I believe) but I am not sure if their code is publicly available.



For the sine kernel, depending on how many points you want to sample, Matlab is pretty good at computing eigenvalues of a large GUE matrix in a decently short amount of time. That would require much less work than implementing the algorithm above.

nt.number theory - When is the extension defined by an Eisenstein polynomial galoisian or abelian or cyclic ?

Let $p$ be a prime number, $K$ a finite extension of $\mathbb{Q}_p$, $\mathfrak{o}$ its ring of integers, $\mathfrak{p}$ the unique maximal ideal of $\mathfrak{o}$, $k=\mathfrak{o}/\mathfrak{p}$ the residue field, and $q=\operatorname{Card} k$.



Recall that a polynomial $\varphi=T^n+c_{n-1}T^{n-1}+\cdots+c_1T+c_0$ ($n>0$) in $K[T]$ is said to be Eisenstein if $c_i\in\mathfrak{p}$ for $i\in[0,n[$ and if $c_0\notin\mathfrak{p}^2$.
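As an illustration, here is a small sketch checking the Eisenstein condition for monic integer polynomials at a rational prime $p$ (integers standing in for $\mathfrak{o}$, with $\mathfrak{p}=(p)$; the function name and representation are my own):

```python
def is_eisenstein(coeffs, p):
    """Check the Eisenstein condition at a prime p for a monic polynomial
    T^n + c_{n-1} T^{n-1} + ... + c_0 given by its lower coefficients
    coeffs = [c_0, c_1, ..., c_{n-1}]."""
    c0 = coeffs[0]
    return (all(c % p == 0 for c in coeffs)   # every c_i lies in the ideal (p)
            and c0 % (p * p) != 0)            # c_0 does not lie in (p)^2

# T^3 + 2T + 2 is Eisenstein at 2; T^2 + 4 is not (4 is divisible by 2^2)
print(is_eisenstein([2, 2, 0], 2))   # True
print(is_eisenstein([4, 0], 2))      # False
```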



Question. When is the extension $L_\varphi$ defined by $\varphi$ galoisian (resp. abelian, resp. cyclic) over $K$?



Background. Every Eisenstein polynomial $\varphi$ is irreducible, the extension $L_\varphi=K[T]/\varphi K[T]$ is totally ramified over $K$, and every root of $\varphi$ in $L_\varphi$ is a uniformiser of $L_\varphi$. There is a converse.



If the degree $n$ of $\varphi$ is prime to $p$, then the extension $L_\varphi|K$ is tamely ramified and can be defined by the polynomial $T^n-\pi$ for some uniformiser $\pi$ of $K$. Thus $L_\varphi|K$ is galoisian if and only if $n\mid q-1$, and, when such is the case, $L_\varphi|K$ is actually cyclic.



Real question. Is there a similar criterion, in case $n=p^m$ is a power of $p$, for deciding if $L_\varphi|K$ is galoisian (resp. abelian, resp. cyclic)?

What can be the analogue of Frobenius in complex geometry?

While this is really more of a non-answer to your question (I'm not really fond of such philosophical generalities), I think part of what sets complex geometry apart is the lack of a Frobenius.



If one wants to make use of the Frobenius in studying complex geometry, it is still possible to do so by reducing to characteristic $p$. For instance, the simplest proof of the next result uses this method (the following discussion is a summary from the beautiful book "Higher-Dimensional Algebraic Geometry" by Debarre).



Theorem. If $X$ is a smooth Fano variety (i.e. $-K_X$ is ample) over $\mathbb{C}$, then through any point of $X$ there is a rational curve.



It is relatively easy to prove this result in characteristic $p$ by making use of the Frobenius. For an outline, first fix a point $x\in X$. If you can produce a map $f:C\to X$ from a pointed curve $(C,c)$ taking $c$ to $x$ in such a way that this map can be deformed while holding $f(c)=x$ constant, then by Mori's bend-and-break lemma it is possible to find a rational curve through $x$. Now we can estimate the dimension of the space of such deformations, and it follows that if $$-K_X \cdot f_\ast C - n\, g(C) \geq 1 \qquad (n=\dim X)$$ then we will be done.



Now the question becomes: how can we produce a curve $C$ (or rather a map $f$) such that this inequality holds? If you play around in complex geometry, you will find that standard constructions, such as taking branched covers of a given curve, won't get you anywhere: if you manage to increase the $(-K_X)$-degree $-K_X\cdot f_\ast C$, then this will lead to an increase in the genus $g(C)$ as well, and you won't have made any progress towards proving the inequality.



However, in characteristic $p$, it is possible to increase $-K_X\cdot f_\ast C$ while keeping the genus of $C$ fixed! If we start from any random map $f:C\to X$ with $f(c)=x$, we can precompose it as many times as we wish with the Frobenius $\mathrm{Frob}:C\to C$. This has the effect of multiplying $-K_X\cdot f_\ast C$ by $p$, while the genus $g(C)$ remains constant since the domain curve is the same (although different as a $k$-scheme). Then since $-K_X$ is ample, it follows that if we compose with the Frobenius enough times our inequality will be satisfied.
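The degree-boosting arithmetic can be made concrete with a trivial sketch (illustrative only; the function and parameter names are my own, with `deg` standing for $-K_X\cdot f_\ast C$):

```python
def frobenius_iterations(deg, genus, n, p):
    """Smallest k with p^k * deg - n * genus >= 1: how many times one must
    precompose f with Frobenius before Mori's inequality from the text holds."""
    k = 0
    while p ** k * deg - n * genus < 1:
        k += 1
    return k

# e.g. a genus-2 curve of anticanonical degree 1 on a 3-fold in characteristic 5:
# 1 - 6 < 1 and 5 - 6 < 1, but 25 - 6 >= 1, so two Frobenius twists suffice
print(frobenius_iterations(deg=1, genus=2, n=3, p=5))   # 2
```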



Proving that the characteristic $p$ case implies the characteristic $0$ case is a relatively formal argument; I'll refer the reader to Debarre to see it.

Wednesday, 14 May 2014

soft question - Advice on changing topic for thesis problem

I say stick with the problem and learn to love it. Since mathematics is so interconnected I'm sure you'll find your way.



It would help to learn about the history and context of the problem. Why did people invent this big machine? How does it relate to cool things in math that you do like?



Suppose you were to change topic: how would this be different? You can always say something is too technical and not of interest. It's far better to take your topic and find a piece that does seem interesting and is not too technical (when looked at from the right angle).



[EDIT: Douglas S. Stones]: Doing a Ph.D. requires hard work over a long period of time. The typical candidate needs to meet fairly high expectations, without understanding what these expectations are (e.g. what makes a thesis well-written?). It's very easy to become demoralised (as I became at some points during my candidature). It's not going to be possible to complete a Ph.D. without determination. So I also recommend the stick-with-it approach (although I do not know the particulars of your situation).



Furthermore, there's nothing stopping a Ph.D. candidate from studying other topics alongside their thesis topic (in fact, I think this should be encouraged to some degree).

gr.group theory - Infinite direct products and derived subgroups

Suppose $G_1, G_2, \dots, G_n, \dots$ are groups (I use countable sequences, though the question is also applicable to uncountable collections of groups). Suppose $G$ is the unrestricted external direct product of the $G_i$s. What is the derived subgroup (commutator subgroup) of $G$?



I get two things:



  1. The derived subgroup of $G$ contains the restricted direct product of the derived subgroups of the $G_i$s, i.e., the set of elements with finitely many non-identity coordinates where each coordinate is in the derived subgroup of the corresponding group.

  2. The derived subgroup of $G$ is contained in the unrestricted direct product of the derived subgroups of the $G_i$s, i.e., the set of elements where all coordinates are in the derived subgroups of their respective groups.

A more precise characterization of the derived subgroup seems to be: the set of those elements for which there exists some finite number c (depending on the element) such that every coordinate can be expressed as a product of at most c commutators in its coordinate group $G_i$.



Is this characterization correct? Also, under what conditions does the derived subgroup become either of the two extremes (the restricted direct product, or the unrestricted direct product)?



Also, the above suggests that there could be examples where all the $G_i$s are perfect groups [ADDED: A perfect group is a group that equals its own derived subgroup; however, it is not necessary that every element be a commutator] but their unrestricted direct product is not a perfect group. Could somebody come up with an example of this sort?

ag.algebraic geometry - Is a 'generic' variety nonsingular? Or singular?

Executive summary: If you look at the whole Hilbert scheme associated to a given polynomial, the locus of points corresponding to nonsingular (which I take to mean smooth) subschemes can sometimes be very small in terms of dimension and number of irreducible components. So in this sense, most subschemes are singular.



Details: The Hilbert scheme $\operatorname{Hilb}^P_{\mathbf{P}^n}$ associated to a given Hilbert polynomial $P$ is connected (a theorem of Hartshorne), but in general it has many irreducible components, each with its own generic point. Thus there are several different "generic" closed subschemes with the same Hilbert polynomial, each a member of a different family.



The locus of points in the Hilbert scheme corresponding to smooth (= nonsingular) subschemes of $\mathbf{P}^n$ is a Zariski open subset, which implies that it is Zariski dense in the union of the components that it meets, but there are often other components of the Hilbert scheme all of whose points correspond to singular subschemes.



Because the Hilbert scheme need not have a single generic point, one might ask: How many of these generic points parametrize singular subschemes, and what are the dimensions of the corresponding components of the Hilbert scheme?



As a case study, consider the Hilbert scheme $H_{d,n}$ of $d$ points in $\mathbf{P}^n$, i.e., the case where $P$ is the constant polynomial $d$. Points of $H_{d,n}$ over a field $k$ correspond to $0$-dimensional subschemes $X \subseteq \mathbf{P}^n$ of length $d$, or in other words, such that $\dim_k \Gamma(X,\mathcal{O}_X) = d$. Each smooth $X$ with this Hilbert polynomial is a disjoint union of $d$ distinct points. These smooth $X$'s correspond to points of an irreducible subscheme of $H_{d,n}$, and the closure of this irreducible subscheme is a $dn$-dimensional irreducible component $R_{d,n}$ of $H_{d,n}$. Sometimes $H_{d,n}=R_{d,n}$, which means that every $X$ is smoothable. But for each fixed $n \ge 3$, Iarrobino observed that $\dim H_{d,n}$ grows much faster than $\dim R_{d,n}$ as $d \to \infty$. (He proved this by writing down large families of $0$-dimensional subschemes, like $\operatorname{Spec}\,(k[x_1,\ldots,x_n]/\mathfrak{m}^r)/V$, where $\mathfrak{m}=(x_1,\ldots,x_n)$ and $V$ ranges over subspaces of a fixed dimension in $\mathfrak{m}^{r-1}/\mathfrak{m}^r$.) This shows that $H_{d,n}$ is not irreducible for such $d$ and $n$, and that the "bad" components all of whose points parametrize singular subschemes can have much larger dimension than the one component in which a dense open subset of points parametrize smooth subschemes. With a little more work, one can show that the number of irreducible components of $H_{d,n}$ can be arbitrarily large (and as already remarked, the components themselves can have larger dimension than $R_{d,n}$). So in this sense, one could say that for $n \ge 3$, most $0$-dimensional subschemes in $\mathbf{P}^n$ are singular.



For more details about $H_{d,n}$, including explicit examples of nonsmoothable $0$-dimensional schemes, see the following articles and the references cited therein:



The moduli space of commutative algebras of finite rank



Hilbert schemes of 8 points



(Warning: my notation $H_{d,n}$ is different from the notation of those articles.)

observation - What uncertainty does an error bar signify in astronomy?

The most common way to represent uncertainty is with symmetric error bars around a central point. This is in turn commonly interpreted as a 95 % confidence interval, i.e., the actual data point is taken to be the centre of a Gaussian, and the error bars mark the interval containing 95 % of that Gaussian's probability.



This is only statistical uncertainty and is often not explicitly stated. One also refers to measurement and discovery with different confidence levels... discovery is commonly only claimed at a 5-sigma confidence level. I.e., if the measurement lies more than 5 standard deviations away from the theory or prediction, you've made a discovery.



Note, we are leaving out here systematic uncertainty and instrument bias, which can only increase the total uncertainty. Usually it is assumed that there is no correlation between them, so they are combined using the square root of the sum of their squares.
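
As an illustration (not part of the original answer; the numeric values are made up), the quadrature combination and the Gaussian tail probability behind the 5-sigma convention look like this:

```python
from math import erfc, sqrt

# Combine statistical and systematic uncertainties in quadrature
# (assumes they are uncorrelated, as stated above). Values are illustrative.
stat, syst = 0.03, 0.04
total = sqrt(stat**2 + syst**2)     # 0.05

# One-sided tail probability of a Gaussian beyond k standard deviations.
def tail_probability(k):
    return 0.5 * erfc(k / sqrt(2))

p5 = tail_probability(5)            # ~2.9e-7: the "5-sigma" discovery threshold
print(total, p5)
```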



Long story short - always ask what the error bars represent, especially if they look too "clean".

Tuesday, 13 May 2014

set theory - Sigma algebra without atoms?

In your second question, you are asking merely for an atomless Boolean algebra, of which there are numerous examples. One easy example, related to the one given on the Wikipedia page, is the collection of periodic subsets of the natural numbers $\mathbb{N}$. That is, the subsets $A\subset \mathbb{N}$ such that there is a finite set $a\subset k$ for some $k$ (identifying $k$ with $\{0,\ldots,k-1\}$) and $kn+m\in A$ if and only if $m\in a$, for $m<k$. This is closed under finite intersections, unions and complements, but has no atoms, since any nonempty periodic set can be made smaller by reducing it to a set with a larger period.
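
To make the example concrete, here is a small Python sketch (my own illustration) encoding a periodic set by a pair $(k, a)$; closure under intersection works because the lcm of two periods is a common period:

```python
from math import lcm

# A periodic subset of N is encoded as (k, a) with a a subset of {0,...,k-1}:
# the set contains m iff m % k is in a.
def member(pset, m):
    k, a = pset
    return m % k in a

def complement(pset):
    k, a = pset
    return (k, frozenset(range(k)) - a)

def intersect(p1, p2):
    # lcm(k1, k2) is a common period, so the intersection is again periodic.
    k = lcm(p1[0], p2[0])
    return (k, frozenset(m for m in range(k) if member(p1, m) and member(p2, m)))

evens = (2, frozenset({0}))
mult3 = (3, frozenset({0}))
both = intersect(evens, mult3)       # multiples of 6: period 6, residue {0}
print(both[0], sorted(both[1]))      # 6 [0]
```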



For your main question, a similar idea works, when generalized to the transfinite, to produce the desired atomless $\sigma$-algebra. Namely, consider the collection of periodic subsets of $\omega_1$, the first uncountable ordinal. This ordinal is bijective with the set of reals, if the Continuum Hypothesis holds, but in any case (under AC), it is bijective with a subset of the reals, so one can take the underlying set of points here to be a set of real numbers. By periodic here I mean the collection of sets $A\subset \omega_1$, such that there is a set $a\subset\alpha$ for some countable ordinal $\alpha$, such that $\alpha\cdot\beta+\xi\in A$ if and only if $\xi\in a$, where $\xi<\alpha$. Note that if $\alpha$ is fixed, every ordinal has a unique representation as $\alpha\cdot\beta+\xi$ for $\xi<\alpha$, since this is just dividing the ordinals into blocks of length $\alpha$. This means that $A$ consists of the pattern in $a$ repeated $\omega_1$ many times. This collection of sets is easily seen to be a Boolean algebra and atomless, but it is also $\sigma$-closed, since for any countably many such $A$, I can find a common countable period, because the collection of multiples of any fixed $\alpha$ forms a club subset of $\omega_1$. Thus, the intersection (or union) is again periodic, as desired. There are no atoms, since any nonempty periodic set can be made smaller by reducing it to have a larger period.



There are numerous atomless Boolean algebras arising in the forcing technique of set theory, used by Cohen to prove the independence of the Continuum Hypothesis and many others subsequently.

matrices - approximate matrix diagonalization algorithm

You might consider the "QR algorithm": given $A$, factor $A$ as $QR$ ($Q$ orthogonal and $R$ upper triangular), then let $A' = RQ$. Repeat with $A'$ as the new $A$, ad infinitum.



In a way, though, all implementable diagonalization algorithms are approximate, since it's impossible to diagonalize a general matrix in a finite number of elementary operations.
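
A minimal sketch of the iteration in Python with NumPy (my own illustration; this is the unshifted version, whereas practical implementations add shifts and a Hessenberg reduction for speed):

```python
import numpy as np

# Unshifted QR iteration: A -> RQ where A = QR.  For a symmetric matrix with
# eigenvalues of distinct absolute value, the iterates converge to a diagonal
# matrix whose entries are the eigenvalues.
def qr_iterate(A, steps=200):
    A = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return A

A = np.array([[2.0, 1.0], [1.0, 3.0]])
D = qr_iterate(A)
# Eigenvalues of A are (5 +/- sqrt(5))/2, i.e. about 3.618 and 1.382.
print(np.round(np.diag(D), 6))
```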

ag.algebraic geometry - degree of pull-back via F of an hyperplane vs degree of defining polynomials of F

Let X be a nice projective variety (say, smooth and defined over the complex numbers) embedded in some projective space, and let $F:X\to\mathbb{P}^n$ be a rational map given by $[f_0:f_1:\cdots: f_n]$ where the $f_i$ are coprime homogeneous polynomials of degree $d$ (of course, there is just one way to choose them satisfying this, up to constant factors). Consider $H$ the hyperplane divisor on $\mathbb{P}^n$, and $H'$ the hyperplane divisor on X.
It seems that $F^*(H)=dH'$, at least in this case. I have two questions about this situation:
(1) What is the name of this number? I couldn't find it anywhere.
(2) What is the relation between this number and the degree of the map when F is finite (if any)? One may think they are the same but the Segre embedding shows that this is not the case.
Thanks in advance.

big picture - Natural setting for characteristic classes?

Here is a perspective that might help to put characteristic classes into a more general framework. I like to think that there are two levels of the theory. One is geometric and the other is about extracting information about the geometry through algebraic invariants.
Bear with me if this sounds too elementary and obvious at first.



  1. The geometric side: We have some class of bundle type objects which admit a theory of classifying spaces. This allows us to swap bundles over $X$ for maps of $X$ into some fixed space, which I will call $B$ for the moment. Equivalent bundles over $X$ give equivalent maps to $B$.


  2. The algebraic side: We study maps from $X$ to $B$ by looking at their effect on some type of cohomology theory. The point is that we push the problem of studying maps $X to B$ forward into an algebraic category where we have a better hope of extracting information.


The passage from geometric to algebraic certainly throws some information away; this is the price for moving to a more computable setting. But in the right circumstances the information you want might still be available.



Now, a general framework for this might be the following. Bundles in the abstract are objects that are local over the base and can be glued together. This is precisely what stacks are meant to describe. So think of bundles simply as objects that are classified by maps of $X$ to some stack. This can make sense in any category where you have a notion of coverings (a Grothendieck topology), so we don't have to stick with just ordinary topological spaces here. If you know how to talk about coverings of chain complexes then you can probably make a chain level version. But more concretely, we could also be talking about principal $G$-bundles for just about any sort of a group $G$. Or we could talk about fibre bundles with fibre of some particular type (in my own work, surface bundles come up quite a lot).



As an aside, if you happen to be working with spaces and you want to get back to the usual setting of classifying spaces like grassmannians and $BO$ or $BU$ then there is a way to get there from a classifying stack. Take its homotopy type; i.e., if $B$ is a stack, then choose a space $U$ and a covering $U \to B$, then form the iterated pullbacks $U\times_B \cdots \times_B U$, which give a simplicial space; the realization of this simplicial space is the homotopy-theoretic classifying space.



Now, we have some class of bundle objects classified by a stack $B$. To have a "useful" theory of characteristic classes we need a cohomology theory in this category for which



  1. We can compute enough of the cohomology of $B$ and the map induced by $X to B$.

  2. Enough information is retained at the level of cohomology to tell us things we want to know about morphisms $X to B$.

It is very much an art to make a choice of cohomology theory that helps with the problem at hand.



I just want to point out that if you are working with vector bundles, then you needn't think of characteristic classes only as living in singular cohomology classes. A vector bundle represents a K-theory class, and you can think of that class as the K-theory characteristic class of the bundle.



Addendum: Just to say something about why we work with things like $BO$ instead of $BO(n)$, let me point out that it is a matter of putting things into the same place so we can compare them. Real rank $n$ vector bundles have classifying maps to $BO(n)$, and if you want to compare a map to $BO(n)$ with a map to $BO(m)$ then a natural thing to do is map them both to $BO(n+m)$. And then, why not go all the way to $BO(\infty)=BO$? It's just a matter of not having to compare apples and oranges.

gn.general topology - Something like Yoneda's lemma

This is inspired by The Whitehead for maps question.



Consider two maps $f, g\colon X\to Y$ which happen to induce the same maps (of discrete spaces) $[Z, X] \to [Z, Y]$ for every $Z$. Does this mean $f$ and $g$ are homotopic?



And what would be the lessons from the answer to this question? I feel like there's something interesting about the way we should ask it.

soft question - Digital Pen for Math: Your Experiences?

I have some experience with both the situation posed by the question and Matt Noonan's modification in the comments.



Note taking in seminars: At a recent conference, I took notes directly on to my computer using a graphics tablet in conjunction with the program xournal. I have gotten a little frustrated in the past at having stacks of notes from seminars that are virtually useless to me because I'm rubbish at organising them and finding them again. The main benefit of writing them directly on to my computer was that I could then add the files to my reference database (refbase) where I could store the meta-data in searchable form. To emphasise that point, it is the meta-data that is searchable, not the information from the original talk.



I found a few other side-benefits from this method. I was slightly faster in writing on a graphics tablet than on paper; I think that that is to do with a different posture and the fact that to go from looking at the computer screen to looking at the board is quicker than going from hunched over paper to looking at a board. This might not be the same for something like a digital pen, though. I think I was also faster because I was less bothered about what my notes looked like - when writing on real paper, I try not to waste paper so if the lecture is getting near the end, I'll cram the last bit on the small bit left of the current page rather than start a new one; on the computer, I just make the page bigger. It's also easier to correct things, and to add more space in the middle of something already written (when the lecturer goes back and adds yet more symbols to the diagram!).



The only drawback at that conference was when my computer ran out of battery power 5 minutes before the end of the seminar and I lost the whole lot because the program didn't have an auto-save feature! (I wrote one that evening - isn't open source wonderful! - but I see now that the latest version of xournal has auto-save anyway).



Another drawback is simply the amount of desk space that this system takes up. I couldn't do this at another recent conference because it was in one of those auditoria where there is a tiny side desk which forces you to write on postage stamps.



I did think of trying to combine LaTeXing seminar notes with a graphics tablet: the idea being to have a LaTeX document in to which you type the majority of the notes, but then overlay diagrams (or other annotations) with the pen. While I think that this would be feasible, the software needs a little tweaking for all of this to work seamlessly.



Incidentally, decent graphics tablets are extremely cheap (I got mine for about 35 quid), and it doesn't take much to get used to writing on a different place to where the text appears, so if you're not sure but think it's something you'd like to try, I'd recommend a simple graphics tablet over getting something more expensive in the first instance. Also, a graphics tablet is great if you want to put fancy animations in your talks.



Giving Seminars/Lectures Using a Beamer+Pen: I'm now in the course of my second course doing this. I prepare the lectures using LaTeX+beamer but when I give them then I have a graphics tablet to hand to make annotations as necessary. Sometimes I leave whole problems to be "done live" - by the students, that is. The first time through, I used my graphics tablet and xournal. This time, the lecture hall that I'm in has a PC with a "write-on screen". Unfortunately, it runs Windows but fortunately, there's jarnal. There is a "new page" facility which brings up a blank slide on which you can write whatsoever you like. After the lecture, I put the annotations on the web page for the students.



I should say that there are several reasons that I've switched from chalk to presentations for lecturing.



  1. I give better lectures/seminars when I use a computer than when I use chalk. This is (I think) because it forces me to prepare the whole seminar/lecture properly in advance and not think "I know how to do that" without carefully checking that I really do.


  2. Chalk dust irritates my skin.


  3. I teach in English but my students listen in Norwegian. If I gave the traditional "copy down everything I write on the board" lecture then the lag time while I waited for them to copy stuff down would be too great (I mean that the time that I assign for the students to catch up with copying down and be ready to listen to me again is significantly longer because they are reading, writing, and hearing in a non-native language). So I make the notes available in advance so that they have a baseline of what I'm going to say and can add to it as they feel the need.


  4. I can actually look at the students and see their reactions while I'm lecturing instead of spending half the time looking at the board. That means that I can be far more reactive in what I'm actually saying.


The main drawback is the increased preparation time. But, due to 1, I'm not sure how much of that is because it forces me to prepare properly and how much is due to the nature of the preparation. An additional benefit will be that I have much less preparation to do next time I teach those courses (I say "will" because I've yet to repeat one of these courses. Perhaps I should say "hope").



You can see what it all looks like by looking at the course webpages for these two courses, they are here for the completed course and here for the current course.



Note: I have lots of macros that interact nicely with beamer and make life considerably easier for myself. I also ended up hacking xournal a little to make it better for note-taking. Same again for refbase to make it suitable for mathematicians (it was written by geologists). I'm happy to share any of these modifications, of course.

Monday, 12 May 2014

gt.geometric topology - Freedman's work on non-simply-connected 4-manifolds

In the late 1970's and in the 1980's, Michael Freedman showed a relationship between the topological surgery problem in 4-dimensions, the slice problem for links, and the classification of non-simply-connected 4-manifolds. He also showed the failure of 4-dimensional homology surgery and the homology splitting theorem via a construction I don't really follow (because I haven't read and don't know the original reference, for one thing). I know of these results vaguely but do not understand them (certainly not the proofs). In particular, I don't know the "canonical" references, and MathSciNet doesn't seem to be helping me. They look like basic results in 4-manifold topology, which should surely be standard. Is there a survey paper on this stuff, or is it in some book? What is the best reference?
I should know this (or look it up myself), but perhaps it is more useful to post it here, because perhaps I am not alone in my confusion.

earth - Why do stars appear to twinkle?

Stars tend to twinkle for two main reasons: first, stars are very far away (the closest star is about 4 light-years from the Sun) and are therefore seen as point sources. Second, Earth has an atmosphere. Earth's atmosphere is turbulent, and therefore all images viewed through it tend to "swim". Therefore, sometimes a single point in "object space" is mapped to several points in "image space", and sometimes it is not mapped at all. Since stars are seen as single points, they sometimes seem brighter, and sometimes even seem to disappear.



If you look from another planet of our solar system, it will depend on that planet's own atmosphere. If you look at stars from Mars, the atmosphere being very thin, the stars won't twinkle that much. Same for Mercury. On Venus, the atmosphere is so thick that you won't see anything apart from the atmosphere itself (if you are not crushed by the atmospheric pressure first, by the way...).

comets - Where does the dust on 67P/Churyumov-Gerasimenko come from?

Your assumption that solar wind should have blown the dust away is only valid for comets that have a close perihelion. But you are correct in that there is a process which regenerates dust.



From Wikipedia




A comet will experience a range of diverse conditions as it traverses its orbit. For long period comets, most of the time it will be so far from the Sun that it will be too cold for evaporation of ices to occur. When it passes through the terrestrial planet region, evaporation will be rapid enough to blow away small grains, but the largest grains may resist entrainment and stay behind on the comet nucleus, beginning the formation of a dust layer. Near the Sun, the heating and evaporation rate will be so great, that no dust can be retained. Therefore, the thickness of dust layers covering the nuclei of a comet can indicate how closely and how often a comet's perihelion travels are to the Sun. If a comet has an accumulation of thick dust layers, it may have frequent perihelion passages that don't approach the Sun too closely.



A thick accumulation of dust layers might be a good description of all of the short period comets, as dust layers with thicknesses of order meters are thought to have accumulated on the surfaces of short period comet nuclei. The accumulation of dust layers over time would change the physical character of the short-period comet. A dust layer both inhibits the heating of the cometary ices by the Sun (the dust is impenetrable by sunlight and a poor conductor of heat), and slows the loss of gases from the nucleus below. A comet nucleus in an orbit typical of short period comets would quickly decrease its evaporation rate to the point that neither a coma or a tail would be detectable and might appear to astronomers as a low-albedo near-Earth asteroid.


career - Where are mathematics jobs advertised if not on mathjobs (e.g. in Europe and elsewhere)?

My impression is that in the US, there is a canonical place for finding math jobs, namely mathjobs.org. For those of us who live and apply for jobs elsewhere, life is more complicated, and searching for advertised academic mathematics jobs for example in Europe can be a real hassle, with loads of different sites, different systems, and some jobs apparently advertised only on the web page of the hiring institution, or on some obscure mailing list.



So, where are academic math jobs advertised when they for some reason are not or cannot be on mathjobs.org? Of course I know of a few such places, but I am sure there must be many more.



All answers welcome, this would help me and probably many others.

Sunday, 11 May 2014

observation - Why does the moon sometimes appear giant and an orange-red color near the horizon?

Harvest Moon (Source, Wikipedia Commons)



The moon is generally called a "Harvest Moon" when it appears that way (i.e. large and red) in autumn, amongst a few other names. There are other names that are associated with specific timeframes as well. The colour is due to atmospheric scattering (Also known as Rayleigh scattering):




You may have noticed that they always occur when the Sun or Moon is close to the horizon. If you think about it, sunlight or moonlight must travel through the maximum amount of atmosphere to get to your eyes when the Sun or Moon is on the horizon (remember that the atmosphere is a sphere around the Earth). So, you expect more blue light to be scattered from sunlight or moonlight when the Sun or Moon is on the horizon than when it is, say, overhead; this makes the object look redder.




As to the size, that is commonly referred to as the "Moon Illusion", which may be a combination of many factors. The most common explanation is that the frame of reference just tricks our brains. Also, if you look straight up, the perceived distance is much smaller to our brains than the distance to the horizon. We don't perceive the sky to be a hemispherical bowl over us, but rather a much more shallow bowl. Just ask anyone to point to the halfway point between the horizon and zenith, and you will see that the angle tends to be closer to 30 degrees as opposed to the 45 it should be.



University of Wisconsin discussion on the Moon Illusion.



NASA discussion on moon illusion.



A graphical representation of this:



Optical Illusion illustrated



Dr. Phil Plait discusses the illusion in detail.

ct.category theory - Must the left and right unitors of a monoidal category coincide at the neutral object?

The coherence conditions for a monoidal category ensure that if an isomorphism of two expressions follows from the built-in natural isomorphisms, then it is the unique such isomorphism. In particular, there is a unique isomorphism $I\otimes I \to I$ that can be constructed from the unitors and the associator alone. (There may be plenty of other isomorphisms $I\otimes I \to I$ in your particular monoidal category; there is a unique one in the free monoidal category.)



This is a general philosophy in $n$-category theory. A collection of "coherence" axioms is "good" if it implies that the space of choices is contractible. Recall that an $n$-category is contractible if it is nonempty, it is an $n$-groupoid, and for each object (it suffices to check at any particular object) the endomorphisms of that object are a contractible $(n-1)$-category. I.e. a contractible $n$-category is one that's $n$-equivalent to ${\rm pt}$. The theory of monoidal categories is an example of a theory with "good" coherence axioms: the space of isomorphisms that follow from the monoidal structure between any two objects is contractible. Other examples of "good" theories include the usual theory of (only weakly associative) 2-categories, and Lurie's theory of $(\infty,1)$-categories.



Unfortunately, I don't know what a "premonoidal category" is, so I don't know if its coherence axioms are good in the above sense.

Saturday, 10 May 2014

big list - Books you would like to see retranslated.

I nominate Felix Klein's Lectures on the Icosahedron and the Solution
of Equations of the Fifth Degree
as a book that deserves retranslation.
The present English translation was made in 1888, and it contains a lot of
archaic terminology, such as "permutable" for "commuting," "transformation"
for "conjugation," and "associates" for "conjugates." Also confusing, though
in principle a good idea, a normal subgroup is called "self-conjugate."



Best of all, a new edition would give an opportunity to
introduce some pictures, which are conspicuously absent from Klein's original text.

solar system - Finding the temperature of Earth from temperature of Mars and its distance from Sun

If you use no additional data, you assume that in all respects other than distance from the star the planets are the same. Then, assuming thermal equilibrium, the energy radiated by the planet is proportional to the stellar intensity at the planet's distance. Also assume that the Stefan–Boltzmann law applies to the energy radiated from the planets (so it's proportional to the fourth power of the temperature). The intensity of the stellar radiation at the planet's distance is inversely proportional to the square of the distance.



This gives us the equation:
$$
\frac{T_1^4}{T_2^4}=\frac{I_1}{I_2}=\frac{R_2^2}{R_1^2}
$$
which simplifies to:
$$
T_1=\sqrt{\frac{R_2}{R_1}}\,T_2
$$



Now using the data provided we get:
$$
T_1=\sqrt{1.524}\times 210 \approx 259\ \text{K}
$$
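
The arithmetic can be checked in a couple of lines (a sketch using only the numbers given in the question, with Earth's distance as the unit):

```python
# Equilibrium-temperature scaling: T goes as distance^(-1/2) under the
# assumptions above (everything equal except distance from the star).
T_mars = 210.0     # K, given
R_mars = 1.524     # Mars's orbital radius in AU (Earth = 1)

T_earth = (R_mars / 1.0) ** 0.5 * T_mars
print(round(T_earth, 1))   # ~259 K
```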



This is of course much less than the average temperature of the Earth, since not all else is the same in reality.



One of the significant differences is that the Earth has a more massive atmosphere than Mars, and it has a significant greenhouse effect on the temperature of the Earth. Also, the Earth has a higher albedo than Mars, which would reduce the equilibrium temperature from that calculated. I expect there are other factors that also contribute, but I will leave those to others to mention.

amateur observing - International Space Station

To a first-order approximation, the ISS (like all satellites) orbits in a fixed plane around the planet. As the orbital period is about 90 minutes, and observational conditions that allow viewing usually last longer than that, you might expect that if you could see it at one time, you would be able to again in the future.



There are two major effects that change the relationship of the orbit with respect to twilight locations on the earth. The first is the orbit of the earth around the sun. If the plane of the orbit were fixed, this would change where the orbit encountered twilight with a yearly cycle.



The stronger effect, though, is precession. At the altitude of the ISS, the non-spherical mass distribution of the Earth causes the orbital plane to precess (by about 5 degrees a day). So it takes a bit more than 2 months to precess all the way back around. Depending on your latitude, you should have two good periods of viewing during the full precession.
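
The quoted rate can be checked with the standard $J_2$ nodal-precession formula; the sketch below (my own illustration, with assumed ISS-like orbital elements) reproduces roughly 5 degrees per day:

```python
from math import sqrt, cos, radians, degrees

# J2 nodal precession for an assumed ISS-like orbit:
# a = 6778 km (about 400 km altitude), i = 51.6 deg, circular.
mu = 398600.4418          # km^3/s^2, Earth's gravitational parameter
J2 = 1.08263e-3           # Earth's oblateness coefficient
R_earth = 6378.137        # km, equatorial radius
a, inc = 6778.0, radians(51.6)

n = sqrt(mu / a**3)                                      # mean motion, rad/s
raan_rate = -1.5 * J2 * (R_earth / a)**2 * n * cos(inc)  # rad/s, westward
deg_per_day = degrees(abs(raan_rate)) * 86400
print(round(deg_per_day, 2))   # ~5 deg/day, matching the figure above
```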



This means that for periods of weeks or so, the ISS will be passing overhead during unobservable times (middle of the day/middle of the night). As the orbit continues to precess, it will begin to pass overhead closer to twilight and you'll have opportunities for viewing based on the specifics of the orbit.

von neumann algebras - What is the relationship between algebraic geometry and quantum mechanics?

There is a very nice paper by Martin Schlichenmaier available at arXiv math/000528, where the author discusses the relationship between quantization on Kaehler manifolds and projective geometry. Basically, a (quantizable) Kaehler manifold M can be embedded into a complex projective space by the Kodaira embedding theorem. The quantum Hilbert space becomes the projective coordinate ring of M.

nt.number theory - A special integral polynomial

Yes, it is always possible.



First note that it suffices to construct a totally complex degree $2n$ extension $K/\mathbb{Q}$ whose Galois closure has Galois group $S_{2n}$. By the primitive element theorem, this extension is of the form $\mathbb{Q}[t]/(f(t))$ for some irreducible polynomial $f$, the minimal polynomial of an algebraic number $\alpha \in K$. Then there exists a positive integer $N$ such that $N\alpha$ is an algebraic integer: take the minimal polynomial of that algebraic integer; it generates the same field extension.



To construct the desired extension $K$, in turn it suffices to find an irreducible polynomial with $\mathbb{Q}$-coefficients with no real roots and whose Galois group is the largest possible, $S_{2n}$. This is possible by a weak approximation / Krasner's Lemma argument. I will just sketch it for now; I can fill in more details if needed. The idea is to find a finite set of primes $p$ and degree $2n$ polynomials $f_p$ such that the Galois group of $f_p$, as a group of permutations on the roots of $f_p$, is of a certain form (e.g. contains a specific transposition). Also let $f_{\infty}$ be any degree $2n$ polynomial over $\mathbb{R}$ without real roots. Then by Krasner's Lemma, there exists a polynomial $f$ which is sufficiently $p$-adically close to each $f_p$ and to $f_{\infty}$ to have the same local behavior: in particular, to factor the same way over $\mathbb{Q}_p$ and over $\mathbb{R}$ and to generate the same local Galois groups. Then, by identifying the local Galois groups with decomposition groups at $p$ (of unramified extensions), if one has enough primes so as to get permutations of every possible cycle type, then the global Galois group of $f$ certainly must be $S_{2n}$. Indeed, to see this we use the following result from lecture notes of Keith Conrad (and Bertrand's postulate!):



http://www.math.uconn.edu/~kconrad/blurbs/galoistheory/galoisSnAn.pdf



Theorem: For $n \geq 2$, a transitive subgroup of $S_n$ which contains a transposition and a $p$-cycle for some prime $p > \frac{n}{2}$ is $S_n$.



The condition at infinity means that $\mathbb{Q}[t]/(f(t))$ is totally complex, hence so is its splitting field. To ensure that $f$ is irreducible, we may apply Krasner's Lemma again and take its coefficients sufficiently close to those of an irreducible degree $2n$ polynomial over $\mathbb{Q}_p$ (for a different $p$ from those used thus far) so as to be irreducible over $\mathbb{Q}_p$, which implies irreducibility over $\mathbb{Q}$.



This can in principle be made explicit, but I might search the literature for a known classical family of polynomials doing what you want before I tried to carry out this construction explicitly.