Tuesday, 31 December 2013

rt.representation theory - Nicest coset representatives of the symplectic group in the general linear group

I suppose that "nice" is very much application-dependent, but let me give it a try.



The first thing to notice is that, of course, you will not be able to find a global coset representative, since the principal bundle
$$\mathrm{Sp}(2k,\mathbb{C}) \to \mathrm{GL}(2k,\mathbb{C}) \to M = \mathrm{GL}(2k,\mathbb{C})/\mathrm{Sp}(2k,\mathbb{C})$$
is not trivial and hence has no global (continuous) section. So the best you can do is find a section over some $U \subset M$.



The subgroup $\mathrm{Sp}(2k,\mathbb{C})$ is the stabilizer of a symplectic structure $\Omega$ on $\mathbb{C}^{2k}$. A convenient choice is
$$\Omega = \begin{pmatrix} 0 & -\mathbf{1} \\ \mathbf{1} & 0 \end{pmatrix}$$
where $\mathbf{1}$ is the $k\times k$ identity matrix.



The Lie algebra $\mathfrak{sp}(2k,\mathbb{C})$ consists of those $2k \times 2k$ complex matrices $X$ such that $\Omega X$ is symmetric. A complementary vector subspace of $\mathfrak{sp}(2k,\mathbb{C})$ in $\mathfrak{gl}(2k,\mathbb{C})$ is given by
$$\mathfrak{sp}(2k,\mathbb{C})^\perp := \bigl\lbrace \Omega X \mid X \in \mathfrak{so}(2k,\mathbb{C})\bigr\rbrace$$



Explicitly, and for our choice of $\Omega$ above, this subspace consists of the matrices of the form
$$\begin{pmatrix} B^t & C \\ A & B \end{pmatrix}$$
for $k\times k$ matrices $A,B,C$ with $A$ and $C$ skew-symmetric.



You can now exponentiate these matrices to find a coset representative. Depending on the calculation, though, you might find it easier to write the coset representative as a product of exponentials.
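As a quick sanity check on this block form, here is a NumPy sketch (my own illustration, not from the answer) that builds $\Omega$, takes a random real skew-symmetric $X$, and verifies that $\Omega X$ has the stated shape:

```python
import numpy as np

k = 3
rng = np.random.default_rng(0)
I = np.eye(k)
# Symplectic form Omega = [[0, -1], [1, 0]] in k x k blocks.
Omega = np.block([[np.zeros((k, k)), -I], [I, np.zeros((k, k))]])

# A random element of so(2k): X skew-symmetric (real, for simplicity).
M = rng.standard_normal((2 * k, 2 * k))
X = M - M.T

# Candidate complement element Y = Omega X.
Y = Omega @ X

# Omega Y = Omega^2 X = -X is skew-symmetric, so Y is not in sp(2k) unless X = 0.
assert np.allclose(Omega @ Y, -X)

# Check the block shape [[B^t, C], [A, B]] with A, C skew-symmetric.
TL, TR = Y[:k, :k], Y[:k, k:]
BL, BR = Y[k:, :k], Y[k:, k:]
assert np.allclose(TL, BR.T)     # top-left block equals B^t
assert np.allclose(TR, -TR.T)    # C is skew-symmetric
assert np.allclose(BL, -BL.T)    # A is skew-symmetric
```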

ct.category theory - "Linked List" puzzle

Suppose $E$ is a topos, and consider the operations $0,1,+,\times$ (denoting the initial and terminal objects, the coproduct, and the product), and recall that $E$ satisfies the usual arithmetic laws, such as the distributive law.



For the unfamiliar, one should think of objects in $E$ as sets, but of course they don't have to be. An "element" of a "set" $A$ means a map $B\rightarrow A$ for some object $B$. If $B=1$, we say the element $1\rightarrow A$ is a unit element of $A$, and you can think of these instead if you prefer, but note that the "generalized" elements $B\rightarrow A$ of an object $A$ completely define $A$, whereas the unit ones don't.



Suppose $A$ is an object of the topos $E$. A "linked list of type A" is a new object $L$ of the topos in which an element is either the word "null" (meaning empty list) or a pair $(a,l)$ where $a$ is an element of $A$ and $l$ is an element of $L$. Equationally:



(*) $L = 1 + A\times L$.



By this point, you should understand these things. OK, now I'm going to tell you the weird puzzle. Suppose we want to "solve" for $L$, so that we can see what it really is. To do this, I'm going to cheat: in a topos, there is no such thing as subtraction or division.



$$L=1+A\times L$$
$$L-(A\times L)=1$$
$$L\times(1-A)=1$$
$$L=1/(1-A)$$
$$L=1 + A + A^2 + A^3 + \cdots$$



So a linked list of type $A$ is "either the empty list, or an element of $A$, or an ordered pair of elements of $A$, or a triple in $A$, etc."
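Over a finite $A$, the geometric-series answer can be checked by brute-force enumeration, identifying a linked list with a tuple (a toy Python sketch of my own):

```python
from itertools import product

A = ['a', 'b', 'c']   # a finite "type" A with |A| = 3
n = 5                 # truncate lists at length n

# L_n = all linked lists of type A with length <= n, built as tuples.
L = [t for i in range(n + 1) for t in product(A, repeat=i)]

# |L_n| matches the truncated geometric series 1 + |A| + |A|^2 + ... + |A|^n.
assert len(L) == sum(len(A) ** i for i in range(n + 1))

# The fixed-point equation L = 1 + A x L, checked one level down:
# a list of length <= n is "null" or a pair (head, tail of length <= n-1).
L_prev = [t for i in range(n) for t in product(A, repeat=i)]
assert len(L) == 1 + len(A) * len(L_prev)
```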



This faulty computation has led to the correct answer. Once this answer is found, one can check topos-theoretically that it is correct (although note that toposes aren't guaranteed to have infinite coproducts). My question is:



Q: where is this computation actually taking place?



Clearly, as the topos of finite sets is the background for the usual arithmetic of natural numbers, it stands to reason that one would generalize toposes the way the natural numbers were generalized to the rationals. Has this been done, or can it be? Can this calculation be made to make sense in some appropriate context?

graph theory - Degree constrained edge partitioning

I'm not sure about published references to this specific problem, but I'm pretty sure it can be solved in polynomial time via a reduction to minimum weight perfect matching, as follows.



Replace each vertex of degree $d$ by a complete bipartite graph $K_{d-2,d}$ where each of the original edges incident to the vertex becomes incident to one of the vertices on the $d$-vertex side of the bipartition. Set all edge weights in this complete bipartite graph to zero. Next, add a complete graph (with zero edge weights) connecting all of the vertices in the whole graph that are on the $(d-2)$-vertex sides of their bipartitions.



For each vertex $v$ in the original graph, a perfect matching in this modified graph has to include at least two of the original graph's edges incident to $v$, because at least two of the vertices on the $d$-vertex side of the complete bipartite graph that replaces $v$ are not matched within that complete bipartite graph. Because all the other edges have cost zero, the cost of the perfect matching is the same as the cost of the solution it leads to in the original graph.



On the other hand, whenever one has a subgraph of the original graph that includes at least two edges at each vertex, it can be completed to a perfect matching at no extra cost by matching the remaining vertices on the $d$-vertex sides of their bipartition to vertices on the $(d-2)$-vertex sides, and then using the complete graph edges to complete the matching among any remaining unused vertices on the $(d-2)$-vertex sides.



Therefore, the cost of the minimum weight perfect matching in this graph is the same as the cost of the optimal solution to your problem.
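To illustrate, here is a toy Python sketch of the gadget construction (the helper names and the brute-force matcher are my own; a real implementation would use a blossom-algorithm solver). It assumes the target problem is: find a minimum-weight edge subset with at least two chosen edges at every vertex.

```python
from itertools import combinations

def min_weight_deg2_subgraph_cost(n, edges):
    """Cost of the cheapest edge subset with degree >= 2 at every vertex,
    via the perfect-matching gadget described above.  The matcher here is
    a brute-force search, fine only for tiny examples."""
    deg = [0] * n
    for u, v, w in edges:
        deg[u] += 1
        deg[v] += 1
    nodes, idx = [], {}
    def node(x):
        if x not in idx:
            idx[x] = len(nodes)
            nodes.append(x)
        return idx[x]
    gedges, seen = [], [0] * n
    for u, v, w in edges:
        pu = node((u, 'p', seen[u])); seen[u] += 1
        pv = node((v, 'p', seen[v])); seen[v] += 1
        gedges.append((pu, pv, w))               # original edge keeps its weight
    inners = []
    for v in range(n):
        vq = [node((v, 'q', j)) for j in range(deg[v] - 2)]
        inners.extend(vq)
        for i in range(deg[v]):                  # zero-weight K_{d-2,d} gadget
            for q in vq:
                gedges.append((idx[(v, 'p', i)], q, 0))
    for a, b in combinations(inners, 2):         # zero-weight clique on inner nodes
        gedges.append((a, b, 0))
    N = len(nodes)
    adj = [[] for _ in range(N)]
    for a, b, w in gedges:
        adj[a].append((b, w))
        adj[b].append((a, w))
    best = [float('inf')]
    def solve(matched, cost):
        if cost >= best[0]:
            return
        a = next((i for i in range(N) if i not in matched), None)
        if a is None:                            # perfect matching found
            best[0] = cost
            return
        for b, w in adj[a]:
            if b not in matched:
                solve(matched | {a, b}, cost + w)
    solve(frozenset(), 0)
    return best[0]

# K4 with unit weights: the cheapest subgraph with min degree 2 is a 4-cycle.
k4_edges = [(0, 1, 1), (0, 2, 1), (0, 3, 1), (1, 2, 1), (1, 3, 1), (2, 3, 1)]
cost = min_weight_deg2_subgraph_cost(4, k4_edges)
assert cost == 4
```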



Added later: something like this appears to be in "An efficient reduction technique for degree-constrained subgraph and bidirected network flow problems", Hal Gabow, STOC 1983, doi:10.1145/800061.808776

Monday, 30 December 2013

photography - What causes the bright white area in this photo of the earth?

Taken on March 28 2012 by André Kuipers, the version here is labeled




Amazing aurora lights and atmospheric glare with the countries Ireland, Netherlands, Scotland, England and Wales in the foreground.




Off to the north and east like that, it's likely refracted early sunrise. That's a lot of streetlights for that time of the day; likely a longish exposure to capture aurora. That'd increase sunrise glare as well.

knot theory - Resources for graphical languages / Penrose notation / Feynman diagrams / birdtracks?

I've been working on a GUI for typesetting tensor/monoidal diagrams in TikZ.



http://tikzit.sourceforge.net/



It's especially geared at applications to quantum mechanics, namely "dot"-style diagrams of Frobenius algebras for complementary observables (Coecke, Duncan, arXiv:0906.4725) and entangled states (Coecke, me, arXiv:1002.2540).



Given Dave's already quite extensive list of what's out there on the monoidal side of things, I can only really refine what he's said.



Bob Coecke's short book (or long paper :-P) "Categories for the Practising Physicist" gives a pretty gentle buildup from physical principles, through Dirac notation for QM, to graphical notation, explaining some of the intuitions along the way.



http://web.comlab.ox.ac.uk/people/Bob.Coecke/ctfwp1_final.pdf



I found Ross Street's slides on Frobenius algebras to be a quick and easy (though sketchy) intro to the topic:



http://www.maths.mq.edu.au/~street/FAMC.pdf



The contemporary paper (Street. Frobenius monads and pseudomonoids. J. Math. Phys. (2004) vol. 45 (10) p. 3930) is very good, but considerably more technical, as it works in the language of higher categories.



** edit



I just realised that John's paper, "A Prehistory of n-Categorical Physics" hasn't been mentioned. This one puts the whole monoidal/graphical physics thing in a historical context starting from Maxwell and going through Feynman, Penrose, Mac Lane, Joyal, and all the other usual suspects. This is a long one, but it seems quite comprehensive.

Sunday, 29 December 2013

soft question - In studying maths, do you tend to go through books one at a time?

The choice of subject and level influences the answer, I think.



If you want to learn general topology [as in a thread in meta-MO, I claim this is the same thing as point-set topology but sounds less old-fashioned] from scratch, then yes, I think it is preferable to pick up a good book -- e.g. Munkres, Kelley, Willard [not Bourbaki, IMHO] -- and work steadily through it.



However, if you go farther in general topology, it does become beneficial to compare different sources. (I think the word "passively" in Case 2 above is put there to make this case sound bad. Comparing different treatments of the same subject and trying to figure out whether they are really different is a quite active process.) I myself decided a couple of years ago that I wanted to revisit general topology (which I hadn't thought about since I was a 19 year-old undergraduate), and it has been very helpful to me to compare different sources. For instance, in my study of convergence I was quite baffled by the fact that any one book I looked at took a side on "nets versus filters" and then vaguely indicated that whichever one they didn't choose resulted in an equivalent theory. Only by comparing several different sources (and some research articles) was I able to figure out what was going on to my satisfaction: see Section 6 of



http://math.uga.edu/~pete/convergence.pdf



for what I learned.



For a different subject, flipping around might be a better approach from the start. Indeed you might not have a choice: as you go on in your study of mathematics, you find that it is very often the case that there is no unique text that is squarely focused on what you want to know (the bright side of this is that it is very exciting when a text comes out serving this purpose whereas previously there was none, e.g. Silverman's Arithmetic of Elliptic Curves).



Of course I agree that to really learn something you have to spend some time exploring it linearly. E.g., in order to internalize (even moderately) complicated definitions, you need to work out some proofs in which these definitions appear. Flipping around for comparison is not going to help you if you don't already have some sense of what you're reading.

ra.rings and algebras - When are the units of R[x] exactly the units of R?

If $R$ is a commutative ring, then by the following result, the answer is "if and only if $R$ is reduced."




If $R$ is a commutative ring, then $a_0+a_1x+\cdots+a_nx^n\in R[x]$ is a unit if and only if $a_0$ is a unit in $R$ and $a_i$ is nilpotent for $i>0$.




Proof. One direction is easy. Any polynomial of the given form is a unit because the sum of a unit and a nilpotent element is always a unit.



The other direction isn't too hard if $R$ is a domain (the product of non-zero elements is always non-zero). If $g=b_0+\cdots+b_mx^m$ (with $b_m\neq 0$) is the inverse of $f=a_0+\cdots+a_nx^n$ (with $a_n\neq 0$), then the highest order term of $1=f\cdot g$ is $a_nb_mx^{n+m}$, so we must have $n=m=0$ and $a_0$ invertible (with inverse $b_0$).



For the general case, suppose $a_0+\cdots+a_nx^n$ is a unit. Reducing modulo $x$, we must get a unit in $R[x]/(x)\cong R$, so $a_0$ must be a unit. Reducing modulo any prime $\mathfrak{p}\subseteq R$, we get a unit in $(R/\mathfrak{p})[x]$. Since $R/\mathfrak{p}$ is a domain, the previous paragraph shows that $a_i\in \mathfrak{p}$ for all $i>0$ and all primes $\mathfrak{p}$. Since the intersection of all primes is the nilradical, each $a_i$ must be nilpotent.
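The easy direction is fun to check concretely: in $\mathbb{Z}/4$ the element $2$ is nilpotent, so $1+2x$ should be a unit in $(\mathbb{Z}/4)[x]$, and indeed it is its own inverse. A small Python sketch (my own illustration, not from the answer):

```python
def polymul_mod(f, g, n):
    """Multiply polynomials with coefficients in Z/n.
    Polynomials are coefficient lists, lowest degree first."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % n
    while len(h) > 1 and h[-1] == 0:   # drop trailing zero coefficients
        h.pop()
    return h

# In Z/4, 2 is nilpotent (2^2 = 0), so 1 + 2x is a unit in (Z/4)[x]:
# (1 + 2x)^2 = 1 + 4x + 4x^2 = 1 mod 4, i.e. it is its own inverse.
f = [1, 2]   # the polynomial 1 + 2x
assert polymul_mod(f, f, 4) == [1]
```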




A more "bare hands" elementary proof is given in Ex. 1.32 of Lam's Exercises in Classical Ring Theory. He also gives counterexamples to both implications if $R$ is not assumed commutative and mentions a really interesting related question. If $I\subseteq R$ is an ideal all of whose elements are nilpotent and $a_i\in I$, then does it follow that $1+a_1x+\cdots+a_nx^n$ is a unit in $R[x]$? If you can prove that it does, it would imply the Köthe conjecture, a famous problem in ring theory.

Saturday, 28 December 2013

soft question - What are good non-English languages for mathematicians to know?

Just a comment: there are thousands of excellent Chinese papers in journals which never get translated into English. Being able to read both is definitely good if you are willing to put work into your research.



[EDIT: Douglas S. Stones] I've been learning Chinese for roughly three-and-a-half years now and I feel it has been given an unfair treatment so far in this question. So I will add a few reasons why I have found learning Chinese valuable as an early-career mathematician.



a. There are many talented Chinese researchers that are looking for international collaborations (here's my first: http://portal.acm.org/citation.cfm?id=1734895).



b. Merely restating a result (that has been published only in Chinese) adds something to a paper that others can't easily match.



c. I can pronounce Chinese names at conferences without sounding like a goose to the Chinese speakers in the audience (e.g. "Wang"). These are some of the most common surnames on the planet.



d. In learning Chinese, you make your knowledge work for you -- you actively find papers in Chinese, etc.



e. There was an argument about traditional vs. simplified Chinese, i.e. not being standardised. It's pretty easy to switch between traditional and simplified (and pinyin) on a computer (I admit this becomes more complicated if you only have a hard copy or a scanned copy, but there are ways such as OCR or simply drawing the unknown character into appropriate software). For many characters, the traditional and simplified characters are quite similar. [Side note: If I redefined "English", "German", etc. to be "European language 1", "European language 2", etc. then "European" would not be standardised. Why would you want to learn "European language n"?]



f. There are native Chinese speakers everywhere!



That being said, learning Chinese (as with any language) requires a certain temperament and a long-term commitment. But it is easier now than it has ever been in the past thanks to online learning tools. My favourites are ChinesePod, Skritter, dict.cn, DimSum Chinese Reading Assistant (but there are plenty of others).

dg.differential geometry - Why should I prefer bundles to (surjective) submersions?

I hope this question isn't too open-ended for MO --- it's not my favorite type of question, but I do think there could be a good answer. I will happily CW the question if commenters want, but I also want answerers to pick up points for good answers, so...



Let $X,Y$ be smooth manifolds. A smooth map $f: Y \to X$ is a bundle if there exists a smooth manifold $F$ and a covering $U_i$ of $X$ such that for each $U_i$, there is a diffeomorphism $\phi_i : F\times U_i \overset{\sim}{\to} f^{-1}(U_i)$ that intertwines the projections to $U_i$. This isn't my favorite type of definition, because it demands existence of structure without any uniqueness, but I don't want to define $F,U_i,\phi_i$ as part of the data of the bundle, as then I'd have the wrong notion of morphism of bundles.



A definition I'm much happier with is of a submersion $f: Y \to X$, which is a smooth map such that for each $y\in Y$, the differential map ${\rm d}f|_y : {\rm T}_y Y \to {\rm T}_{f(y)}X$ is surjective. I'm under the impression that submersions have all sorts of nice properties. For example, preimages of points are embedded submanifolds (maybe preimages of embedded submanifolds are embedded submanifolds?).



So, I know various ways that submersions are nice. Any bundle is in particular a submersion, and the converse is true for proper submersions (a map is proper if the preimage of any compact set is compact), but of course in general there are many submersions that are not bundles (take any open subset of $\mathbb{R}^n$, for example, and project to a coordinate $\mathbb{R}^m$ with $m\leq n$). But in the work I've done, I haven't ever really needed more from a bundle than that it be a submersion. Then again, I tend to do very local things, thinking about formal neighborhoods of points and the like.



So, I'm wondering about applications where I really need to use a bundle --- where some important fact fails for general submersions (or surjective submersions with connected fibers, say).

Friday, 27 December 2013

If an object with mass were to somehow go the speed of light, would it destroy the whole universe?


Would an object with mass traveling the speed of light destroy the whole universe because it would have infinite energy / mass?




If we understand the question as a limiting process, which is the only way it makes any sense, the answer is no. For illustrative simplicity, take a spherically symmetric isolated body, so that its exterior gravitational field is the Schwarzschild spacetime. Now boost this to ultrarelativistic speeds. In the speed of light limit, the result is a linearly polarized axisymmetric gravitational pp-wave, the Aichelburg-Sexl ultraboost.



Its effect on test particles it passes is an impulse: it instantaneously bends the worldlines, but does not destroy anything. You can think of this as an infinite force acting for an infinitesimally short time, making the overall effect finite and nondestructive, somewhat analogously to a particle encountering a Dirac-delta potential in Newtonian physics.



This is similar to ultrarelativistic limit of a moving electric charge. The electric field lines are Lorentz-contracted along the direction of travel, squeezing them together (image credit: wikipedia):



[Images: electric field of a stationary charge vs. a moving charge]



In the ultrarelativistic limit, the electric field in the transverse direction becomes infinitely strong, but it is also Lorentz-contracted to be infinitely thin. Without ignoring the magnetic field, the result is an electromagnetic plane wave with a Dirac delta profile. The gravitational case is analogous, though quantitatively different. Some details on this can be found in arXiv:gr-qc/0110032.




My premise behind it possibly destroying the whole universe is that an object with infinite energy should also have infinite mass (E=mc^2, and infinity / c^2 is still infinity).




The concept of 'relativistic mass' is largely deprecated in modern physics. Mass is more properly related to energy and momentum via $(mc^2)^2 = E^2 - (pc)^2$.
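In units where $c = 1$ this reads $m^2 = E^2 - p^2$; a trivial numerical illustration (my own):

```python
# Invariant mass from energy and momentum, in units where c = 1:
# m^2 = E^2 - p^2.  E.g. E = 5, p = 3 gives m = 4 (a "3-4-5" particle).
E, p = 5.0, 3.0
m = (E**2 - p**2) ** 0.5
assert m == 4.0

# A photon (m = 0) has E = p, however large E gets -- no infinite mass involved.
E_photon = 4.0
assert (E_photon**2 - E_photon**2) ** 0.5 == 0.0
```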




One thought is that it could just form a black hole instead, ...




No, that's completely wrong. It doesn't become a black hole for the same reason it doesn't destroy the universe: the gravitating body moving relativistically past a stationary observer is physically equivalent to a relativistic observer moving past a stationary gravitating body. But the observer moving relativistically obviously does nothing to the gravitating body!

Why is liquid water considered a requirement for life?

Water molecules are bipolar, a bit like a bar magnet. For this reason they form non-covalent bonds with other molecules. This makes both the breaking and the making of strong covalent chemical bonds easier. Water is not only a fluid in the sense that things float in it; water also increases the chemical turnover, and this is important for biological processes where different molecules need to find each other.



The bipolarity of water helps the right amino acids find their place along an RNA molecule, simply by facilitating more tests per millisecond until the right one comes along and sticks. Methane, for example, is not a bipolar molecule. It does not facilitate chemical recombinations in the way water does. Life would have a hard time originating and surviving in liquid methane. There are of course other bipolar volatile molecules than water.



I don't really know what I'm talking about here, but I heard someone who seems to do so... And animations like this help my impression that bipolarity is important in order to make the right stuff come together sooner rather than later.

ancient history - Did the Tamil People discover that the earth was round 2000 years ago?

The logic in the question is flawed: a Tamil poet mentioned the spherical shape of the Earth, so Tamil people discovered it? The only thing you can safely conclude from this poem is that Tamil people knew about this fact 2000 years ago.



As Tamil Nadu is a small region of India, it is possible that someone elsewhere in India discovered this and the poet used the term long after the discovery. There are a lot of old languages in India. Tamil is of course one of them (as pointed out in the comments), but there are languages like Sanskrit which we need to consider as well. There are poems, stories and puranas in that language too, and some of them also mention the spherical shape of the Earth.



For example, in the Srimad Bhagavata (a holy book of Hindus), there is a clear statement about the shape of the Earth. It is called 'BHUGOLA' (BHU means earth, GOLA means sphere in Sanskrit). There is a separate chapter describing different planets including Earth. This is surely older than the mentioned poem 'Thiruvasakam'. Tamil people may well have discovered this, but other Indian people had an equal chance of finding it.
Also, in the Hindu puranas the Earth is considered the mother of humans and a goddess, and the people who wrote these puranas clearly knew that the Earth was spherical.



A Google search shows results for the specific chapter about the Earth; see here.



It's just an example; there are many other places where it is mentioned. Please keep in mind that these books are not specifically talking about the shape of the Earth; it is used like an ordinary term in some places. They are not claiming anything about who discovered things...



Also, FYI, please have a look at these claims too, which could not have been arrived at by common sense alone.



Two thousand years before Pythagoras, philosophers in northern India had understood that gravitation held the solar system together, and that therefore the sun, the most massive object, had to be at its center.



Twenty-four centuries before Isaac Newton, the Hindu Rig-Veda asserted that gravitation held the universe together.



The Sanskrit speaking Aryans subscribed to the idea of a spherical earth in an era when the Greeks believed in a flat one. The Indians of the fifth century A.D. calculated the age of the earth as 4.3 billion years; scientists in 19th century England were convinced it was 100 million years. Many questions, still to be asked.



Read about the book Srimad Bhagavata, http://en.wikipedia.org/wiki/Bhagavata_Purana
For basic additional info about other assertions: http://en.wikiquote.org/wiki/Vedic_science and simple google search

gn.general topology - Can topologies induce a metric?

Let $(X,T)$ be a topological space, with $T$ the set of open subsets of $X$.




Definition: Three points x, y, z of X are in relation N (Nxyz, read "x is nearer to y than to z") iff



  1. there is a basis B of T and b in B such that x and y are in b but z is not and

  2. there is no basis C of T and c in C such that x and z are in c but not y.


For some topologies there are no points x, y, z in relation N, for example if T = {Ø,X} or T = P(X), but for others there are (e.g. for ones induced by a metric [my claim]).
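If I read the definition right, then since $T$ is itself a basis and every member of any basis is open, clause 1 reduces to "some open set contains x and y but not z," and clause 2 to "no open set contains x and z but not y." Under that reading, the relation can be enumerated on a small finite topology (illustrative Python of my own):

```python
from itertools import product

X = {0, 1, 2}
# A small (non-metrizable) topology on X: the open sets are nested.
T = [set(), {0}, {0, 1}, {0, 1, 2}]

def N(x, y, z):
    # Clause 1: some open set contains x and y but not z.
    c1 = any(x in U and y in U and z not in U for U in T)
    # Clause 2: no open set contains x and z but not y.
    c2 = not any(x in U and z in U and y not in U for U in T)
    return c1 and c2

triples = [(x, y, z) for x, y, z in product(X, repeat=3) if N(x, y, z)]

# 0 is nearer to 1 than to 2: {0,1} is open and omits 2, while the only
# open set containing 2 (namely X itself) also contains 1.
assert (0, 1, 2) in triples
```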




Definition: A topology has property M1 iff



(x)(y) (((z) ((z ≠ x & z ≠ y) → Nxyz)) → x = y)



(This is an analogue of d(xy) = 0 → x = y, the best one I can imagine).




Definition: A topology has property M2 iff



(x)(y)(z) ((Nxyz & Nyzx) → Nzyx)



(This is a kind of an analogue of d(xy) = d(yx), the best one I can imagine)




First (bunch of) question(s):



  1. Properties M1 and M2 do not capture the whole of the corresponding conditions of a metric. Can anyone figure out "better" definitions (e.g. an analogue of x = y → d(xy) = 0)?


  2. Can anyone figure out a property M3 that is an analogue of the triangle inequality?


If it can be shown that no such property M3 is definable, the following becomes obsolete.



If such a definition can be made, we define:




Definition: A topology has property M (read "induces a metric") iff it has properties M1, M2, M3.




Second question:



Which topologies have property M, i.e. induce a metric? Are these "accidentally" exactly those that are induced by a metric?

redshift - Is Blue Shift just as provable as Red Shift?

The expansion of the Universe can be expressed using Hubble's law: the recession velocity that a galaxy has with respect to our own is proportional to its distance from us, i.e.
$$ v = H_0 D, $$
where $v$ is the recession velocity, $D$ is the distance and $H_0$ is the Hubble "constant", which has a value of about 70 km/s per megaparsec.



Hubble's law is attributed to the overall expansion of the universe and means that once we get further away than about 10 Mpc (or about 30 million light years), other galaxies recede from us with velocities of 700 km/s or more. This recession produces a redshift.
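Plugging in the numbers quoted above (a quick sketch of my own):

```python
# Hubble's law v = H0 * D with H0 ~ 70 km/s per Mpc.
H0 = 70.0          # km/s per Mpc
D = 10.0           # Mpc, roughly 30 million light years
v = H0 * D
assert v == 700.0  # km/s of recession, as quoted above
```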



However, superimposed on Hubble's law at a more local scale, individual galaxies are gravitationally interacting with each other and hence have their own peculiar motions with respect to the general "Hubble flow". In addition, don't forget our Sun rotates around the centre of our Galaxy with a velocity of some 230 km/s, so is heading in the direction of some galaxies and away from others.
These local peculiar motions mean that some nearby galaxies are moving towards us (at relatively modest speeds) and do exhibit a blueshift. An example would be the Andromeda galaxy at 2 million light years, which is approaching our Galaxy at about 110 km/s (or at a relative closing speed of 300 km/s with respect to the Sun).



But there are only about 50 galaxies in our local group, and a further few tens of galaxies in some other structures out to about 30 million light years, and it is only within this small sample that any can exhibit blueshifts. The vast majority of known galaxies are further away, swept along in the general universal expansion and hence away from us.

How can we detect water on Mars-like exoplanets?

It would be difficult to detect the water-by-weight in the dust of another planet to the precision that has been done for the Martian soil.



However, having said that, there are ways to detect the presence of water; one, according to "60 Billion Alien Planets Could Support Life, Study Suggests" (Gannon, 2013), is to detect the presence of clouds.



To confirm that the clouds are water clouds, and not something like iron vapour, according to the web article "Hunting for Water on ExoPlanets" (bolding mine),




Using ESO’s Very Large Telescope (VLT), a team of astronomers have been able to detect the telltale spectral fingerprint of water molecules in the atmosphere of a planet in orbit around another star. The discovery endorses a new technique that will let astronomers efficiently search for water on hundreds of worlds without the need for space-based telescopes.


the moon - Is there a pattern between the mass of a body and the mass of orbiting objects around it?

I was looking at Wikipedia's Solar System page, and it says that the Sun represents 99.86% of the whole Solar System's mass. I found that pretty huge.



So I calculated the ratio of masses: Earth / (Earth + Moon), and it's about 98.78%.



I did the same with Jupiter: Jupiter / (Jupiter + Io + Europa + Ganymede + Callisto), and it's about 99.97% (I ignored small satellites).



  • Why is the ratio so high in these 3 examples?

  • Is 1% a typical figure for satellites' mass?

  • Is it simple gravitational maths?

I understand that one body has to be much more massive for it to be a planet-satellite system; otherwise it's a double-planet system. But I would have thought 90% would be enough.
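For what it's worth, the question's arithmetic checks out against standard mass values (the figures below are approximate reference values, my own insertion):

```python
# Approximate masses in kg (standard reference values).
sun     = 1.989e30
earth   = 5.972e24
moon    = 7.342e22
jupiter = 1.898e27
galileans = 8.93e22 + 4.80e22 + 1.4819e23 + 1.0759e23  # Io, Europa, Ganymede, Callisto

earth_ratio   = earth / (earth + moon)
jupiter_ratio = jupiter / (jupiter + galileans)

assert 0.987 < earth_ratio < 0.989   # ~98.8%, as computed in the question
assert jupiter_ratio > 0.999         # ~99.98%, ignoring the small satellites
```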

Thursday, 26 December 2013

orbit - Where did this famous Planetary Precession Formula come from?

The following equation (which I shall term the Planetary Precession Formula, PPF for short) famously appeared in a 1915 publication by Einstein where he indicated how it could be derived from his General Theory of Relativity (GTR).



$$\epsilon = \frac{24\,\pi^3 a^2}{c^2 T^2(1-e^2)}$$



where $\epsilon$ is the (anomalous, non-Newtonian) angular precession per orbit, $a$ is the orbit semi-major axis, $c$ is the speed of light, $T$ is the orbital period, and $e$ is the orbital eccentricity.



The PPF formula accurately predicts the (anomalous, non-Newtonian) precession of Mercury and other Solar planets.



The formula was known in scientific circles well before 1915. For example Gerber (1898) derived it from his own (widely-derided) model of gravity. In the internet article Gerber's Gravity it is written that




It became a fairly popular activity in the 1890s for physicists to propose various gravitational potentials based on finite propagation speed in order to account for some or all of Mercury's orbital precession. Oppenheim published a review of these proposals in 1895. The typical result of such proposals is a predicted non-Newtonian advance of orbital perihelia per revolution of




$$k\,\frac{\pi\,m}{L\,c^2} = k\,\frac{4\,\pi^3 a^2}{c^2 T^2(1-e^2)}.$$



where $L = a(1 - e^2)$ is the semi-latus rectum of the ellipse, $m$ is a function of the angular speed $\omega$ of an orbiting planet: $m = a^3 \omega^2$ with $\omega = 2\pi/T$, and $k$ is a constant which might be derived from theory.



Clearly with $k = 6$ we get the PPF formula given above.
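Plugging Mercury's standard orbital elements into the PPF reproduces the famous ~43 arcseconds per century (a quick numerical check of my own; the element values are standard reference figures):

```python
import math

# Mercury's orbital elements (standard reference values; SI units).
a = 5.7909e10        # semi-major axis, m
e = 0.2056           # eccentricity
T = 87.969 * 86400   # orbital period, s
c = 2.998e8          # speed of light, m/s

# PPF: anomalous precession per orbit, in radians.
eps = 24 * math.pi**3 * a**2 / (c**2 * T**2 * (1 - e**2))

# Convert to the traditional arcseconds per century.
orbits_per_century = 100 * 365.25 * 86400 / T
arcsec = eps * orbits_per_century * (180 / math.pi) * 3600

assert abs(arcsec - 43.0) < 0.5   # the famous ~43"/century for Mercury
```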



I wish to know where the $k\pi m/(Lc^2)$ expression comes from. From the article it would appear to come from the 28-page review paper by Oppenheim, 1895, which is scanned here. I have been through the scans of this paper but without finding that equation explicitly (the paper is in German, which I know very poorly; Google Translate helps a bit but leaves a lot of ambiguity). It might be that the anonymous author of the article extracted the expression from a review of Oppenheim's paper or even the original (French & German) papers themselves, but he is not contactable. Maybe someone here is familiar with this era of astrophysical history and can point me in the right direction?

computational complexity - Does $EXP\neq ZPP$ imply sub-exponential simulation of BPP or NP?

By simulation I mean in the Impagliazzo-Wigderson [IW98] sense, i.e. sub-exponential deterministic simulation which appears correct i.o. (infinitely often) to every efficient adversary.



I think this is a proof:
If $EXP\neq BPP$, then from [IW98] we get that BPP has such a simulation.
Otherwise we have $EXP=BPP$, which implies $RP=NP$ and $EXP \subseteq PH$.
Now if $NP=RP=ZPP$, then $PH$ collapses to $ZPP$, and hence so does $EXP$; but this contradicts the assumption. So $RP\neq ZPP$, and by Kabanets' paper "Easiness assumptions and hardness tests: trading time for zero error" this implies that RP has such a simulation, and as a result so does NP.



This sounds like a basic result;
does anyone know if it appears anywhere?

lo.logic - Is there any proof assistant based on first-order logic?

Isabelle supports many different logics, and it has a formulation of first order logic which you may browse here: http://isabelle.in.tum.de/dist/library/FOL/index.html. However, even though proofs are natural deduction in flavor, it does not produce anything a logician would understand as a natural deduction derivation upon shallow inspection.



The automated theorem provers Prover9, E, SPASS and Vampire are all first order systems. They do not produce proofs using natural deduction (they are all typically resolution/paramodulation based systems).



It sounds like ProofWeb is exactly what you want. It provides a system for displaying the accompanying natural deduction/sequent calculus proof along with a computer-assisted formalization. It also has a really nice interactive interface for students, and provides the possibility of assigning exercises. On the other hand, I know that it has been largely developed for Coq, which is way, way more expressive than first-order logic. And even though I know that there is a development of set theory within Coq, I suspect modifying the system for basic set theory would be a nontrivial exercise.

Wednesday, 25 December 2013

solar system - Full Moon Position of Sun

I am trying to figure out the position of sun with respect to moon on a full moon day.



Here's an image from a certain wiki which I do not find convincing:
[image omitted: diagram of the Moon's phases]



If the sun's rays point as shown in the image and the Earth stands between the Moon and Sun on a full moon day, I don't see any reason why the Moon should be shining in its full glory.



Similarly when the moon is directly in between earth and sun, it should have maximum brightness but the image shows it as the new moon.



In fact, I cannot deduce how the Moon can receive light across the entire sphere at any given angle from the Sun.



Putting it specifically: what is the position of the Sun w.r.t. the Earth and Moon on a full moon day?
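For reference, the illuminated fraction of the lunar disc as seen from Earth depends only on the phase angle (the Sun-Moon-Earth angle), via the standard formula $(1 + \cos\alpha)/2$; at full moon the three bodies are nearly, but (because the Moon's orbit is tilted about 5 degrees to the ecliptic) almost never exactly, in line, so the phase angle is close to 0 and the near side is fully lit. A small sketch (the function name is mine):

```python
from math import cos, radians

def illuminated_fraction(phase_angle_deg):
    """Fraction of the lunar disc lit as seen from Earth, as a function
    of the phase angle (the Sun-Moon-Earth angle): ~0 deg at full moon,
    ~90 deg at first/last quarter, ~180 deg at new moon."""
    return (1 + cos(radians(phase_angle_deg))) / 2

full = illuminated_fraction(0)      # full moon
quarter = illuminated_fraction(90)  # half-lit disc
new = illuminated_fraction(180)     # new moon
```

The point is that "Earth between Sun and Moon" only means an eclipse when the alignment is exact; on a typical full moon the Moon passes above or below Earth's shadow.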

sheaf theory - Definition of sheaves in wikipedia

I think you're quite right; Wikipedia's "concrete definition" is only correct for concrete categories whose underlying-set functor is (not just faithful but) conservative, i.e. such that any morphism which is a bijection on underlying sets is an isomorphism in the category. The page does say that the concrete definition "applies to the most common examples such as sheaves of sets, abelian groups and rings," all of which have this property, but it ought to be fixed to make clear in exactly what situations this definition applies.



Secondly, I observe that the "normalisation" condition in the Wikipedia concrete definition is also odd. Since the empty set is covered by the empty family, the "local identity" and "gluing" conditions already imply that the underlying set of $F(\emptyset)$ is terminal. Requiring that $F(\emptyset)$ itself be terminal is a genuinely extra condition, which is in fact a special case of the second, more generally applicable, definition.



Thirdly, I think you're also right that for the correct general definition, the category doesn't need to have any limits a priori; you can just assert that $F(U)$ is the limit of the appropriate diagram of the $F(U_i)$ and $F(U_i \cap U_j)$.



Finally, let me go out on a limb and say that it seems to me that defining "sheaves with values in an arbitrary category" is often a misguided thing to do. More often, it seems like rather than "a sheaf with values in the category of X," the important notion is "an internal X in the category of sheaves of sets." For familiar cases such as groups, abelian groups, rings, small categories—in fact, for any finite limit theory—the two are the same, which may be what leads to the confusion. But the good notion of "sheaf of local rings," for instance, is not a sheaf with values in the category of local rings, but rather a sheaf of rings whose stalks are local (at least, when there are enough points), and that's the same as an internal local ring in the category of sheaves of sets. The situation is similar, I think, for "sheaves of topological spaces" (or locales). I'd be happy for people to point out where I'm wrong about this, though.

space probe - Is Rosetta changing the theory of comets?

Rosetta is now arriving at Comet 67P.



Other comets have been analyzed to some degree (Giotto, Deep Space 1, Deep Impact, etc.)



So my question is... where can I study the up-to-date theory of comets? Much of what was previously thought (e.g. the dirty snowball) has been discarded now, but I haven't found a good source describing the current thinking.



Are there any firm predictions that will be tested as Rosetta studies the comet up close?

computational complexity - Collapsing of exptime and alternation bounded turing machine


Suppose that $ATIME(C;j)=ATIME(C;j+1)$. Can we then prove that $\forall k>j$, $ATIME(C;j)=ATIME(C;k)$? Or at least, what conditions on $C$ suffice?




Yes, when $C$ is the class of polynomial functions of $n$, or the class of linear functions of $n$. First, note that your $ATIME(C;j)$ is typically written as $\Sigma_j TIME[C(n)]$. I will use this notation as it is more standard. Moreover, it distinguishes between the classes where the machine starts in a universal state instead of an existential one. The universal version is written as $\Pi_j TIME[C(n)]$.



Now, $\Sigma_{j+1} TIME[C(n)]=\Sigma_j TIME[C(n)]$ implies $\forall k>j$, $\Sigma_k TIME[C(n)]=\Sigma_j TIME[C(n)]$, when $C(n)$ is the class of all polynomial functions of $n$, or the class of linear functions of $n$. In fact, you can get away with the weaker assumption $\Pi_j TIME[C(n)] = \Sigma_j TIME[C(n)]$ in place of $\Sigma_{j+1} TIME[C(n)]=\Sigma_j TIME[C(n)]$. This follows from standard material in the chapters on alternation and the polynomial hierarchy of any complexity theory book. (You are probably well aware of this, but other readers may not be, so please bear with me.) For example, one result you may see in a complexity course is that $NP = coNP$ implies that the polynomial hierarchy collapses to $NP$. This is exactly the same as saying $\Pi_1 TIME[n^{O(1)}] = \Sigma_1 TIME[n^{O(1)}]$ implies $\bigcup_{k \geq 1} \Sigma_k TIME[n^{O(1)}] = \Sigma_1 TIME[n^{O(1)}]$, which is the case $j=1$ in your question.



For superpolynomial functions $C(n)$, one runs into trouble. Suppose $\Sigma_j TIME[2^{O(n)}] = \Sigma_{j+1} TIME[2^{O(n)}]$, and you want to show $\Sigma_{j+2} TIME[2^{O(n)}] \subseteq \Sigma_{j+1} TIME[2^{O(n)}]$. The usual way of doing this is to take a $\Sigma_{j+2}$ machine, and consider the language $L'$ of pairs $(x,y)$ with the property that, if I feed $x$ to the machine, and substitute the string $y$ in place of the guesses for the first existential mode, the remaining $\Pi_{j+1}$ computation accepts. $L'$ is in $\Pi_{j+1}$, and so you usually apply your assumption to conclude $L'$ is in $\Pi_j$, hence the whole computation is in $\Sigma_{j+1}$. But observe that this "remaining $\Pi_{j+1}$ computation" runs in polynomial time in the length of its input, since the string $y$ can be of length $2^{O(n)}$, and the remaining computation takes $2^{O(n)}$ time. So it appears you need an assumption about polynomial-time alternating computation in order to get this collapse. If you apply $\Sigma_j TIME[2^{O(n)}] = \Sigma_{j+1} TIME[2^{O(n)}]$ you end up with a doubly-exponential time computation.



I don't think any alternative argument for this kind of collapse is known, which gives you what you want. If you find one, please tell me! It may have applications to separating complexity classes.

amateur observing - What can be seen with a 4.5" telescope

The aperture of your 4.5" telescope is one thing; the focal length matters too. Is it an f/5 or rather an f/8? The f/8 would be suitable for viewing the Moon, Jupiter, Saturn, maybe even Mars and Venus. You can also buy a good solar filter, attach it to the front of the optical tube assembly, and view the Sun. But be careful with that, and inform yourself beforehand! Your eyesight might be in danger otherwise. For the Moon it is advisable to buy a neutral grey filter, because it is also very, very bright.



As for the Messier objects: you can probably view the larger ones. The Ring nebula (M57) will be tough, because it is very small and faint. But open clusters, globular clusters and Andromeda's core will be possible. A few bright nebulas will also be possible, like M42 (Orion Nebula).



There is a good list of all Messier objects over at Astropixels. You should try objects of magnitude 4 or better, and that have a size of more than 5 arcminutes. Otherwise they may be too dim or small to find.
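To put a rough number on what "dim" means here, a commonly quoted rule of thumb (take it with a grain of salt, since sky darkness and eyesight matter a lot) estimates a telescope's limiting stellar magnitude as about $7.7 + 5\log_{10}(\text{aperture in cm})$:

```python
from math import log10

def limiting_magnitude(aperture_cm):
    # rule-of-thumb faintest stellar magnitude visible in the eyepiece
    return 7.7 + 5 * log10(aperture_cm)

m_45 = limiting_magnitude(4.5 * 2.54)  # 4.5-inch aperture converted to cm
```

For a 4.5" instrument this comes out around magnitude 13 for point sources under dark skies; extended objects like nebulae are harder, since their light is spread out, which is why the surface-brightness and size criteria above matter.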

Tuesday, 24 December 2013

gn.general topology - topologically homogeneous space?

A homogeneous continuum is a compact connected metric space X such that for any two points x,y there is a homeomorphism of X taking x to y. This obviously implies that X is locally the same everywhere (a priori, it is a stronger condition). There are plenty of examples in books on general topology. My favorite one is a solenoid, which is not a manifold because, for example, it is not locally connected.



ADDENDUM The Menger curve C (also known as the Menger sponge, Menger universal curve, and Sierpinski universal curve) is a one-dimensional locally connected continuum. R.D. Anderson proved a characterization which implies that C is n-point homogeneous and that, moreover, up to a homeomorphism, the circle and C are the only one-dimensional homogeneous locally connected continua.



Anderson, R. D. A characterization of the universal curve and a proof of its homogeneity. Ann. of Math. (2) 67 (1958), 313-324.



Anderson, R. D. One-dimensional continuous curves and a homogeneity theorem. Ann. of Math. (2) 68 (1958), 1-16.



By the way, I am not a general topologist: all information can be easily found using web searches starting with "homogeneous continuum".

Monday, 23 December 2013

cohomology - Groupoid of moves on trivalent fatgraph

Let $T$ be a finite trivalent fatgraph - i.e. a graph with a cyclic order of the edges at each vertex. Then there are certain basic "moves" we can perform on $T$: an embedded edge can be collapsed and then uncollapsed in a different way (a "rotation", or "2-2 move"), or the circular order of the three edges incident at a vertex can be reversed (a "flip").



Define a set $\mathcal{T}$ whose elements are trivalent fatgraphs $T'$ homotopic to $T$ with a labeling of the edges from $1$ to $n$ and a labeling of the vertices from $1$ to $m$ (note $m = 2n/3$). A "move" is a pair $(T,c)$ where $T \in \mathcal{T}$, and $c$ is an element of the set $\{e_1, e_2, \cdots, e_n, v_1, v_2, \cdots, v_m\}$. The move acts on the labeled fatgraph $T$, and turns it into a new labeled fatgraph $T'$ obtained from $T$ by performing a rotation on edge $e_i$ if $c=e_i$ or a flip on vertex $v_i$ if $c=v_i$. It is clear how a flip affects the labels (it doesn't). A rotation destroys one edge labeled by $e_i$ and creates a new edge, so label this new edge $e_i$.
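As a purely combinatorial aside, the flip is easy to realize concretely if one stores a fatgraph as, for each vertex, the cyclic order of its incident half-edges (this encoding and all names below are mine, not from any standard library; the rotation move would also need the pairing of half-edges into edges, which is omitted here):

```python
# a trivalent fatgraph: vertex -> list of incident half-edges in cyclic
# order; half-edges a and a' are understood as the two ends of one edge
fatgraph = {
    "v1": ["a", "b", "c"],
    "v2": ["a'", "c'", "b'"],
}

def flip(graph, vertex):
    """Reverse the cyclic order of the half-edges at one vertex."""
    new = dict(graph)
    new[vertex] = list(reversed(graph[vertex]))
    return new
```

Flipping the same vertex twice returns the original fatgraph, so each flip is an involution, consistent with these moves generating a groupoid.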



Now define a marked fatgraph to be a labeled fatgraph (i.e. an element of $\mathcal{T}$) together with a homotopy class of homotopy equivalence to some fixed $K(\pi_1(T),1)$. The moves defined above generate a new groupoid on marked fatgraphs, by acting on the labeled fatgraph part. This groupoid - the groupoid acting on marked fatgraphs - I will denote by $V(T)$ (the notation $V(T)$ is suggested by the similarity to Thompson's group $V$).



This groupoid - or something like it - turns up in many different contexts, so as a preliminary question, it would be nice to know how it is referred to. (Or: does this construction even make sense?)



More substantially: what is known about the algebraic structure of $V(T)$? What can be said about the cohomology of its classifying space? What is the relation to the group $Out(F)$, where $F$ is the (free) fundamental group of $T$? (note that $Out(F)$ acts on marked fatgraphs in a way that commutes with $V(T)$, by acting by homotopy equivalences of the $K(\pi_1(T),1)$ and thereby changing the marking). Is there a good reference?




Since the answers I am getting are not really what I am after, I think I need to make the question more pointed. A marked fatgraph determines a certain amount of algebraic structure on a free group (i.e. the fundamental group of $T$), namely a pair $(l,e)$ where $l$ is a length function, and $e$ is a bounded 2-cocycle. The first part of the data comes from the "thin" underlying graph, and is just the translation length of each element on its axis. The second part of the data comes from the fattening, and is an explicit cocycle representing the Euler class of the thickened surface. The first kind of move affects $l$, the second kind affects $e$. Crucially, both $l$ and $e$ are integer valued (this is the point of discussing discrete combinatorial objects, namely fatgraphs, instead of e.g. discrete faithful representations of $F$ into $PSL(2,\mathbb{R})$).



Many, many papers discuss length functions, and many, many papers discuss Euler classes, but I would like to have a (presumably homological) algebraic framework which treats the two components as a single object with, presumably, more structure. The question is: what is this structure? Is it something that is already well-studied? Is there a reference?

fa.functional analysis - Fractional Fourier transform

Let $T: L^2(\mathbb{R}^n) \rightarrow L^2(\mathbb{R}^n)$ be the Fourier transform. Is there any reasonable definition of a fractional Fourier transform (i.e. an operator $A$ such that $A^{\alpha}=T$ for $\alpha \in (0,1)$), and if there is, is it of any use?
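A brief remark on why the answer is yes: on the Hermite-function eigenbasis the Fourier transform is diagonal with eigenvalues $(-i)^k$, so fractional powers can be defined spectrally (they are not unique, since a branch of the logarithm must be chosen on each eigenspace). The same idea in finite dimensions, using the unitary DFT matrix as a discrete stand-in for $T$ (this sketch and its names are mine, not a standard library API):

```python
import numpy as np

n = 8
k = np.arange(n)
# unitary DFT matrix: satisfies F^4 = I, eigenvalues in {1, -1, i, -i}
F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

def frac_power(M, alpha):
    # spectral fractional power M^alpha = V diag(lam^alpha) V^{-1},
    # using the principal branch of lam^alpha (one choice among many)
    lam, V = np.linalg.eig(M)
    return V @ np.diag(lam ** alpha) @ np.linalg.inv(V)

half = frac_power(F, 0.5)  # one square root of the Fourier transform
```

Squaring `half` recovers `F` up to rounding error; the classical fractional Fourier transform on $L^2$ is the continuum version of this construction and is widely used in signal processing and optics.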

Sunday, 22 December 2013

ag.algebraic geometry - Category of copresheaves over commutative monoids

Let C be a symmetric monoidal category. Let Comm(C) be the category of commutative monoids in C. Consider the topos X = CoPSh(Comm(C)) of covariant functors from Comm(C) to the category Set of sets.



Which extra data do we have to specify on the topos X such that we can recover (up to some notion of equivalence) the essential structure of the underlying site? For example, every representable copresheaf F = Hom(A, _) has a category of modules attached to it, which is a kind of extra data.
[I know I am being a bit imprecise here about what I mean by essential, so making this precise could also be part of the answer to my question.]



One could also rephrase this question as follows: given a topos X that satisfies Giraud's axioms, one can extract a site such that X is the Grothendieck topos over this site. Which extra data do we need to impose on X such that we can recover X as a Grothendieck topos over a site that is dual to the category of commutative monoids in a symmetric monoidal category?



When I write down this question, I have the following example in mind: Let C be the category of abelian groups. Then X is the topos of presheaves on the category of affine schemes, which gives rise to algebraic geometry. X possesses a commutative ring object, namely the affine line A1, and one has the stack of categories of quasi-coherent sheaves over objects of X. Taking the idea of (Grothendieck) topoi seriously, one should be able to forget about C and just consider the topos X (i.e. without a fixed base site). Of course, one has to remember (at least) A1. This allows one to recover the stack of categories of quasi-coherent modules.



Added for clarification:



But what if C is not the category of abelian groups? In this case, X = CoPSh(Comm(C)) also carries a stack QCoh of categories of quasi-coherent modules, as follows: Let F be an object of X, i.e. F is a covariant functor from Comm(C) to Set. An object M of the category QCoh(F) maps a morphism a: Hom(A, _) -> F to an A-module M(a) in C, together with natural isomorphisms M(b) = M(a) ⊗_A B for all morphisms a -> b in Comm(C). [It is here where the category C itself comes in.]



What is the minimal amount of data we need on X so that X is equivalent to sheaves on a site S such that the dual of S is of the form Comm(C') with C' giving rise to a somewhat equivalent stack of categories of quasi-coherent modules.

Friday, 20 December 2013

coronal mass ejection - What is the force of a CME on objects in space?

This isn't a complete answer, but it's a partial one: http://pwg.gsfc.nasa.gov/istp/outreach/sunearthmiscons.html




Despite the electromagnetic havoc they play with satellite electronics
and with Earth's magnetic field, the solar wind and CMEs barely exert
a pressure or force that you could measure. In fact, they couldn't
ruffle the hair on your head. The solar wind has fewer particles per
cubic centimeter than the best vacuums scientists have ever created on
Earth. Our own air is billions of times denser than the solar wind,
such that a cubic centimeter of air has as many particles as a cube of
solar wind measuring 10 kilometers on each side.




So, for your question on changing orbits - not very much. Granted everything moves a little under any pressure.
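To put a rough figure on "not very much", the ram pressure of the wind is the number density times particle mass times the square of the bulk speed. Using typical quiet-time solar wind values at 1 AU (assumed round numbers; a fast CME can be one or two orders of magnitude denser and faster):

```python
# rough dynamic (ram) pressure of the solar wind at 1 AU
m_p = 1.67e-27   # proton mass, kg
n = 5e6          # number density: ~5 protons per cm^3, expressed in m^-3
v = 4.0e5        # bulk speed: ~400 km/s, in m/s

pressure = n * m_p * v**2      # ram pressure in pascals, ~1e-9 Pa
force_per_m2 = pressure * 1.0  # force in newtons on a 1 m^2 flat plate
```

A nanopascal on a square meter is roughly the weight of a grain of dust, which is why only very large, very light structures like solar sails can make use of it.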



Now, the lighter an object is relative to its surface area, the more a CME can move it; that's why a solar sail, very lightweight with a large area, might work. CMEs can also move gases: the tail of a comet can be pushed visibly away from the Sun (I read that, but misplaced the article), and lighter, high-atmospheric gas can be blown away from smaller planets.



Also, no two CMEs are exactly alike. A smaller and/or younger star will emit much stronger CMEs, so around a red dwarf, for example, the effect might be more noticeable for tiny asteroids and the like.

gr.group theory - Why are the sporadic simple groups HUGE?

The question seems to be made of several smaller questions, so I'm afraid my answer may not seem entirely coherent.



I have to agree with the other posters who say that the sporadic simple groups are not really so large. For example, we humans can write down the full decimal expansions of their orders, where a priori one might think we'd have to resort to crude upper bounds using highly recursive functions. (In contrast, one could say that almost all of the groups in the infinite families are too large for their orders to have a computable description that fits in the universe.) Furthermore, as of 2002 we can load matrix representatives of elements into a computer, even for the monster. Noah pointed out that the monster has a smaller order than $A_{50}$, but I think a more apt comparison is that the monster has a smaller order than even the smallest member of the infinite $E_8$ family. Of course, one could ask why $E_8$ has dimension as large as 248...
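The comparison with $A_{50}$ is a one-liner to check, since the order of the monster is known exactly from its prime factorization (note the primes appearing are exactly the fifteen supersingular primes, as discussed below):

```python
from math import factorial

# order of the monster group from its known prime factorization
monster = (2**46 * 3**20 * 5**9 * 7**6 * 11**2 * 13**3
           * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71)

a50 = factorial(50) // 2  # order of the alternating group A_50, ~1.5e64
```

The monster's order has 54 decimal digits (about $8 \times 10^{53}$), some ten orders of magnitude below $|A_{50}|$.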



There was a more explicit question: how is it possible that a group with as many as $8 \times 10^{53}$ elements doesn't have any normal subgroups? I think the answer is that the order of magnitude of a group says very little about its complexity. There are prime numbers very close to the order of the monster, and there are simple cyclic groups of those orders, so you might ask yourself why that fact doesn't seem as conceptually disturbing. Perhaps slightly more challenging is the fact that there aren't any elements of order greater than 119, but again, there is work on the bounded and restricted Burnside problems that shows that you can have groups of very small exponent that are extremely complicated.



A second point regarding the large lower bound on order is that there are smaller groups that could be called sporadic, in the sense that they fit into reasonably natural (finite) combinatorial families together with the sporadics, but they aren't designated as sporadic because small-order isomorphisms get in the way. For example, the Mathieu group $M_{10}$ is the symmetry group of a certain Steiner system, much like the simple Mathieu groups, and it is an index 11 subgroup of $M_{11}$. While it isn't simple, it contains $A_6$ as an index 2 subgroup, and no one calls $A_6$ sporadic. Similarly, we describe the 20 "happy family" sporadic subquotients of the monster, but we forget about the subquotients like $A_5$, $L_2(11)$, and so on. Since the order of a nonabelian simple group is bounded below by 60, there isn't much room to maneuver before you get to 7920, a.k.a. "huge" range.



The question about why the 2-Sylow subgroup has a certain size is rather subtle, and I think a good explanation would require delving into the structure of the classification theorem. A short answer is that centralizers of order-2 elements played a pivotal role in the classification after the Odd Order Theorem, and there was a separation into cases by structural features of centralizers. One of the cases involved a centralizer that ended up having the form $2^{1+24}.Co_1$, which has a 2-Sylow subgroup of order $2^{46}$ (and naturally acts on a double cover of the Leech lattice). This is the case that corresponds to the monster.



Regarding the prime factorization of the order of the monster, the primes that appear are exactly the supersingular primes, and this falls into the general realm of "monstrous moonshine". I wrote a longer description of the phenomenon in reply to Ilya's question, but the question of a general conceptual explanation is still open.



I'll mention some folklore about the organization of the sporadics. There seems to be a hierarchy given by



  • level 0: subquotients of $M_{24}$ = symmetries of the Golay code

  • level 1: subquotients of $Co_1$ = symmetries of the Leech lattice, mod $\{\pm 1\}$

  • level 2: subquotients of the monster = conformal symmetries of the monster vertex algebra

where the groups in each level naturally act on (objects similar to) the exceptional object on the right. I don't know what explanatory significance the sequence [codes, lattices, vertex algebras] has, but there are some level-raising constructions that flesh out the analogy a bit. One interesting consequence of the existence of level 2 is that for some finite groups, the most natural (read: easiest to construct) representations are infinite dimensional, and one can reasonably argue using lattice vertex algebras that this holds for some exceptional families as well. John Duncan has some recent work constructing structured vertex superalgebras whose automorphism groups are sporadic simple groups outside the happy family.



I think one interesting question that has not been suggested by other responses (and may be too open-ended for MO) is why the monster has no small representations. There are no faithful permutation representations of degree less than $9 \times 10^{19}$ and there are no faithful linear representations of dimension less than 196882. Compare this with the cases of the numerically larger groups $A_{50}$ and $E_8(\mathbb{F}_2)$, where we have linear representations of dimension 49 and 248. This is a different sense of hugeness than in the original question, but one that strongly impacts the computational feasibility of attacking many questions.

Thursday, 19 December 2013

at.algebraic topology - Original references for the homotopy groups pi_5 of SU(3) and pi_4 of SU(2)?

For a revision of a paper (http://arxiv.org/abs/1008.1189), I'd like to
correct my references to the original work on aspects of the homotopy
groups pi_5 of SU(3) and pi_4 of SU(2). I'm not a mathematician; I
can barely read the introductions of math papers, so I'd appreciate
advice. I realize that mathematicians don't bother with the original
references for such things. I'm just being a bit compulsive.



There are 4 points I'd like to reference correctly.



(1) pi_5 of SU(3) = Z



For this, I referenced Beno Eckmann's thesis:
B. Eckmann. Zur Homotopietheorie gefaserter Räume. Comm. Math. Helv., 14:141-192, 1942.



The thread that led to this reference was:
J. Dieudonné, A History of Algebraic and Differential Topology, 1900--1960, MR995842, page 411.
A. Borel, Collected Papers Volume 1, page 426.
B. Eckmann, Espaces fibrés et homotopie, Colloque de topologie
(espaces fibrés), Bruxelles, 1950, MR0042705.
I couldn't find a copy of the last. But I did find
B. Eckmann, Mathematical survey lectures 1943--2004, MR2269092.
which contained
B. Eckmann, "Is Algebraic Topology a Respectable Field", page 255:
'It was known in the thesis of the author already (1942) that
the homotopy groups pi_i U(n) are constant for nge (i+2)/2 for
even i and (i+1)/2 for odd i: these "stable" groups were known to
be 0 for i=0,2,4 and =Z for all odd i.'
Eckmann's thesis is available on-line from Comm.Math.Helv. As far as
I can tell given my limited ability to decipher topology written in
German, it does contain this result. I have no idea if the proof is
correct, though I'd guess it is, considering the respectability of
the author.



(2) SU(3) -> G_2 -> S^6 represents a generator of pi_5 of SU(3) = Z



I referenced
Lucas M. Chaves and A. Rigas. Complex reflections and polynomial
generators of homotopy groups. J. Lie Theory, 6(1):19–22, 1996.
MR1406003
from which I learned the fact. It seems hard to believe that there
isn't an early reference.



(3) A map S^5 -> SU(3) generates pi_5 iff its composition with SU(3) ->
SU(3)/SU(2)= S^{5} is a map S^5 -> S^5 of degree +1 or -1.



I thought I could see this in Eckmann's 1942 thesis, so I that's what I
referenced.



(4) The nontrivial element in pi_4 of SU(2) = Z_2 is represented by
the suspension of the Hopf fibration:
Freudenthal. Über die Klassen der Sphärenabbildungen. I. Große Dimensionen. Compos.
Math., 5:299-314, 1937.



A presentation of the suspension of the Hopf fibration is given by
[0, pi] x SU(2) -> SU(2)
(theta, g) -> g^(-1) exp(mu_3 theta) g
where mu_3 is the diagonal generator of su(2) with
exp(mu_3 pi) = -1.


I learned this from:
Thomas Puettmann and A. Rigas. Presentations of the first homotopy
groups of the unitary groups. Comment. Math. Helv.,
78:648–662, 2003. math/0301192
but there must be something earlier. Or is it too obvious?
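As a numerical sanity check on this presentation (taking mu_3 = diag(i, -i), an assumption consistent with "diagonal generator of su(2) with exp(mu_3 pi) = -1"), the endpoint conditions that make the domain a suspension can be verified directly:

```python
import numpy as np

# mu_3 = diag(i, -i), stored as its diagonal
mu3_diag = np.array([1j, -1j])

def exp_mu3(theta):
    # the exponential of a diagonal matrix is the entrywise exponential
    return np.diag(np.exp(mu3_diag * theta))

# at theta = 0 and theta = pi the conjugate g^{-1} exp(mu3 theta) g is
# I and -I respectively, independent of g, so the ends of
# [0, pi] x SU(2) are collapsed, as a suspension requires
g = np.array([[0.0, -1.0], [1.0, 0.0]])  # a sample element of SU(2)
endpoint_0 = np.linalg.inv(g) @ exp_mu3(0.0) @ g
endpoint_pi = np.linalg.inv(g) @ exp_mu3(np.pi) @ g
```

Since exp(mu_3 · 0) = I and exp(mu_3 · pi) = -I are central in SU(2), the conjugation by g drops out at both ends of the interval.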



Thanks for any help.



Daniel Friedan

gr.group theory - Sp(2n) intersect Sp(2n,H)? (Please read for explanation of notation)

First let me fix some notation:



Let $O(n)$ be the group of $n \times n$ real matrices $T$ which are "orthogonal", $U(n)$ the group of $n \times n$ complex matrices $T$ which are "unitary", and $Sp(n)$ the group of $n \times n$ quaternionic matrices $T$ which are "symplectic" (in all three cases $T^hT=TT^h=I$, where $T^h$ denotes the conjugate transpose).



Let $Sp(2n,F)$ be the group of $2n \times 2n$ matrices that preserve a non-degenerate skew-symmetric bilinear form on $F^{2n}$, where $F$ is the field of real numbers $\mathbb{R}$, complex numbers $\mathbb{C}$, or quaternions $\mathbb{H}$ (a skew field in the case of the quaternions).



The following are true:



$O(2n) \cap Sp(2n,\mathbb{R}) = U(n)$



$U(2n) \cap Sp(2n,\mathbb{C}) = Sp(n)$
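The first identity is easy to check numerically: writing a unitary $U = A + iB$ and embedding it as the real block matrix $\begin{pmatrix} A & -B \\ B & A \end{pmatrix}$, the image is both orthogonal and symplectic (a sketch, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# a random unitary U = A + iB, via QR of a complex Gaussian matrix
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)
A, B = U.real, U.imag

# embed U into GL(2n, R): z = x + iy acts on (x, y) via this block matrix
M = np.block([[A, -B], [B, A]])

# the standard symplectic form on R^{2n}
Zero, I = np.zeros((n, n)), np.eye(n)
Omega = np.block([[Zero, -I], [I, Zero]])
```

The form $\Omega$ is exactly the matrix of multiplication by $i$ in the embedding, which is why a real matrix that is orthogonal and preserves $\Omega$ is forced to be complex linear and unitary.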



So my question is about the next logical step. Clearly both $Sp(2n)$ and $Sp(2n,\mathbb{H})$ are groups acting on $\mathbb{H}^{2n}$, but do they intersect in a nontrivial group? In other words, what is $X(n)$ below (if anything)?



$Sp(2n) \cap Sp(2n,\mathbb{H}) = X(n)$?



PS 1
This is a question I naturally asked myself after reading Baez's "Symplectic, Quaternionic, Fermionic" blog posting: http://math.ucr.edu/home/baez/symplectic.html



PS 2
By writing $X(n)$ instead of $X(2n)$ above I am hinting something related to Octonions but I don't want to scare off anyone.

ac.commutative algebra - A finitely generated, locally free module over a domain which is not projective?

It is impossible to produce an example of a finitely generated flat $R$-module that is not projective when $R$ is an integral domain. See: Cartier, "Questions de rationalité des diviseurs en géométrie algébrique", Appendice, Lemme 5, p. 249. Also see Bourbaki, Algèbre, Chapitre X (Algèbre Homologique, "AH"), X.169, Exercise Sect. 1, No. 13. I also sketch an alternate proof below that there are no such examples for $R$ an integral domain.



Observe that, for finitely generated $R$-modules $M$, being locally free in the weaker sense is equivalent to being flat [Bourbaki, AC II.3.4 Pr. 15, combined with AH X.169 Exercise Sect. 1, No. 14(c).]. ($R$ doesn't have to be noetherian for this, though many books seem to assume it.)



There's a concrete way to interpret projectivity for finitely generated flat modules. We begin by translating Bourbaki's criterion into the language of invariant factors. For any finitely generated flat $R$-module $M$ and any nonnegative integer $n$, the $n$-th invariant factor $I_n(M)$ is the annihilator of the $n$-th exterior power of $M$.



Lemma. (Bourbaki's criterion) A finitely generated flat $R$-module $M$ is projective if and only if, for any nonnegative integer $n$, the set $V(I_n(M))$ is open in $\mathrm{Spec}(R)$.



This openness translates to finite generation.



Proposition. If $M$ is a finitely generated flat $R$-module, then $M$ is projective iff its invariant factors are finitely generated.



Corollary. The following conditions are equivalent for a ring $R$: (1) Every flat cyclic $R$-module is projective. (2) Every finitely generated flat $R$-module is projective.



Corollary. Over an integral domain $R$, every finitely generated flat $R$-module is projective.



Corollary. A flat ideal $I$ of $R$ is projective iff its annihilator is finitely generated.



Example. Let me try to give an example of a principal ideal of a ring $R$ that is locally free in the weak sense but not projective. Of course my point is not the nature of this counterexample itself, but rather the way in which one uses the criteria above to produce it.



Let $S := \bigoplus_{n=1}^{\infty}\mathbf{F}_2$, and let $R=\mathbf{Z}[S]$. (The elements of $R$ are thus expressions $\ell+s$, where $\ell\in\mathbf{Z}$ and $s=(s_1,s_2,\dots)$ is a sequence of elements of $\mathbf{F}_2$ that eventually stabilizes at $0$.) Consider the ideal $I=(x)$ generated by the element $x := 2+0$.



I first claim that for any prime ideal $\mathfrak{p}\in\mathrm{Spec}(R)$, the $R_{\mathfrak{p}}$-module $I_{\mathfrak{p}}$ is free of rank $0$ or $1$. There are three cases: (1) If $x\notin\mathfrak{p}$, then $I_{\mathfrak{p}}=R_{\mathfrak{p}}$. (2) If $x\in\mathfrak{p}$ and $\mathfrak{p}$ does not contain $S$, then $I_{\mathfrak{p}}=0$. (3) Finally, if both $x\in\mathfrak{p}$ and $S\subset\mathfrak{p}$, then $I_{\mathfrak{p}}$ is a principal ideal of $R_{\mathfrak{p}}$ with trivial annihilator.



It remains to show that $I$ is not projective as an $R$-module. But its annihilator is $S$, which is not finitely generated over $R$.



[This answer was reorganized on the recommendation of Pete Clark.]

Monday, 16 December 2013

Average rate of supernova x number of stars

It's not that easy. Only stars heavy enough will undergo a supernova explosion; the majority of stars are too light. The lifetime of a star is mostly determined by its mass. In some cases (supernova type Ia) a companion star provides mass to a white dwarf, which originally had been too light to explode as a supernova.



Hence your technique can only work if it is applied to stars of a certain mass and composition separately, and then summed or integrated over all these restricted classes of stars. Altogether you'll get results for the stars which will eventually end up as supernovae, i.e. the heavy ones with more than about 8 solar masses.



... And in the meanwhile new stars will form.

the sun - Understanding Earth Tilt, Sun's Position and Latitude Calculation

Assuming you aren't north of the Arctic Circle or south of the Antarctic Circle, you can determine your latitude by making observations throughout the course of a day, and over the course of a year. You'll need



  • A rather straight stick,

  • A fairly flat piece of ground,

  • A plumb bob (which you can make out of string and a rock),

  • Some small pebbles to mark the tip of the shadow of the stick over the course of a year at solar noon, and

  • A trigonometry table or a calculator.

Use your plumb bob to ensure your stick is as close to vertical as you can make it. You'll want to place the stick in exactly the same place every day.



The first thing you'll want to find is the north-south line that passes through the base of the stick. To do this, place a pebble at the tip of the stick's shadow when the shadow is at its shortest. If you do this perfectly, you'll have the north-south line on day number one. You almost certainly won't do this perfectly, so you'll need to repeat this for a few days. Because the Sun rises more or less in the east and sets in the west, you also know which way is north and which is south. You have a compass.



You'll also have a very rough idea of your latitude. If the Sun sets to the left of the north-south line, you know you are somewhere north of 23.44° south latitude. If it sets to the right, you are somewhere south of 23.44° north latitude.



If you want a better idea of your latitude, keep doing this until the day you see the Sun rise exactly in the East. This happens twice a year, typically on March 20 and September 23. Use a string to measure the length of the stick's shadow when the tip of the shadow crosses the north-south line. You do not need a ruler; all you need to know is the ratio of the shadow's length to the stick's length. Now it's a matter of trigonometry:
$$\phi = \arctan\left(\frac{\text{shadow length}}{\text{stick length}}\right)$$
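The equinox measurement reduces to a one-line computation once the shadow-to-stick ratio is in hand (function name is mine; the result is unsigned, with the hemisphere determined by the sunset observation above):

```python
from math import atan, degrees

def latitude_from_equinox_shadow(shadow_len, stick_len):
    """Unsigned latitude in degrees from the solar-noon shadow on an
    equinox, when the Sun is directly above the equator."""
    return degrees(atan(shadow_len / stick_len))

lat = latitude_from_equinox_shadow(1.0, 1.0)  # equal lengths -> 45 degrees
```

Only the ratio matters, which is why no ruler is needed: a shadow as long as the stick itself puts you at 45° latitude.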

Sunday, 15 December 2013

Limits in category theory and analysis

I think this doesn't quite work:



Let $\mathcal{C}$ be the category whose objects are the points of $X$, and define
$$
\mathrm{mor}_{\mathcal{C}}(x,y) = \{ \mbox{closed sets containing both $x$ and $y$} \}.
$$
Composition is union.



Now (for example) a sequence $\{ x_n \}$ in $X$ defines a functor $F: \mathbb{N} \to \mathcal{C}$, and a cone from $F$ to $y$ is essentially a single closed set
containing the entire sequence and $y$. Since this set must contain the topological limit $x$ of the sequence, the cone factors through the same closed set viewed as a morphism $x \to y$, so $x$ is the categorical colimit of $F$.



And since the morphism sets are symmetrical, the sequence $\{ x_n \}$ can be viewed as a contravariant functor $G: \mathbb{N} \to \mathcal{C}$, and the topological limit $x$ is the categorical limit of $G$.



PROBLEM: the factorization is not unique!

Saturday, 14 December 2013

linear algebra - Alternating bilinear forms over local rings

Suppose $k$ is a field and $V$ a vector space over $k$. If $b$ is an alternating nondegenerate bilinear form on $V$, it has a symplectic basis. A symplectic basis is a basis where the basis vectors come in pairs, with each pair spanning a hyperbolic plane and the hyperbolic planes mutually orthogonal, so that the $2 \times 2$ matrix for the bilinear form on each hyperbolic plane looks like:



$$\begin{pmatrix}0 & 1 \\ -1 & 0 \end{pmatrix}$$



More generally, if b is any alternating bilinear form, it has a basis comprising degenerate vectors and a symplectic basis for a complement to the subspace spanned by the degenerate vectors.



I want to generalize this to local rings.



Let $R$ be a local ring, e.g., $R = \mathbb{Z}/p^k\mathbb{Z}$. Suppose $M$ is a finitely generated $R$-module and $b$ is an alternating (not necessarily nondegenerate) bilinear form on $M$. What is the right analogue of a symplectic basis for $b$?



What I seem to have got is that there is a generating set that includes some degenerate vectors, and other pairs of vectors such that the form looks as follows on the plane spanned by these:



$$\begin{pmatrix}0 & p^r \\ -p^r & 0 \end{pmatrix}$$



where $0 \le r \le k - 1$. Moreover, it seems that the number of times each $r$ occurs should be independent of the choice of basis.



I have the following questions:



  1. Is the result stated above correct? What is a precise and correct formulation of this result?

  2. Is there a standard reference or theorem that proves a result similar to what I've outlined above?

  3. Is the result valid, and how is it best interpreted, when M is not a free R-module? In that case, the generating set in terms of which we are writing the matrix is not a free generating set -- some of the elements may have torsion too.

star - Why do we think that there is no two-solar-mass black hole?

We think that the mass boundary between neutron stars and stellar mass black holes is around three solar masses.



The maximum neutron-star mass measured so far is two solar masses, and we may find a 2.6-solar-mass neutron star in the future. The theoretical upper limit is based on the equation of state.



Stellar-mass black holes may have different origins. Even if the smallest stellar-mass black holes originate in supernova explosions, we do not know the details of those explosions.



Why don't we think a two-solar-mass BH, or an even smaller BH, could exist?



Why do we think that the companion in the Hulse-Taylor binary is a neutron star instead of a BH?

gr.group theory - Finite subgroups of unitary groups

This might help: If $u$ and $v$ are both close enough to scalars in the operator norm,
and $\varepsilon$ is small enough, then no matter how large $k$ is, if the group generated by $u^{\prime}$ and $v^{\prime}$ is finite, then it still has to be Abelian. The reasoning is something like this. I use the operator norm: given a unitary matrix $M$, there is a unique scalar $\lambda$ (in the interior of the unit disk unless $M$ itself is a scalar matrix) with $\|M - \lambda I\|$ minimal, and this only depends on the spectrum of $M$. In particular, we can only have $\|M - \lambda I\| < \frac{1}{2}$ for a scalar $\lambda$ if the eigenvalues of $M$ all lie on an arc of length less than $\frac{\pi}{3}$ on the unit circle. It is essentially a theorem of Frobenius that if $G$ is a finite group of unitary matrices, then the subgroup $H$ generated by the matrices at distance less than $\frac{1}{2}$ from a scalar is an Abelian normal subgroup. If $u$ and $v$ are close enough to scalars, and $\varepsilon$ is small enough, then we can make both $\|u^{\prime} - \lambda I\|$
and $\| v^{\prime} - \mu I\|$ less than $\frac{1}{2}$ for suitable scalars $\lambda,\mu$. So if they
generate a finite group, it must be Abelian.

Note added: I will add a little more detail.
One proof of Jordan's theorem uses a compactness argument and a kind of contraction lemma.
Let $[a,b]$ denote $a^{-1}b^{-1}ab$. If $a$ and $b$ are unitary, then
$$\| I-[a,b] \| = \|I - a^{-1}b^{-1}ab \| = \|ab-ba \|$$
$$ = \| (a-I)(b-I) - (b-I)(a-I)\|
\leq 2 \|a-I\| \|b-I\|.$$
Hence if $\| I-a\| < \frac{1}{2}$, then $\|I- [a,b] \| < \|I-b\|$
for all $b$. Hence in a finite group $G$ of unitary matrices, let $H$ be the subgroup generated
by those elements of $G$ with $\|I-h\| < \frac{1}{2}$. For any $h \in H$ with
$\|I - h \| < \frac{1}{2}$, we have $[g,h,h, \ldots ,h] = 1$ for all $g \in G$. By a theorem of Baer, $H$ is nilpotent. With a little more work, you actually get $H$ Abelian. (This is a kind of hybrid of arguments of Frobenius and a little more finite group theory. More or less the same line of reasoning led to a similar theorem by
Zassenhaus on discrete groups, and this led to the idea of a Zassenhaus neighbourhood in a
Lie group.) Note that if $x,y$ are in different cosets of $H$ in $G$, we must have
$\|x-y\| \geq \frac{1}{2}$, so the compactness of the unit ball of $M_{n}(\mathbb{C})$ gives
a fixed bound on $[G:H]$. But the same argument works when $\| a - \lambda I \| < \frac{1}{2}$
for some scalar $\lambda$, using $$\|ab - ba \| = \| (a-\lambda I)(b-\mu I) - (b-\mu I)(a-\lambda I)\|$$ for any scalars $\lambda,\mu$, and replacing $H$ by the group generated by elements at
distance less than $\frac{1}{2}$ from any scalar.

(Added in response to comment below): You do need a little argument to get from $H$ nilpotent to $H$ Abelian, but the statements above are all accurate. It is important to remember that $H$ is generated by elements within distance $\frac{1}{2}$ of a scalar. Any such element has all its eigenvalues on an arc of length less than $\frac{\pi}{3}$ on $S^{1}$. No subset of the "multiset" of eigenvalues of such an element has zero sum (this is clear, e.g., if you rotate so that all eigenvalues have positive real part). Let $\chi$ be the character of the given representation of $H$. Let $\mu$ be an irreducible constituent of $\chi$. By the remark above, if $\|h -\alpha I \| < \frac{1}{2}$ for some scalar $\alpha$, then $\mu(h) \neq 0$. But if $\mu$ is not linear, it is induced from a character of a maximal subgroup $M$, which is normal as $H$ is nilpotent. But then $\mu$ vanishes on all elements of $H$ which are outside $M$. Hence the elements at distance less than $\frac{1}{2}$ from a scalar must all lie within $M$. But since $M$ is proper, and such elements generate $H$, this is a contradiction. Hence $\mu$ must be linear after all. Since $\mu$ was arbitrary, $H$ is Abelian.

Friday, 13 December 2013

gravity - Could black holes be creators of dark matter?

Ultra-high-energy cosmic rays are thought to be caused by black holes, as one option. Those energies should be sufficient for the formation of (hypothetical) heavy supersymmetric particles, which should decay to stable (hypothetical) neutralinos, candidates for (hypothetical) WIMPs.



Assuming this theoretical framework, black holes (or similarly dense objects) may cause the generation of dark matter. But the amount of dark matter formed that way won't account for the amount needed by the Lambda-CDM model. Moreover, dark matter formed this way would be hot (fast-moving), whereas Lambda-CDM needs cold dark matter (CDM).

Thursday, 12 December 2013

big picture - How much faith should I put in numerics?

Edit: Let me summarize what this question was meant to ask. Is there a quantitative theory of "approximate" soundness? Arguments are usually either sound or unsound. This is binary. If we don't have access to a complete argument, or are unsure whether to trust parts of an argument, can we come up with a number between 0 and 1 that quantifies how sound the incomplete or untrusted argument is?



Original rambling question below:



Before getting to the question, let me first try to make a rough distinction in what I mean by numerics. There are several types, and I'll begin with what I don't mean by numerics: something like numerical integration, or numerical root finding. I consider this zeroth case uninteresting. Problems like these are completely understood. We are just using computers to do our tedious calculations for us, and I see no reason not to trust the result.



For the first type, I'm thinking of large complex calculations such as those used in the four color theorem, or the recent proof that checkers is a draw, or maybe the recent work on the character table for split E8. Here there are just a finite number of cases to be checked, but the computation itself is extremely elaborate just to compute about one bit of information (in the first two cases). For these types of problems, I'm imagining that there is no known way to generate a certificate that would reduce checking the validity of the calculation to an instance of case 0 numerics.



For the second type, consider numerics of the following flavor. In this scenario, my friend Bernhard has a conjecture that all of the infinitely many non-trivial zeros of a certain function lie on a certain line in the complex plane. Unfortunately, I'm not as good as Bernhard, and I don't understand his insight in proposing the conjecture. Lacking his intuition, and in lieu of a proof, I decide to numerically test the conjecture. I find that the first $10^{10}$ zeros (say) all lie on the correct line.



Now I can start to get to the question. First of all, there is a fuzzy line between case 0 and case 1. Is there a way to make this line more precise?



Next, can we make precise the notion of "trusting" large calculations of the type 1 variety? How much should we trust them? Also, these types of calculations can be very unsatisfying if they don't lead you to a principle which explains why the answer is what it is. Is there a way to make this notion precise as well? Suppose that the proof of the four color theorem could be reduced to checking only 31 cases, instead of several hundred, but a computer was still necessary. Would we consider this a "good" proof? I would like to try to quantify this.



Finally, consider case 2. Have we really given any support at all to the conjecture if we've left an infinite number of cases unchecked? I'm tempted to answer a knee-jerk "No!" to this, especially in light of things like the disproof of the Mertens conjecture or Skewes' number. But it certainly feels like we've made it more plausible than if we hadn't checked those cases. I'm afraid we might have to resort to Bayesian degrees of belief here, but is there another answer?
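If one does resort to Bayesian degrees of belief, at least the bookkeeping can be made explicit. Here is a deliberately crude toy model (every parameter in it is a made-up assumption, not anything from the question): suppose that if the conjecture were false, each checked case would independently turn out to be a counterexample with some probability $r$; then $k$ confirmations update a 50/50 prior as follows.

```python
def posterior_true(k, r=1e-9, prior=0.5):
    """Posterior probability the conjecture is true after k confirming cases.

    Toy model: if the conjecture is false, each checked case is a
    counterexample independently with probability r (a made-up parameter).
    """
    like_true = 1.0             # a true conjecture never yields counterexamples
    like_false = (1 - r) ** k   # a false one survives k checks with this prob.
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

print(posterior_true(0))                 # → 0.5 (no evidence yet)
print(posterior_true(10**10))            # ≈ 0.99995, *given this model*
print(posterior_true(10**10, r=1e-30))   # ≈ 0.5: checks barely move the prior
```

The sketch makes the Skewes-style worry quantitative: the evidential value of $10^{10}$ confirmations depends entirely on how much probability the model assigns to a counterexample appearing within checkable range, and that assignment is exactly what the Mertens disproof casts doubt on.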

lattices - What is a reference for the Hasse-Minkowski classification of indefinite forms?

According to "The Geometry of Four-Manifolds" by Donaldson and Kronheimer, indefinite unimodular forms are classified by their rank, signature and type. This is the Hasse-Minkowski classification of indefinite forms, they say.



However, this seems to be a bit of a folklore theorem, as I cannot find a single citation for it; all of my searches for any permutation of "hasse minkowski indefinite quadratic classification" yield instead a different theorem, namely one about solving the equation $Q(x) = r$ over $\mathbb{Q}$ for a given quadratic form $Q$.



Is it simply the case that the integral classification (which essentially states, for the case of an even form, that it is a direct sum of $E_8$ lattices and hyperbolic planes) is an easy consequence of this other theorem? I'm not familiar enough with quadratic forms to see how this should be so.



If it isn't an easy consequence, is there a reference for the integral classification that I just haven't found?

What if the black hole in the center of the galaxy grew faster?

This is not really an answer to your question, but rather to the assumption you made.



The BH in the galactic center cannot grow that much. There are two problems. First, there is not enough food close by. Any potential food (gas clouds, stars, and dark matter) will have some non-zero angular momentum preventing it from coming close enough. Losing this angular momentum is difficult (it cannot be radiated away like energy); the only way is to exchange it with other objects, either via impacts (of gas clouds) or gravitational interactions with other objects (not the BH).



The second problem is that a feeding BH is surrounded by an accretion disc of hot gas. In that disc, angular momentum is slowly transported outwards and mass inwards via viscosity (that viscosity most likely originates from turbulent magnetic fields that become unstable -- the magneto-rotational instability). This process inevitably heats the accretion disc to very high temperatures ($10^{6-9}$ K), such that it emits a wind of radiation and particles (similar to the Solar wind, but much, much stronger). If the BH feeds too much, this wind becomes so strong that it pushes away any further infalling material (potential food).



This second process limits the growth of any BH such that it can double in mass in no less than about $10^6$ years, I think (I'm not too sure; if you want a precise value, consult the literature), even if the first problem were no issue.

Monday, 9 December 2013

Can an amateur astronomer bounce a laser off the moon?

In the TV show "Big Bang Theory" episode "The Lunar Excitation", the gang fires a laser from their rooftop, bounces it off mirrors on the moon, and measures the laser coming back on a computer.



Is this really possible?



I know scientists have successfully done this, because it's why NASA put the mirrors on the moon.



Wouldn't this require very precise targeting by the laser to hit the mirrors?



My question is, is it possible for an amateur to successfully perform this experiment?

ag.algebraic geometry - what notions are "geometric" (characterized by geometric fibers)?

I'm not sure I'm answering your question, but let me say some (vague and general) things and hopefully they'll be helpful.



Algebraic geometry over algebraically closed fields is really nice, essentially because of the Nullstellensatz: points are actually points, and functions are actually functions on these points. So it's often easy (or classical) to define a property of a variety over an algebraically closed field.



But then Grothendieck comes along and sez, hey, we should say things relatively, over an arbitrary base scheme. And the procedure for that is this: if you have a class of varieties C over algebraically closed fields, you extend it to the relative situation by saying that a map of schemes X --> S is in C if it is flat (which, experience and several nice theorems tell us, amounts to saying that the fibers are continuously parametrized by S) and each geometric fiber is in C.



There are two things that this buys you right off the bat: first, C is closed under base change, and second, C satisfies fpqc descent. Both are great indications that you have a good in-families notion; for instance they are certainly necessary if you want to make a good moduli functor out of C. If you didn't use geometric points, you couldn't guarantee fpqc descent, only Zariski descent.



But it seems like you were more interested in the converse: why, if we have a good in-families notion, should it be sufficient to check it on geometric fibers? Well actually, here "geometric" has nothing to do with it: you should always be able to check on fibers, because you want to be talking about a family of elements of C parametrized by the base. It's just that over non-algebraically closed fields it's probably harder to say what it means to lie in C -- if you don't do it geometrically, you potentially lose stability under base change and descent.

Sunday, 8 December 2013

at.algebraic topology - Formal-group interpretation for Lin's theorem?

Background



For compact Lie groups, Atiyah and Segal proved a strong relationship between Borel-equivariant K-theory, defined in terms of the K-theory of $X \times_G EG$, and the equivariant K-theory of $X$ defined in terms of equivariant vector bundles. Roughly, for "nice" spaces $X$ the K-theory of $X \times_G EG$ is a completion of the equivariant K-theory of $X$, and in particular the K-theory of $BG$ is a completion of the complex representation ring of $G$.



The Segal conjecture is an analogous result proven in subsequent years (by many authors, with Carlsson completing the proof). It's less well-known outside the subject, and obtained by roughly replacing "vector bundles" with "covering spaces" - the original conjecture is that for a finite group $G$, the abelian group $\varinjlim[S^n \wedge BG, S^n]$ of stable classes of maps is isomorphic to a completion of the Burnside ring of finite $G$-sets. There are further statements describing $\varinjlim [S^{n+k} \wedge BG, S^n]$ in terms of a completion of certain equivariant stable homotopy groups. It's notable for the fact that it's not really a computational result - we describe two objects as being isomorphic, without any knowledge of what the resulting groups on either side really are.



There are a number of steps in this proof, and over the years most of them have been recast and reinterpreted in a number of ways. However, the initial steps in the proof are computational. Lin proved this conjecture for the case $G = \mathbb{Z}/2$, and Gunawardena proved it for the case $G = \mathbb{Z}/p$ for odd primes $p$. Lin's original proof involved some very difficult computations in the Lambda algebra, and a simplified proof was ultimately written up by Lin-Davis-Mahowald-Adams. It amounts to a computation of certain Ext or Tor groups over the Steenrod algebra - namely, if $H^* \mathbb{RP}^\infty = \mathbb{Z}/2[x]$ has its standard module structure over the Steenrod algebra, then $\mathrm{Ext}^{**}(\mathbb{Z}/2[x^{\pm 1}],\mathbb{Z}/2)$ degenerates down to a single nonzero group.



Bordism theory



A lot of the contemporary work in stable homotopy theory uses the relationship between stable homotopy theory and the moduli of formal groups, rather than the Adams-spectral-sequence calculations that are used in the above proofs. The analogous calculation would be the following.



Let $L$ be the Lazard ring carrying the universal formal group law, with 2-series $[2](t)$ whose zeros are the "2-torsion" of the formal group law. Then there is an $L$-algebra
$$
Q = t^{-1} L[[t]]/[2](t)
$$
whose functor of points would be described (up to completion issues) as taking a ring $R$ to the set of formal group laws on $R$ equipped with a nowhere-zero 2-torsion point. This comes equipped with natural descent data for change-of-coordinates on the formal group law, and so it describes a sheaf on the moduli stack of formal group laws $\mathcal{M}$.



A student of Doug Ravenel's (Binhua Mao) proved in his thesis that the analogous Ext-computation is valid in the formal-group setting: namely, if one computes the Ext-groups
$$\mathrm{Ext}_{\mathcal M}(\mathcal{O}, Q \otimes \omega^{\otimes t})$$
where $\omega$ is the sheaf of invariant 1-forms on $\mathcal{M}$, it converges to a completion of
$$\mathrm{Ext}_{\mathcal M}(\mathcal{O}, \omega^{\otimes t}).$$
(The result was stated in different language, and I am still ignoring completion issues.)



However, as I understand the proof (and I don't claim that I really do!) it essentially uses a reduction to the Adams spectral sequence case by using a filtration that reduces to the group scheme of automorphisms of the additive formal group law, and this is very closely connected to the Steenrod algebra. I would regard it as still being computationally focused, and I don't really have a grip on why one might expect it to be true without carrying the motivation from homotopy theory all the way through.



Question (finally)



Is there is a more conceptual interpretation of this computation in terms of the geometry of the moduli of formal groups?

Saturday, 7 December 2013

ag.algebraic geometry - Is it known that the ring of periods is not a field?

Maybe I can sketch an argument for your first question.



Let $P$ be the ring of effective "formal" periods, generated by quadruples $[X,D,\omega,\gamma]$ consisting of a smooth projective $Q$-variety $X$, a normal crossing divisor $D$, top-form $\omega$, and singular cycle $\gamma$, as discussed in Kontsevich-Zagier.



Let $\omega: P \rightarrow C$ be the ring homomorphism obtained by integration, whose image is the ring of periods that you mention. You ask whether the image $\omega(P)$ could be a field.



If it were, then the induced map $Spec(C) \rightarrow Spec(P)$ would be a $C$-point of the scheme $Spec(P)$, whose image is a closed point of the scheme. As $P$ is the inductive limit of finitely-generated subrings $P_M$, we find that the induced map $Spec(C) \rightarrow Spec(P_M)$ has image a closed point of $Spec(P_M)$ for any motive $M$ which generates a sufficiently large ring of periods (or am I making a mistake about projective limits of affine varieties?).



This, incidentally, is opposite to the expectations of Grothendieck's period conjecture, which states that the image of $Spec(C) \rightarrow Spec(P_M)$ should be a generic point!



If the map $Spec(C) \rightarrow Spec(P_M)$ has image a closed point, then (since $Spec(P_M)$ is defined over $Q$), its image is a point defined over $\bar Q$. It follows that the dimension of the $Q$-Zariski closure of this point is zero.



But this dimension (zero) is equal (as remarked by Yves Andre in his paper "Galois Theory, Motives, and Transcendental Numbers") to the transcendence degree $TrDeg_Q[Per(M)]$, where $Per(M)$ is the set of periods of the motive $M$.



So, if one chooses a motive whose periods generate a transcendental extension of $Q$ (e.g., whose periods include $2 \pi i$), one should find a contradiction.



As for the second question... I'll try to say something when I'm not about to meet with students.

gt.geometric topology - Database of polyhedra

As part of many hobbies (origami, sculpting, construction toys) I often find myself building polyhedra from regular polygons. I am intimately familiar with all of the Archimedean and Platonic solids, and can construct most of the other isohedra, deltahedra, and Johnson solids from memory. The smaller prisms, antiprisms, and trapezohedra are of course trivial. However, I often forget the precise arrangement of faces and vertices for some of the Johnson solids and most of the Catalan solids. Thus, the question that I pose is this:



Where can I find the most complete, robustly indexed, and searchable database of polyhedra?



I would like to use such a database to answer, in short order, questions of the following nature:



Which solid is composed of exactly eight hexagons and six squares? Which solids are composed of fewer than 10 triangles, eight squares, and six hexagons? How many solids can be constructed with exactly 24 edges?
What solid with 12 vertices has the most edges (or faces)? etc...



I imagine that such a database does not exist and I am going to be forced to create one, so answers suggesting features for such a database (likely to be web-based) are welcome as well.
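To make the feature requests concrete, here is a toy sketch (in Python; the face-count data and the `find` interface are my own invention, covering just three solids as a stand-in for the full catalog) of the kind of indexed query I mean. Note that vertex and edge counts need not even be stored, since Euler's formula recovers them from the face inventory:

```python
# Toy "database": name -> {polygon side count: number of faces}
POLYHEDRA = {
    "truncated octahedron": {6: 8, 4: 6},   # 8 hexagons, 6 squares
    "cuboctahedron": {3: 8, 4: 6},          # 8 triangles, 6 squares
    "cube": {4: 6},
}

def vertices_edges(faces):
    """Recover (V, E) from a face inventory via Euler's formula V - E + F = 2."""
    F = sum(faces.values())
    # each edge is shared by exactly two faces
    E = sum(sides * count for sides, count in faces.items()) // 2
    V = 2 - F + E
    return V, E

def find(**query):
    """Names whose face counts match exactly, e.g. find(hexagons=8, squares=6)."""
    sides = {"triangles": 3, "squares": 4, "pentagons": 5, "hexagons": 6}
    return [name for name, faces in POLYHEDRA.items()
            if all(faces.get(sides[k], 0) == v for k, v in query.items())]

print(find(hexagons=8, squares=6))  # → ['truncated octahedron']
```

Queries on edge or vertex counts would then just filter on `vertices_edges` instead of the raw face inventory.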

pr.probability - Probability of Outcomes Algorithm

Suppose the probability of getting heads is $p_i$ for the $i$th coin.



You can easily (and economically) compute the probabilities of exactly $k$ heads using the recursive relation -



$H_{n,k}=p_nH_{n-1,k-1}+(1-p_n)H_{n-1,k}$




Explanation follows.



Let $H_{n,k}$ be the probability of getting exactly $k$ heads using the first $n$ coins. For answering the type of questions you want to solve, all you need is a list of $H_{n,k}$'s.



Note that $H_{n,k}=\sum_{e_i\in\{0,1\},\ \sum_{i=1}^n e_i=k}\ \prod_{i=1}^n p_i^{e_i}(1-p_i)^{1-e_i}$



The sum (as you mentioned) contains ${n \choose k}$ entries.



However note that $H_{n,k}=p_nH_{n-1,k-1}+(1-p_n)H_{n-1,k}$



So you can recursively build up the $H_{n,k}$'s, which should be simple since there are only a few of them. (To be precise, for $N$ coins, there are $N(N+3)/2$ many $H_{n,k}$'s, since $n\in \{1,\ldots,N\}$ and $k\in \{0,\ldots,n\}$.)



As a base for the recursive relation, you can use the following (obvious) identities.



  • $H_{n,k}=0$ for $k > n$

  • $H_{n,0}=\prod_{i=1}^n(1-p_i)$

  • $H_{n,n}=\prod_{i=1}^n p_i$
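The recursion and base cases above transcribe directly into code; here is a Python sketch (the function name is my own) that builds the full list $H_{N,0},\dots,H_{N,N}$ in $O(N^2)$ time instead of summing $\binom{n}{k}$ terms:

```python
def exact_heads_probs(p):
    """Return [H_{n,0}, ..., H_{n,n}] for coins with head-probabilities p.

    Uses H_{n,k} = p_n * H_{n-1,k-1} + (1 - p_n) * H_{n-1,k},
    with H_{0,0} = 1 as the base case (zero coins: surely zero heads).
    """
    H = [1.0]
    for pn in p:
        H = [(H[k] * (1 - pn) if k < len(H) else 0.0)
             + (H[k - 1] * pn if k > 0 else 0.0)
             for k in range(len(H) + 1)]
    return H

print(exact_heads_probs([0.5, 0.5, 0.5]))  # → [0.125, 0.375, 0.375, 0.125]
```

Only the previous row of $H$ values is kept at each step, so the memory cost is $O(N)$.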

Friday, 6 December 2013

health - What are the consequences if an astronaut's helmet gets damaged during a spacewalk?

Don't ever take what's portrayed in a scifi movie as fact. You can use the fingers on one hand to count the number of scifi movies that go out of their way to be faithful to science. "Mission to Mars" was not one of them. (Nor was Total Recall.)



Will you die if you take your helmet off in space? Of course. Your brain needs oxygen, as does the rest of your body. Without oxygen, you will go unconscious rather quickly. Then you'll die shortly after. You won't blow up. You won't instantly freeze. You might, however, get an ugly postmortem sunburn.



From this "Ask an Astrophysicist" page at NASA, Human Body in a Vacuum




You do not explode and your blood does not boil because of the containing effect of your skin and circulatory system. You do not instantly freeze because, although the space environment is typically very cold, heat does not transfer away from a body quickly. Loss of consciousness occurs only after the body has depleted the supply of oxygen in the blood. If your skin is exposed to direct sunlight without any protection from its intense ultraviolet radiation, you can get a very bad sunburn.


Thursday, 5 December 2013

pr.probability - Computing equivalent vector of random variables from covariance matrix

If $A$ is your target covariance matrix and $LL^T = A$, and $x = (x_1, \ldots, x_n)$ is a vector of independent random variables with mean zero and variance 1, then $y = Lx$ has the required covariance. Here $L$ is a matrix and $L^T$ is its transpose. $L$ can just be the Cholesky factor of $A$. (Check: $\mathrm{cov}(y) = E[yy^T] = E[(Lx)(Lx)^T] = E[Lxx^TL^T] = LE[xx^T]L^T$ (by linearity of expectation) $= L\mathrm{cov}(x)L^T = LIL^T = LL^T = A$. Here $\mathrm{cov}(y) = E[yy^T]$ because $y$ has mean 0, and likewise for $\mathrm{cov}(x)$.)



That's not too far from a "complete" solution, actually. If you start with a vector $y$ of random variables with mean zero and covariance matrix $A$, then if $A = LL^T$ and $x = L^{-1}y$, then $\mathrm{cov}(x) = I$. That doesn't necessarily imply that the components of $x$ are independent; it means they are uncorrelated. So the most general construction is to begin with a vector $x$ of uncorrelated random variables with mean zero and variance 1 and let $y=Lx$. (I only mean that every example can theoretically be obtained that way, not that it's necessarily the best or most computationally efficient way to do it.)
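A minimal numerical sketch of the construction with NumPy (the matrix $A$ below is an arbitrary example; `numpy.linalg.cholesky` computes the factor $L$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target covariance matrix (must be symmetric positive definite).
A = np.array([[2.0, 0.8],
              [0.8, 1.0]])

L = np.linalg.cholesky(A)          # lower-triangular L with L @ L.T == A

# Independent variables with mean 0 and variance 1 (here: standard normals).
x = rng.standard_normal((2, 200_000))
y = L @ x                          # y has covariance A (up to sampling error)

print(np.cov(y))                   # close to A
```

With a finite sample the empirical covariance only approximates $A$, which is why the sample size above is large.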

light - How much illumination do the background stars provide?

I'm trying to get a better understanding of what things look like in "outer space." The main problem I'm trying to solve right now, is how much illumination is provided by the background stars. Is it even perceivable? If we were to ignore the light contributions from our sun and local planets, how difficult would it be to see an object (within arm's reach) that was lit only by the remaining ambient light?



Another way this might be thought of is: how dark is the dark side of Pluto? If a person were able to stand on the surface of Pluto, would a moonless/Charonless night be totally pitch black, or would there be enough ambient light to see? How bright might that be?



EDIT: The example about Pluto is a little deceptive because Pluto will block a large portion of starlight. I'm thinking of the starlight that is neither blocked by nor supplemented by any of the bodies in our solar system.

Wednesday, 4 December 2013

velocity - What is the standard reference point for measuring speed?

As Einstein realized and like you correctly state, you indeed can't measure the speed of an object by itself since it has to be measured relative to something else.



As a logical consequence, if you ask the question "How fast is X moving?", you will have to specify that you want the speed with respect to another object, because motion cannot be measured without a reference point.



Some examples:



  • If you ask how fast Earth is moving with respect to its own axis, the center of the Earth will be your reference point.

  • If you want to know how fast the Earth orbits the Sun, the Sun will be your reference point.

  • If you want to know how fast the Moon's distance from Earth increases, Earth will be your reference point.

  • If you want to know the speed of our solar system in the Milky Way galaxy, the center of the Milky Way will be your reference point.

"standard" reference point



If no reference point is given, it could be assumed that the "standard reference point" is the location of the observer. In the realms of astronomy, that would be the location of the astronomer.



It should be noted that the astronomer could well be on a space mission outside Earth's atmosphere, or could be using a telescope in Earth's orbit. Therefore, "Earth" cannot generally be assumed to be a standard reference point for astronomy because, depending on the precision of the measurements and the location of (for example) the telescope, assuming "Earth" may result in inexact measurements.



EDIT



As @TidalWave correctly commented, there's also the International Celestial Reference System (ICRS), which can help you find reference points, calculate distances, etc. according to a celestial reference system that has been adopted by the International Astronomical Union (IAU) for high-precision positional astronomy. The origin of the ICRS resides at the barycenter of the solar system.



Wrapping it up:



If no reference point is defined, and if we can assume that the astronomer works according to the rules and definitions of the International Astronomical Union (which would be the regular, if not ideal, case), the International Celestial Reference System provides you with a "standard reference point" at its origin (which is the barycenter of the solar system).



In rare cases where IAU compliance can not be assumed (something that "might" happen in amateur realms), it has to be assumed that the reference point is the point of observation.