Saturday, 31 August 2013

solar system - Source of T Tauri wind?

If you want to trigger outflows, there are two main processes that can help:



  • magnetic field;

  • radiation pressure.

Magnetic fields are well suited to launching such outflows, through magneto-centrifugal effects. Basically, if there is a magnetic field anchored to the disk surrounding the protostar and the angle between the field lines and the disk is right, a particle from the disk that follows the magnetic field lines can be ejected. This is the well-known Blandford & Payne (1982) mechanism.



Note that, at earlier stages of star formation, the magnetic field can also trigger outflows through the growth of a "magnetic tower". Basically, during the collapse of the prestellar core into a protostar, the field lines are dragged and twisted (due to the rotation of the core), and the twist propagates from the inner to the outer region of the core, which triggers outflows. This kind of thing is discussed in Hennebelle & Fromang (2008).



By the way, these two mechanisms are complementary: slow outflows (of the order of 10 km s$^{-1}$) are observed around young protostars, whereas "fast outflows" -- jets, of the order of 100 km s$^{-1}$ -- are observed around late protostars, and these two mechanisms explain the former and the latter well, respectively.



For more massive stars, radiation pressure also plays a role in triggering outflows. The protostar radiates away part of its accretion energy, and a strong radiation pressure can trigger the growth of a kind of "bubble", which is an outflow (see Hennebelle & Commerçon (2012) for a brief review of the question).



Regarding your comment on your own question, it is mostly radiation pressure that will clear away the gas around the new-born star, which is clearly not an outflow or anything of the kind.

Typo/grammar checker for LaTeX

I would whole-heartedly agree with the first post. The best way to check your grammar is to have somebody else proofread your paper for you. For the sake of completeness I will add that there are two old Unix tools for checking writing: style and diction.



http://dsl.org/cookbook/cookbook_15.html#SEC220



I personally have never used them.



@Yoo
Removing LaTeX is fairly easy with sed, for instance, but there is a tool called detex which will do exactly that for you. However, it is not 100% successful, and I would still suggest that you proofread the resulting text document.
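For instance, a crude detex-like pass is only a few lines of Python regex. (A sketch only: these patterns are illustrative and will miss many LaTeX constructs, which is exactly why even detex isn't 100% successful.)

```python
import re

def strip_latex(text: str) -> str:
    """Crude LaTeX stripper in the spirit of detex/sed: removes comments,
    inline math, and commands, keeping the plain prose for a grammar checker."""
    text = re.sub(r"(?<!\\)%.*", "", text)                    # strip comments
    text = re.sub(r"\$[^$]*\$", "", text)                     # strip inline math
    text = re.sub(r"\\[a-zA-Z]+\*?(\[[^\]]*\])?", "", text)   # strip \commands, keep brace args
    return re.sub(r"[{}]", "", text)                          # drop leftover braces

print(strip_latex(r"We \emph{show} that $x^2$ grows."))
```

The command pattern deliberately keeps the text inside braces (so `\emph{show}` becomes `show`), mimicking what detex does for most text-level commands.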

Friday, 30 August 2013

higher category theory - Derived Algebraic Geometry and Chow Rings/Chow Motives

I don't know much about this stuff, so instead of answering the question, I'll try to formulate more precise questions, in the hope that someone else will take them up:



One of the main reasons to look for cycles is that they give realizations (their fundamental class) in all cohomology theories, and these realizations happen to have special properties (e.g., they are Hodge cycles or Tate cycles). Conversely, any time you see a Hodge (or Tate) cycle in cohomology, you expect that it comes from an algebraic cycle (the Hodge or Tate conjecture), and hence that similar phenomena occur in all cohomology theories (i.e., there is a Hodge (or Tate) cycle in all realizations).



Now, if the following were true:



1) Any 'derived algebraic cycle' gives rise to virtual fundamental classes in all cohomology theories, which again are Hodge or Tate cycles.



2) It is not clear that the virtual fundamental classes of 'derived algebraic cycles' are already fundamental classes of real algebraic cycles,



then one might formulate a 'derived' Hodge or Tate conjecture, which would have the same consequences.



Your question has another aspect, which regards a possible framework for working with these motives; I leave this aside as I understand even less about how this should work.

linear algebra - Matrices whose nullspace is nicely shaped

I'm looking for natural conditions on $a_{ij}$ to guarantee that the null space of the $n\times m$ matrix $A=(a_{ij})$ has a nice basis.



The null space of { {1,-2,1,0,0}, {0,1,-2,1,0}, {0,0,1,-2,1} } is the set of vectors $\langle x_1,x_2,x_3,x_4,x_5\rangle^T$ with $x_1,\dots,x_5$ in arithmetic progression or constant, i.e., there is a degree zero or one polynomial $p(t)$ with $x_i = p(i)$. The null space of { {3,3,-23,21,-4}, {6,3,-38,36,-7} } consists of points for which there is an at-most-quadratic $p(t)$ with $x_1=p(1), x_2=p(2), x_3=p(3), x_4=p(4), x_5=p(6)$, with that last 6 not being a typo.



In particular, I need a basis for the null space of the form $\{\langle 1,1,\dots,1\rangle^T,\langle x_1,\dots,x_m\rangle^T, \dots, \langle x_1^{m-n-1},\dots,x_m^{m-n-1}\rangle^T\}$, with the $x_i$ distinct (not necessarily integers).



As another specific example, consider the matrix { {3,-3,1,0,-1}, {20,-16,5,-9,0} }. I happen to know that the null space of this matrix has basis $\langle 1,1,1,1,1\rangle^T, \langle 1,4,7,-1,-2 \rangle^T, \langle 1^2,4^2,7^2,(-1)^2,(-2)^2\rangle^T$, but only because I made the matrix that way. Even with a specific matrix such as this, I don't know how to compute such a basis, or to guarantee that one exists or doesn't exist.
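For a matrix like this one, the claim is at least easy to check numerically; here is a quick sketch with numpy, treating the candidate basis as the columns of a Vandermonde-type matrix in the nodes $1,4,7,-1,-2$:

```python
import numpy as np

A = np.array([[3, -3, 1, 0, -1],
              [20, -16, 5, -9, 0]], dtype=float)

nodes = np.array([1, 4, 7, -1, -2], dtype=float)
V = np.vander(nodes, N=3, increasing=True)  # columns: 1, x, x^2

# Every candidate basis vector must be annihilated by A...
assert np.allclose(A @ V, 0)
# ...and the three columns are independent (distinct nodes => rank 3),
# which matches dim null(A) = 5 - rank(A) = 5 - 2 = 3.
assert np.linalg.matrix_rank(V) == 3 and np.linalg.matrix_rank(A) == 2
print("Vandermonde basis verified")
```

Of course this only verifies a given basis; it doesn't address the harder question of deciding whether such a basis exists.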



Here are the obvious necessary conditions: the rows must be independent; each row must add up to 0; no row can have exactly two nonzero components.



As a specific problem (I've no interest in this as a particular problem, mind you, but it may help the discussion) consider the matrix { {35,-3,-42,10,0}, {15,3,-8,0,-10} }. Does it have such a basis?



For background, I'm looking at constructions of sets $X$ of integers that contain no solutions to a system of linear equations. Such a basis as above means that a solution has the $x_i$ in the image of a polynomial, and I already know how to construct sets that avoid those (arithmetic progressions are a special case).

Is it possible to extend the life of a star?


If something crashed into it rearranging its content a bit, could its life be extended?




Yes! This is precisely what we think causes the creation of blue stragglers.



When we look at a cluster of stars, we expect them to all be of roughly the same age. This is usually borne out by observations. Because larger stars evolve faster, the age of a cluster can then be roughly determined by working out the mass of the most massive star still on the main sequence (i.e. still burning hydrogen in the core). We call this the main-sequence turnoff.



But in some clusters, we see a handful of stars that are more massive but still on the main sequence, even though all the evidence says they should have already evolved into giants or other more-evolved objects. The consensus view is that these stars are at the very least interacting with binary companions, and many are thought to be merger products, i.e. something left over from two stars merging into one. Either two companions in a binary have orbited so close that they've become one star, or two stars have simply collided (which is quite possible in the dense cores of clusters). Depending on what types of star collide, you can basically get a new star where everything is mixed up again, allowing the star to be "born again".

Thursday, 29 August 2013

planet - Doesn't gravity attract objects in space until they collide?

There are other factors at work, but not any other forces.



You need to take into account not only the force, and thus the acceleration, but also the current velocity of a body orbiting another.



To put it simply: if you swing a ball tied to a rope around your head, the only forces are the tension of the rope and gravity towards the floor. Ignoring gravity, the only force is the tension of the rope, but the tension alone does not make the ball orbit your head; it is the tension combined with the speed you gave the ball that makes it orbit.



Gravity in an orbit, like the rope, causes the already-moving object to curve its otherwise straight trajectory into an ellipse or circle, not to fall to the center.
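A minimal numerical sketch of this point (units chosen so that $GM = 1$; the time step and step count are arbitrary choices): give a body at radius 1 exactly the circular-orbit speed, integrate Newtonian gravity, and the radius stays put instead of decaying toward the center.

```python
import math

# A point mass with purely tangential speed under an inverse-square central
# force (units with GM = 1). With the circular-orbit speed v = sqrt(GM/r) = 1,
# it keeps curving around the centre instead of falling in.
GM = 1.0
x, y = 1.0, 0.0          # start at r = 1
vx, vy = 0.0, 1.0        # tangential speed = circular-orbit speed
dt = 1e-3

def accel(px, py):
    r3 = (px * px + py * py) ** 1.5
    return -GM * px / r3, -GM * py / r3

for _ in range(10_000):   # integrate to t = 10, roughly 1.6 orbits
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # leapfrog (kick-drift-kick)
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

r = math.hypot(x, y)
print(f"radius after integration: {r:.6f}")   # stays ~1: an orbit, not a fall
```

Setting the initial speed to zero instead would make the body fall straight toward the center, which is exactly the difference the answer is describing.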

Is there any difference between parallel universes and multiverse?

In my own mind the multiverse could consist of either multiple universes occupying the same "space" (as in different dimensions) and/or different areas of space where what we consider to be our own universe is only one "bubble", outside of which are an infinite number of other bubbles, possibly created by their own Big Bangs.



The first variant above is what I would refer to as parallel universes, i.e. not the "bubbles". Or to put it another way, The different universe instances in the multiverse could be parallel.



Note that my understanding of current hypotheses and theories in this area is very limited. These are just my own thoughts.

exoplanet - What is the process for vetting the 4,175 candidate planets Kepler has discovered?

Planet "candidates" are Kepler Objects of Interest (KOIs) that have a transit-like light curve and have passed a number of observational tests. They are candidates because, although they do show a transit in the light curve of the star in question, there is no independent confirmation of a planetary mass.



One problem to overcome is that of "false positives". There are other astrophysical phenomena besides planets that can mimic an exoplanetary transit in a Kepler light curve. For instance a grazing incidence stellar binary can produce what looks like a transit in a light curve; so too can a chance alignment of a star with a background eclipsing binary star. There are also a number of instrumental anomalies, such that a signal in one part of a CCD or the Kepler field can produce a "ghost" at another position that might look like a transit.



If you can obtain detailed spectroscopy including accurate radial velocity measurements you can usually rule out most of these false positives, largely by getting a mass constraint for the companion.



Even where you have a very clean transit signal and can estimate a planetary radius, a further problem is that a wide range of masses produce objects with very similar radii, i.e. more massive brown dwarfs have very similar radii to exoplanets. Again, only a mass estimate, obtained either through radial velocity measurements or sometimes through "transit timing variations" if the object is in a multiple-exoplanet system, can settle the issue.



Now the problem with the KOIs is that many of these stars are way too faint ($V>14$) for the kind of detailed spectroscopy that is easily possible on the much brighter exoplanet host stars found in, say, the HATNET or WASP ground-based surveys of bright stars.



So, what you can do is tackle the problem statistically, by identifying the kinds of false positives you might have, quantifying their influence, and throwing away suspect objects (see for instance Batalha 2012). Section 4 of that paper describes in detail some of the tests that are done: comparing the depths of odd- and even-numbered transits to look for asymmetries that would indicate grazing-incidence binaries; looking between transits for a secondary eclipse that would also indicate a stellar companion; examining the shape of the transit, which is diagnostic but cannot easily rule out a planetary candidate on its own; and searching for motion of the "photocenter" of the source -- if the photocenter moves during a transit, it could indicate a background, diluted, eclipsing-binary light curve.
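As an illustration of the first of those tests, here is a toy version of the odd/even depth comparison. (A sketch only: the function name and threshold are hypothetical, and the real Kepler pipeline does this statistically, with proper depth uncertainties.)

```python
def odd_even_depth_flag(depths, threshold=0.1):
    """Flag a candidate if odd- and even-numbered transit depths differ by
    more than `threshold` (fractionally): a signature of an eclipsing binary
    whose primary and secondary eclipses alternate in the folded light curve.
    Illustrative toy version only."""
    odd = depths[0::2]               # transits 1, 3, 5, ...
    even = depths[1::2]              # transits 2, 4, 6, ...
    mean_odd = sum(odd) / len(odd)
    mean_even = sum(even) / len(even)
    diff = abs(mean_odd - mean_even) / max(mean_odd, mean_even)
    return diff > threshold

print(odd_even_depth_flag([1.00, 1.01, 0.99, 1.00]))  # consistent depths -> False
print(odd_even_depth_flag([1.0, 0.5, 1.0, 0.5]))      # alternating depths -> True
```

A true planet produces the same depth every transit; a binary masquerading at half its true period does not, which is what this comparison catches.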



An early paper by Morton & Johnson (2011) claimed, on the basis of a population-synthesis approach, that astrophysical false positives were limited to less than 10%. However, a more recent paper by Coughlin et al. (2014) discusses how instrumental effects can be tested for by comparing the transit periods with the periods of other known objects in the Kepler field of view. They claim that around 30% of the KOIs may in fact be false positives. Either way, it looks like the large majority of the KOIs are indeed exoplanets, but identifying which ones aren't will require detailed follow-up.

rotation - Do solstices and equinoxes shift over time?

The Gregorian Calendar was created so that annual astronomical events, specifically the vernal equinox (used to determine when Easter is), would on average keep their places in the calendar year over time. It is the best official approximation to the definition of the tropical year, which is defined as "the length of time that the Sun takes to return to the same position in the cycle of seasons". Because this calendar describes 97 leap years out of every 400 years, it defines the average year as exactly 365.2425 solar days, or exactly 365 days, 5 hours, 49 minutes, and 12 seconds.



However, the mean tropical year is in reality about 365 days, 5 hours, 48 minutes, and 45 seconds, or 27 seconds shorter.



Because the Gregorian Calendar is based on the tropical year, the calendar dates of the year will keep up with the solstices and equinoxes, and thus the seasons. If this calendar were exactly the length of the tropical year, then the calendar would keep the vernal (northward) equinox around March 20th for all time.



But because of the slight inaccuracy, it will take about 3,200 years (60 s/min * 60 min/hr * 24 hr/day / 27 s/year) for these 27 seconds per year to add up to one full day, which will result in the solstices and equinoxes marching backwards through the calendar by 1 day every 3,200 years or so, depending on the accuracy of the 27-second difference. This very slow shift is due to the slight inaccuracy of the Gregorian calendar in matching, on average, the tropical year, not to the precession of the equinoxes.
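The arithmetic in that parenthesis can be written out explicitly (just a sketch of the calculation already described in the text):

```python
# The Gregorian mean year overshoots the mean tropical year by ~27 s,
# so one full day of drift accumulates every ~3,200 years.
gregorian_year = 365 * 86400 + 5 * 3600 + 49 * 60 + 12   # 365.2425 days, in seconds
tropical_year  = 365 * 86400 + 5 * 3600 + 48 * 60 + 45   # mean tropical year, in seconds
excess = gregorian_year - tropical_year                   # 27 s per year
years_per_day = 86400 / excess                            # years until drift = 1 day
print(f"{excess} s/yr -> one day of drift every {years_per_day:.0f} years")
```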



3,200 years from now, if the Gregorian Calendar is still used, the date of the vernal (northward) equinox will be on average one day earlier in March. The precession of the equinoxes will still occur, so the Earth's axis tilt will be significantly different from today. The Earth will be at a noticeably different position with respect to the Sun on the vernal (northward) equinox from where it is today, in 2014, on the vernal (northward) equinox, but it will still be in March.



This inaccuracy may very slowly increase over time, because according to the Wikipedia page for the tropical year, the tropical year is very slowly getting shorter, and the mean solar day is even more slowly getting longer. But for 10,000 years to come, the Gregorian Calendar will keep the vernal (northward) equinox in March, even if it slowly shifts earlier in the month.



This is in contrast to the scenario that you imply, where the calendar date would correspond to the relative position of the Earth in its orbit around the Sun. That describes the sidereal year, the time taken for the Sun to reach the same spot in the sky relative to the stars, which is 365 days, 6 hours, 9 minutes, and 10 seconds. A sidereal calendar would explain why you might think that precession would cause the dates of equinoxes and solstices to change in the calendar year. That would result in a shift in the calendar of one full month in 1/12th the cycle length of the precession of the equinoxes, or about 1 full month about every 2,000 years.

Wednesday, 28 August 2013

ag.algebraic geometry - Obstructions to descend Galois invariant cycles

Let us keep the notations from above, and let's write $G:=\mathrm{Gal}(E/F)$. Let me quickly recall the origin of the Brauer obstruction: it really comes from the Hochschild-Serre spectral sequence



$$H^p(G,E^q(X_E,\mathbf{G}_m))\Longrightarrow E^{p+q}(X,\mathbf{G}_m)$$



(I'm writing $E^{\ast}=H^{\ast}_{\mathrm{et}}$ for étale cohomology here, because the system doesn't seem to like too many subscripts.) If we analyze this in low degrees, this gives us the following classical exact sequence (for any $F$-variety $X$):



$$0\to H^1(G,E^0(X_E,\mathbf{G}_m))\to\mathrm{Pic}(X)\to H^0(G,\mathrm{Pic}(X_E))\to H^2(G,E^0(X_E,\mathbf{G}_m))\to\ker\left[\mathrm{Br}(X)\to\mathrm{Br}(X_E)\right]$$
$$\to H^1(G,\mathrm{Pic}(X_E))\to H^3(G,E^0(X_E,\mathbf{G}_m))$$



So we'd like to generalize the sequence above to the situation of $\mathrm{CH}^n(X)=H^n(X,\mathcal{K}_n)$ (where $\mathcal{K}_n$ is the Zariski-sheafification of the presheaf $K_n$). Assume $X$ geometrically regular here. The Gersten resolution of $\mathcal{K}_n$ on $X_E$ is the complex



$$C^{\bullet}(X_E)\colon\quad K_nk(X_E)\to\bigoplus_{x\in X_E^1}K_{n-1}k(x)\to\bigoplus_{x\in X_E^2}K_{n-2}k(x)\to\cdots\to\bigoplus_{x\in X_E^{n-1}}K_1k(x)\to\bigoplus_{x\in X_E^n}K_0k(x)$$



There's a similar complex $C^{\bullet}(X)$ giving the Gersten resolution of $\mathcal{K}_n$ on $X$. We regard the complex $C^{\bullet}(X_E)$ as a $G$-complex; write $\sigma$ for the map



$$C^{n-1}(X_E)=\bigoplus_{x\in X_E^{n-1}}k(x)^{\times}=\bigoplus_{x\in X_E^{n-1}}K_1k(x)\to\bigoplus_{x\in X_E^n}K_0k(x)=Z^n(X_E)$$



of $G$-modules. I want to argue that the kernel of this map plays the role of $E^0(X_E,\mathbf{G}_m)$ for higher $n$. (When $n=1$, this kernel coincides with $E^0(X_E,\mathbf{G}_m)$.)



Now we might hope for a spectral sequence



$$H^p(G,H^q(C^{\bullet}(X_E)[n]))\Longrightarrow H^{p+q}(C^{\bullet}(X)[n])$$



but of course $K$-theory doesn't quite satisfy Galois descent, so we don't have this convergence ($C^{\bullet}(X)[n]$ is not the homotopy fixed-point complex of $C^{\bullet}(X_E)[n]$). But we're only trying to analyze a very small piece of this spectral sequence -- the piece involving $\sigma$. For that, Hilbert's Theorem 90 does the work, and we get the following exact sequence:



$$0\to H^1(G,\ker\sigma)\to\mathrm{CH}^n(X)\to H^0(G,\mathrm{CH}^n(X_E))\to H^2(G,\ker\sigma)\to\ker\left[H^2\left(G,C^{n-1}(X_E)\right)\to H^2(G,Z^n(X_E))\right]$$
$$\to H^1(G,\mathrm{CH}^n(X_E))\to H^3(G,\ker\sigma)$$



So we find an obstruction to descending cycles of codimension $n$ in $H^2(G,\ker\sigma)$. Is this the sort of thing you had in mind?

Tuesday, 27 August 2013

homological algebra - How does one get the short exact sequence in a two-column spectral sequence?

This follows precisely from the very definition of convergence of the spectral sequence, once one has identified the $\infty$-term. It is done with some details in McCleary's User's Guide---which is, in my opinion, a very good reference for both the technicalities and the pragmatics of dealing with spectral sequences.



Now, if you are starting with a two column double complex (as opposed to starting with an arbitrary double complex whose spectral sequence has two contiguous columns), you can get the short exact sequences very much 'by hand'.



Indeed, suppose your double complex is $T^{\bullet,\bullet}=(T^{p,q})_{p,q\geq0}$ and that $T^{p,q}\neq0$ only if $p\in\{0,1\}$. If we define complexes $X^\bullet$ and $Y^\bullet$ with $X^q=T^{0,q}$ and $Y^q=T^{1,q}$, with differentials coming from the vertical differential $d$ of $T^{\bullet,\bullet}$, then the horizontal differential of $T^{\bullet,\bullet}$ can be seen as a map of complexes $\delta:X^\bullet\to Y^{\bullet}$.



Now, in the spectral sequence induced by the filtration by columns we clearly have $E_1^{0,q}=H^q(X^\bullet)$, $E_1^{1,q}=H^q(Y^\bullet)$, and the differential on the $E_1$ page is induced by the horizontal differential in $T^{\bullet,\bullet}$. In other words, the $E_1$ page is more or less the same thing as the map $H(\delta):H(X^\bullet)\to H(Y^\bullet)$. It follows that we have short exact sequences $$0\to E_2^{0,q}\to H^q(X^\bullet)\xrightarrow{H^q(\delta)} H^q(Y^\bullet)\to E_2^{1,q}\to 0,$$ and the spectral sequence dies at the second act for degree reasons.



On the other hand, there is a short exact sequence of complexes $$0\to Y[-1]^\bullet\to\mathrm{Tot}\;T^{\bullet,\bullet}\to X^\bullet\to 0,$$
from which we get a long exact sequence $$\cdots\to H^{q-1}(Y^\bullet)\to H^q(\mathrm{Tot}\; T^{\bullet,\bullet})\to H^q(X)\to H^q(Y)\to\cdots,$$ in which you can compute directly that the map $H^q(X)\to H^q(Y)$ is precisely $H^q(\delta)$. Since the first four-term exact sequence identifies for us the kernel and the cokernel of $H^q(\delta)$, exactness of the second exact sequence provides the short exact sequence $$0\to E_2^{1,q-1}\to H^q(\mathrm{Tot}\; T^{\bullet,\bullet})\to E_2^{0,q}\to 0$$ that you wanted.



(It is an extremely instructive exercise to try to see what can one do in this spirit for a three-column double complex, and fighting with this is a great prelude to an actual exposition to spectral sequences...)

Monday, 26 August 2013

impact - How would a planetary defence against comets work?

They used several Mars and earth gravity assists to raise Rosetta's aphelion to around 5 AU. The comet's aphelion is also in that ballpark. Comets from Jupiter's neighborhood (about 5 AU) are moving at about 40 km/s when they're in our neck of the woods (the earth moves at about 30 km/s). A comet moving at 40 km/s wrt the sun at 1 AU can be moving anywhere from 70 to 10 km/s wrt the earth, depending on inclination. A head-on collision would be 30 + 40. If the comet were moving in the same direction as the earth, it'd be 40 - 30.



A comet from the Kuiper Belt would be moving just a hair under escape velocity when in our neighborhood. Escape velocity is sqrt(2) times circular velocity, so a comet from the outer system would be moving at about 42 km/s wrt the sun. The velocities wrt the earth could range from 72 to 12 km/s. (The Kuiper Belt is 30 to 50 AU from the sun.)
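These numbers are easy to reproduce (a back-of-envelope sketch using the rounded 30 km/s figure from the text):

```python
import math

# Escape speed is sqrt(2) times circular speed at the same distance, and the
# encounter speed with Earth ranges between the sum and the difference of the
# two heliocentric speeds, depending on the encounter geometry.
v_earth = 30.0                        # Earth's circular speed, km/s (rounded)
v_comet = math.sqrt(2) * v_earth      # near-escape speed at 1 AU: ~42 km/s
head_on = v_comet + v_earth           # ~72 km/s
overtaking = v_comet - v_earth        # ~12 km/s
print(f"comet: {v_comet:.0f} km/s; range wrt Earth: {overtaking:.0f} to {head_on:.0f} km/s")
```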



A comet from the Oort Cloud would also be moving a hair under escape velocity, 42 km/s in our neighborhood. But its aphelion would be much higher, up to 50,000 AU. Presumably many of the non-periodic comets are from the Oort Cloud. Click the comet links in this list and you can compare discovery date with last perihelion. For example, Comet C/2011 W3 (Lovejoy) was discovered November 27, 2011, and its 0.55 AU perihelion occurred December 16, 2011.



In JPL's description of the LINEAR NEO search program, they say "… with most of the efforts going into searching along the ecliptic plane where most NEOs would be expected." But objects from the Oort Cloud don't tend to lie near the ecliptic plane. In my opinion, a high-inclination comet from the Oort Cloud with our name on it might escape undetected until it was way too late to do anything about it. But an Oort comet with our name on it is very unlikely.

Sunday, 25 August 2013

big picture - Given a "model", how many theories may it be a model of?

You are using the terms "model" and "theory" in an idiosyncratic way.



In model theory, a model is a first-order structure, that is, a set with some functions, relations and perhaps distinguished elements, called constants. A theory, in contrast, is a collection of assertions, a set of sentences in this language. A given theory, which can be thought of as a set of axioms in the sense that you mentioned, can give rise to many models. And indeed, the Löwenheim-Skolem theorem says that if a theory has an infinite model, then it has infinite models of arbitrarily large cardinality. (Thus, except in trivial cases one cannot uniquely specify a model by giving "some (countable) number of axioms" as you said, since the same axioms will have models of many different sizes.)



Suppose that M is a model in a language of size κ, meaning that the language has κ many possible assertions. In this case, since any given assertion is either true in M or its negation is true in M, the complete theory of M, that is, the set Th(M) consisting of all sentences true in M, will also have size κ. Any subset S of Th(M) will also be true in M, of course. Thus, there are 2^κ many theories true in M. For example, if the language has countably many symbols in it, then any given model in this language will satisfy continuum many (2^ω many) theories.



But this answer counts theories as different, when they are different merely as sets of sentences, even when these theories have the same models. But for the purposes of counting theories, it may be more sensible to use another common definition of theory, which is a set of sentences closed under consequence. This amounts to identifying theories that have the same models.



With this second understanding of theory, the answer is a little more subtle. In the empty language, for example, every model is just a naked set, with no structure. There are exactly countably many countable models in this language: one of each finite size and one countably infinite model. If φ_n is the assertion that there are exactly n objects, then for any set A of natural numbers, we may form the theory T_A, which asserts ¬φ_n for each n in A. These theories are all inequivalent, and all true in any infinite model. If M is any model, then there are continuum many theories T_A that are true in M.



This shows that in fact every model M, in any language, satisfies at least continuum many deductively closed theories.



If the language is larger, with uncountable size κ, then either there are uncountably many relation symbols, uncountably many function symbols, or uncountably many constant symbols. In each case, it is a fun exercise to form 2^κ many inequivalent theories T in the language. Given any model M, let σ be any sentence false in M. For any theory T containing σ, we may form the theory T' = { σ implies φ | φ in T }. This theory is true in M, since σ is false in M. Thus, by counting theories in this manner, one can show that there are 2^κ many inequivalent theories true in M.

Saturday, 24 August 2013

size - How thick can planetary rings be?

There is an explanation for why rings flatten out here. The general mechanism is that particles collide and acquire a very uniform momentum. Thus, any set-up giving unusually thick rings is in essence "cheating".



Here are some ways:



Moons can cause spiral waves in the rings, giving them more structure in the z direction. The ones known in Saturn's rings have a modest amplitude of just 10-100 m, but larger moons can easily increase that.



Another way is simply having massive rings. Then they cannot get more flattened, as there is no more empty space to remove.



A ring tilted relative to the planet's orbit around the star is going to experience tidal forces, as long as the radius of the rings is some notable fraction of the planet's orbital radius. From the context that sparked the question, that is not a suitable mechanism though, nor is having a density so low that particle collisions are rare.



However, more promising:



The halo ring of Jupiter is estimated to be around 12,500 km thick (about the same as the diameter of the Earth), and is very fine dust kept from condensing into a disc both by the magnetic fields of Jupiter and by interactions with the Galilean moons.



We have four planets with rings in the solar system, so the sample size is quite small. Applying some small-sample-size statistical methodology, in this case an unusual application of the German Tank Problem, we can give a rough but realistic maximum thickness of a ring:



$$N \approx m+\frac{m}{k}-1$$



where $m$ is the highest observed value, and $k$ the sample size.



Modified slightly to get a non-integer version that makes some sense, we get:



$$\max_{\text{thickness}} \approx 12500\,\mathrm{km}+\frac{12500\,\mathrm{km}}{4} \approx 16000\,\mathrm{km}$$



By no means a very certain limit, but at least roughly what we can obtain from what we know.
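The estimate above is a one-liner (a sketch; `german_tank_estimate` is just a name for the frequentist formula quoted in the text):

```python
def german_tank_estimate(m, k):
    """Frequentist German Tank estimate: N ~ m + m/k - 1,
    with m the largest observed value and k the sample size."""
    return m + m / k - 1

# Largest known ring thickness ~12,500 km (Jupiter's halo ring), 4 ringed planets:
print(german_tank_estimate(12_500, 4))   # ~15,600 km, i.e. roughly the 16,000 km quoted
```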

Friday, 23 August 2013

star - Yellow object between δ Ori and η Ori

Can someone perhaps assist me in identifying the yellow object between δ Ori and η Ori? I took an image of this object, but I can't identify it.



It can be seen on the image below, hosted on the Eagle Creek Obs. site. (http://www.eaglecreekobservatory.org/eco/doubles/images/ori.gif)



It's the orange object on the constellation line between 28 and 34.



Orion Constellation

gt.geometric topology - Can we decompose Diff(MxN)?

The homotopy-type of the group of diffeomorphisms of a manifold are fairly well understood in dimensions $1$, $2$ and $3$. For a sketch of what's known see Hatcher's "Linearization in three-dimensional topology," in: Proc. Int. Congress of. Math., Helsinki, Vol. I (1978), pp. 463-468.



Similarly, the finite subgroups of $Diff(M)$ are well understood in dimensions $3$ and lower. Hatcher's paper is a good reference for that as well, when combined with a few semi-recent theorems.



If you're interested in general subgroups of $Diff(M)$, there's still a fair bit of discussion going on just for subgroups of $Diff(S^1)$, as it contains a pretty rich collection of subgroups.



In high dimensions there's not much known. For example, nobody knows if $Diff(S^4)$ has any more than two path-components. See for example this little blurb. Some of the rational homotopy groups of $Diff(S^n)$ are known for $n$ large enough.



I wrote a survey on what's known about the spaces $Diff(S^n)$, and spaces of smooth embeddings of one sphere in another $Emb(S^j,S^n)$ a few years ago, here.



Getting back to your earlier question, groups of diffeomorphisms of connect-sums can be pretty complicated objects. In dimension $2$ it's already interesting. For example, $Diff(S^1 \times S^1)$ has the homotopy-type of $S^1 \times S^1 \times GL_2(\mathbb Z)$. Diff of a connect-sum of $g$ copies of $S^1 \times S^1$ has the homotopy-type of a discrete group provided $g>1$; this is called the mapping class group of a surface of genus $g$. It's a pretty complicated and heavily-studied object. In the genus $g=2$ case this group is fairly similar to the braid group on $6$ strands.



In dimension $3$, it's an old theorem of Hatcher's that $Diff(S^1 \times S^2)$ doesn't have the homotopy-type of a finite-dimensional CW-complex, as it has the homotopy-type of $O_2 \times O_3 \times \Omega SO_3$. I've been spending a lot of time recently studying the homotopy-type of $Diff(M)$ when $M$ is the complement of a knot in $S^3$, and knot complements in general. The paper of mine I linked to goes into some detail on this.



From the perspective of differential geometry, the homotopy-type of $Diff(S^n)$ is rather interesting, as it's closely related to the homotopy-type of the space of "round Riemannian metrics" on $S^n$. This is a classic construction, outlined in my paper, but it goes like this: $Diff(S^n)$ has the homotopy-type of a product $O_{n+1} \times Diff(D^n)$, where the diffeomorphisms of $D^n$ are required to be the identity on the boundary -- this is a local linearization argument. $Diff(D^n)$ has the homotopy-type of the space of round metrics on $S^n$. The idea is that any two round metrics are related by a diffeomorphism of $S^n$. So $Diff(S^n)$ acts transitively on the space of round metrics (with a fixed volume, say), and the stabilizer of a round metric is $O_{n+1}$, basically by the definition of a round metric. Kind of silly but fundamental.

math philosophy - Lawvere's "Some thoughts on the future of category theory."

The notion of a "category of Being" that Lawvere discusses there is the notion that more recently he has been calling a category of cohesion. I'll try to illuminate a bit of what's going on.



I'll restrict to the case that the category is a topos and say cohesive topos for short. This is a topos that satisfies a small collection of simple but powerful axioms that are supposed to ensure that its objects may consistently be thought of as geometric spaces built out of points that are equipped with "cohesive" structure (for instance topological structure, or smooth structure, etc.). So the idea is to axiomatize big toposes in which geometry may take place.



Further details and references can be found here:



http://nlab.mathforge.org/nlab/show/cohesive+topos .



Let's walk through the article:



One axiom on a cohesive topos $\mathcal{E}$ is that the global section geometric morphism $\Gamma : \mathcal{E} \to \mathcal{S}$ to the given base topos $\mathcal{S}$ has a further left adjoint $\Pi_0 := \Gamma_! : \mathcal{E} \to \mathcal{S}$ to its inverse image $\Gamma^{\ast}$, which I'll write $\mathrm{Disc} := \Gamma^{\ast}$, for reasons discussed below. This extra left adjoint has the interpretation that it sends any object $X$ to the set $\Pi_0(X)$ "of connected components". What Lawvere calls a connected object in the article (p. 4) is hence one that is sent by $\Pi_0$ to the terminal object.



Another axiom is that $\Pi_0$ preserves finite products. This implies by the above that the collection of connected objects is closed under finite products. This appears on page 6. What he mentions there with reference to Hurewicz is that given a topos with such a $\Pi_0$, it becomes canonically enriched over the base topos in a second way, a geometric way.



I believe that this, like various other aspects of cohesive toposes, lives up to its full relevance as we make the evident step to cohesive $\infty$-toposes. More details on this are here



http://nlab.mathforge.org/nlab/show/cohesive+(infinity,1)-topos



(But notice that this, while inspired by Lawvere, is not due to him.)



In this more encompassing context the extra left adjoint $\Pi_0$ becomes $\Pi_\infty$, which I just write $\Pi$: it sends, one can show, any object to its geometric fundamental $\infty$-groupoid, for a notion of geometric paths intrinsic to the $\infty$-topos. The fact that this preserves finite products then says that there is a notion of concordance of principal $\infty$-bundles in the $\infty$-topos.



The next axiom on a cohesive topos says that there is also a further right adjoint $\mathrm{coDisc} := \Gamma^! : \mathcal{S} \to \mathcal{E}$ to the global section functor. This makes in total an adjoint quadruple



$$
(\Pi_0 \dashv \mathrm{Disc} \dashv \Gamma \dashv \mathrm{coDisc}) :=
(\Gamma_! \dashv \Gamma^* \dashv \Gamma_* \dashv \Gamma^!) : \mathcal{E} \to \mathcal{S}
$$



and another axiom requires that both $\mathrm{Disc}$ and $\mathrm{coDisc}$ are full and faithful.



This is what Lawvere is talking about from the bottom of p. 12 on. The downward functor that he mentions is $\Gamma : \mathcal{E} \to \mathcal{S}$. This has the interpretation of sending a cohesive space to its underlying set of points, as seen by the base topos $\mathcal{S}$. The left and right adjoint inclusions to this are $\mathrm{Disc}$ and $\mathrm{coDisc}$. These have the interpretation of sending a set of points to the corresponding space equipped with either discrete cohesion or codiscrete (indiscrete) cohesion. For instance, in the case that cohesive structure is topological structure, this will be the discrete topology and the indiscrete topology, respectively, on a given set. Being full and faithful, $\mathrm{Disc}$ and $\mathrm{coDisc}$ hence make $\mathcal{S}$ a subcategory of $\mathcal{E}$ in two ways, though only the image of $\mathrm{coDisc}$ will also be a subtopos, as he mentions on page 7.
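A finite toy model may help make the quadruple $(\Pi_0 \dashv \mathrm{Disc} \dashv \Gamma \dashv \mathrm{coDisc})$ concrete. Treating "spaces" naively as finite graphs over a base of finite sets (an illustration only, not the actual topos-theoretic setup; all names here are mine), the four functors look like this:

```python
def pi0(V, E):
    """Pi_0: the set of connected components (the extra left adjoint)."""
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for a, b in E:
        parent[find(a)] = find(b)
    return {frozenset(w for w in V if find(w) == find(v)) for v in V}

def gamma(V, E):
    """Gamma: the underlying set of points."""
    return set(V)

def disc(S):
    """Disc: discrete cohesion -- points with no connections."""
    return set(S), set()

def codisc(S):
    """coDisc: codiscrete cohesion -- every pair of points connected."""
    return set(S), {(a, b) for a in S for b in S if a != b}

S = {1, 2, 3}
print(len(pi0(*disc(S))))      # 3: a discrete space has one component per point
print(len(pi0(*codisc(S))))    # 1: a codiscrete space is connected
print(gamma(*codisc(S)) == S)  # True: Gamma recovers the underlying set of points
```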



(This has, by the way, an important implication that Lawvere does not seem to mention: it implies that we are entitled to the corresponding quasi-topos of separated presheaves, induced by the second topology that is induced by the subtopos. That, one can show, may be identified with the collection of concrete sheaves, hence concrete cohesive spaces (those whose cohesion is indeed supported on their points). In the case of the cohesive topos for differential geometry, the concrete objects in this sense are precisely the diffeological spaces.)



He calls the subtopos given by the image of $\mathrm{coDisc} : \mathcal{S} \to \mathcal{E}$ that of "pure Becoming" further down on p. 7, whereas the subcategory of discrete objects he calls that of "non-Becoming". The way I understand this terminology (which may not be quite what he means) is this:



whereas any old $\infty$-topos is a collection of spaces with structure, a cohesive $\infty$-topos comes with the extra adjoint $\Pi$, which I said has the interpretation of sending any space to its path $\infty$-groupoid. Therefore there is an intrinsic notion of geometric paths in any cohesive $\infty$-topos. This notably allows one to define parallel transport along paths and higher paths, hence a kind of dynamics. In fact there is differential cohomology in every cohesive $\infty$-topos.



Now, in a discrete object there are no non-trivial paths (formally because $\Pi \; \mathrm{Disc} \simeq \mathrm{Id}$ by the fact that $\mathrm{Disc}$ is full and faithful), so there is "no dynamics" in a discrete object, hence "no becoming", if you wish. Conversely, in a codiscrete object every sequence of points whatsoever counts as a path, hence the distinction between the space and its "dynamics" disappears, and so we have "pure becoming", if you wish.



Onwards. Notice next that every adjoint triple induces an adjoint pair of a comonad and a monad. In the present situation we get



$$
(\mathrm{Disc} \; \Gamma \dashv \mathrm{coDisc} \; \Gamma) : \mathcal{E} \to \mathcal{E}
$$



This is what Lawvere calls the skeleton and the coskeleton on p. 7. In the $\infty$-topos context the left adjoint $\mathbf{\flat} := \mathrm{Disc} \; \Gamma$ has the interpretation of sending any object $A$ to the coefficient for cohomology of local systems with coefficients in $A$.



The paragraph wrapping from page 7 to 8 comments on the possibility that the base topos $mathcal{S}$ is not just that of sets, but something richer. An example of this that I am kind of fond of is that of super cohesion (in the sense of superalgebra and supergeometry): the topos of smooth super-geometry is cohesive over the base topos of bare super-sets.



What follows on page 9 are thoughts that I am not aware Lawvere has later formalized further. But then at the bottom of p. 9 he gets to the axiomatic identification of infinitesimal or formal spaces in the cohesive topos. In his most recent article on this, what he says here on p. 9 is formalized as follows: he says an object $X \in \mathcal{E}$ is infinitesimal if the canonical morphism $\Gamma X \to \Pi_0 X$ is an isomorphism. To see what this means, suppose that $\Pi_0 X = *$, hence that $X$ is connected. Then the isomorphism condition means that $X$ has exactly one global point. But $X$ may be bigger: it may be a formal neighbourhood of that point; for instance it may be $\mathrm{Spec}\; k[x]/(x^2)$. A general $X$ for which $\Gamma X \to \Pi_0 X$ is an iso is hence a disjoint union of formal neighbourhoods of points.



Again, the meaning of this becomes more pronounced in the context of cohesive $\infty$-toposes: there, objects $X$ for which $\Gamma X \simeq * \simeq \Pi X$ have the interpretation of being formal $\infty$-groupoids, for instance formally exponentiated $L_\infty$-algebras. And so there is $\infty$-Lie theory canonically in every cohesive $\infty$-topos.



I'll stop here. I have more discussion of all this at:



http://nlab.mathforge.org/schreiber/show/differential+cohomology+in+a+cohesive+topos

ag.algebraic geometry - Why do Todd classes appear in Grothendieck-Riemann-Roch formula?

Yes you are right! You can in fact prove that the Todd class is the only cohomology class satisfying a GRR-type formula.



Indeed, assume that for any smooth quasi-projective variety $X$ you have an invertible cohomology class $\alpha(X)$ such that:



(i) for any proper morphism $f \colon X \rightarrow Y$ between smooth quasi-projective varieties and for any bounded complex $\mathcal{F}$ of coherent sheaves on $X$, $f_{*}(\mathrm{ch}(\mathcal{F})\,\alpha(X)) = \mathrm{ch}(f_{!}\mathcal{F})\,\alpha(Y)$;



(ii) for any $X$ and $Y$, $\alpha(X \times Y) = pr_1^*\alpha(X) \otimes pr_2^*\alpha(Y)$
(this is a kind of base change compatibility condition).



Then for any $X$, $\alpha(X)$ is the Todd class of $X$.
In fact, it is sufficient to know (i) for closed immersions and (ii) for $X = Y$.



Here is a quick proof:



1-First you prove GRR for arbitrary immersions. This is done in two steps:



(a) $Y$ is a vector bundle $E$ over $X$ and $f$ is the immersion of $X$ in $Y$, where $X$ is identified with the zero section of $Y$. Then $\mathcal{O}_X$ admits a natural locally free resolution on $Y$, namely the Koszul resolution. A direct computation then gives you that $\mathrm{ch}(\mathcal{O}_X)$ is the Todd class of $E^*$, which is therefore the Todd class of the conormal bundle $N^*_{X/Y}$. Thus the Todd class pops out of this computation just like in the divisor case.



(b) For an arbitrary closed immersion $f \colon X \rightarrow Y$, a standard deformation technique (called deformation to the normal cone) allows one to deform $f$ to the immersion of $X$ in its normal bundle in $Y$, and then to use part (a).



2-Then you compare the two GRR formulas you have for the diagonal injection $\delta$ of $X$ in $X \times X$: the one with Todd classes and the one with $\alpha$ classes. This gives you the identity $\delta_* (td(X)\, \delta^* td(X \times X)^{-1}) = \delta_* (\alpha(X)\, \delta^* \alpha(X \times X)^{-1})$, so that $\delta_* td(X)^{-1} = \delta_* \alpha(X)^{-1}$.
Then you get $\alpha(X) = td(X)$ by applying $pr_{1*}$.
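To spell out the step hidden in "applying $pr_{1*}$" (using only condition (ii) with $X = Y$ and functoriality of pushforward):
$$
\delta^* td(X \times X) = \delta^*\big(pr_1^* td(X) \cdot pr_2^* td(X)\big) = td(X)^2,
$$
so the left-hand side of the identity reduces to $\delta_*\big(td(X)^{-1}\big)$, and similarly for $\alpha$. Applying $pr_{1*}$ and using $pr_1 \circ \delta = \mathrm{id}_X$ then gives
$$
td(X)^{-1} = (pr_1 \circ \delta)_*\, td(X)^{-1} = (pr_1 \circ \delta)_*\, \alpha(X)^{-1} = \alpha(X)^{-1}.
$$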

Thursday, 22 August 2013

the sun - Typical wavelength of solar flare

In the transition from a higher electron energy level to a lower one, say $m \mapsto n$, a hydrogen atom emits a photon of wavelength $\lambda$ satisfying
$$\frac{1}{\lambda} = R_\infty\left[\frac{1}{n^2}-\frac{1}{m^2}\right]\text{,}$$
where $R_\infty \approx 1.0973731568\times 10^{7}\,\mathrm{m}^{-1}$ is the Rydberg constant. For $n=1$, i.e. when the destination energy level is the ground state, varying $m$ forms the Lyman series: $\text{Ly}_\alpha$ ($2\mapsto 1$), $\text{Ly}_\beta$ ($3\mapsto 1$), $\text{Ly}_\gamma$ ($4\mapsto 1$), etc. The $n=2$ destination energy level forms the Balmer series: $\text{Ba}_\alpha$ ($3\mapsto 2$), $\text{Ba}_\beta$ ($4\mapsto 2$), etc., which was actually the first series to be discovered, and is frequently labeled simply with H (hydrogen) instead.
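As a quick numerical check of the formula (a sketch; the constant is the CODATA value of the Rydberg constant):

```python
R_INF = 1.0973731568e7  # Rydberg constant, m^-1

def wavelength_nm(m, n):
    """Vacuum wavelength (in nm) of the hydrogen m -> n transition."""
    inv_lambda = R_INF * (1 / n**2 - 1 / m**2)
    return 1e9 / inv_lambda

print(wavelength_nm(2, 1))  # Lyman-alpha, ~121.5 nm (ultraviolet)
print(wavelength_nm(3, 2))  # H-alpha, ~656.1 nm (red, visible)
```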




What all can be interpreted from this? Is it because the energy of the radiation contained in the flare lies around this wavelength? And why the chromosphere?




A solar flare is a very hot, violent event that radiates energy across the electromagnetic spectrum. The importance of the H-α line is due to the conveniences of observation.



The spectral lines of hydrogen are outside the visible band except for the first four of the Balmer series, from the red H-α line to the violet H-δ line. When a hydrogen ion and an electron recombine into an atom, the result is generally a hydrogen atom in an excited state. Eventually it decays to the ground state, but it doesn't have to transition directly there, and typically does so in a random sequence of transitions. A very sizable fraction of those transitions, however, includes the $3\mapsto 2$ jump that produces the H-α line.



Thus, the presence of the H-α line is an easy way to identify ionized hydrogen, and in particular, a sudden brightening of the H-α line in an emission-line spectrum is an indicator that something energetic is happening to ionize the hydrogen (more so than usual, that is). And that's where the chromosphere, the low-density "atmosphere" surrounding the Sun, comes in: it has an emission-line spectrum, i.e., its spectrum is bright in narrow bands that correspond to its atomic or molecular composition. This is unlike the photosphere, which has an absorption-line spectrum instead.

Wednesday, 21 August 2013

nt.number theory - Cyclic extensions coming from E[p] equiv F[p],

Let $p$ be a prime and let $K$ be a field containing the $p$-th roots of unity. Let $E$ be an elliptic curve over $K$. We consider the moduli problem $Y_E(p)$, which sends $L$ to the set of elliptic curves $F/L$ together with symplectic isomorphisms $\phi: E[p] \rightarrow F[p]$. We know that this moduli problem is representable by a curve over $K$, and we let the compactification of this curve be $X_E(p)$. We know $X_E(p)$ is a twist of $X(p)$. Similarly, we can construct $X_E(p^2)$, and we see that $X_E(p^2)$ is a normal cover of $X_E(p)$. I think the Galois group of $X_E(p^2)/X_E(p)$ is $(\mathbb{Z}/p\mathbb{Z})^3$. If that is the case, then given any $K$-point of $X_E(p)$, we can look at the fiber over this point. This fiber is defined over $K$, hence it will define a field extension of $K$ with Galois group a subgroup of $(\mathbb{Z}/p\mathbb{Z})^3$.



This means that if we have $E$ and $F$ defined over $K$ with $E[p] \equiv F[p]$, then we should be able to construct a cyclic extension of $K$ of order $p$. What is that extension?

Tuesday, 20 August 2013

ag.algebraic geometry - Projective to Affine?

I think the question raises a valid point.



A very fruitful approach to affine problems was initiated by Iitaka in the 70's which is as follows:



Suppose $V$ is an affine variety and $X$ is a projectivisation such that $D = X - V$ is a divisor with simple normal crossings (SNC). Look at the canonical divisor $K := K_X$ and the divisor $L := K + D$ on $X$. Just like the now-classical theory of Kodaira and others of analysing the multicanonical systems $nK$ of a projective variety, Iitaka proposed to look at $n(K+D)$ to come up with a kind of classification for the pair $(X,D)$, as one does for projective varieties. Of course, the only complete success in classifying varieties until Iitaka's time was for curves and surfaces (which is also available now for 3-folds), so he and others applied this idea to non-compact (in particular affine) surfaces. Below I shall talk only about surfaces, since the appropriate theory for 3-folds has not yet been worked out (as far as I know) and the curve case is extremely well understood and presents no real difficulty, generally speaking.



Just like the Kodaira dimension for projective surfaces, we can define the logarithmic Kodaira dimension of non-compact surfaces, which is by definition the rate of growth of $n(K+D)$ as $n$ varies over positive integers. This number, called $\bar\kappa$, can take the values $-\infty, 0, 1, 2$ (or up to the dimension of the variety in the general case). At this stage one proves a theorem that this number is independent of the compactification $X$ chosen, as long as $D$ is SNC. This gets the theory started and we get a perfect gadget for studying non-compact (in particular affine) surfaces. The whole project follows Kodaira's classification philosophy: one should develop enough classification theorems for the various $\bar\kappa$ classes and thereby (ideally) answer "all" questions about non-compact (or affine) varieties. So if you want to answer a question like "are two affine varieties $A, B$ isomorphic or not", then the first thing to look at is their log Kodaira dimensions. If they turn out to be different, then we are done. If they are the same, then we have to look more closely into that particular $\bar\kappa$ class and either apply the appropriate classification theorems available, or formulate and prove one, to decide.



However, just like in the projective case, the general type surfaces are hardest to study and don't always admit any good structure, like a fibration over a curve, which might have helped in their systematic study. And, by and large, the greatest success story has been in the non-general-type cases, where there is a detailed classification as for projective surfaces. Similar difficulties are encountered in the affine case, and the $\bar\kappa \leq 1$ affine surfaces are amenable to detailed study. Of course, there are some strong results about general type surfaces also, which are in spirit the same as in the case of the surface geography problem.



To find out more about these things, one may look at Iitaka's book (GTM 76) and Miyanishi's book.

fa.functional analysis - What is a projective space?

I have never seen finite-dimensional projective spaces defined by axioms, only by constructions of some kind from something else related to axioms. For example in algebraic geometry, you can define a projective variety as the Proj of a graded algebra, which could be adequately axiomatic. However, projective space in that setting comes from the relatively unsatisfying "axiom" that the algebra is freely generated from degree 1.



I was about to say that it would be very difficult to make axioms for a Hilbert projective space that are very different from the axioms for a Hilbert space. But then I thought of a way to do it. A von Neumann algebra is a Banach $*$-algebra over $\mathbb{C}$ that satisfies the $C^*$ axiom and also has a predual as a Banach space. If $\mathcal{M}$ is a von Neumann algebra, it has a space $\mathcal{M}^\diamondsuit$ of pure normal states, by definition the extremal normal, normalized, positive dual vectors. This is a generalized projective space. In particular, if $\mathcal{M}$ is a Type I factor — the conditions for which need no direct mention of Hilbert spaces — then von Neumann's theorem identifies $\mathcal{M}^\diamondsuit$ with the space of lines in a Hilbert space $\mathcal{H}$ (and of course $\mathcal{M}$ itself with $B(\mathcal{H})$). No global phases are ever chosen in the definition.



Does this meet your requirements? My motivation is the fact that quantum probability is the correct probability theory for quantum mechanics, and that in quantum mechanics global phases are always irrelevant. Reflecting that, the global phase doesn't exist in the von Neumann definition of a state.




Qiaochu in the comments makes the point that there is another very important set of axioms for a projective space, namely the classical incidence axioms for a projective geometry. My favorite version is that a projective geometry is a spherical type $A_n$ building. These axioms are different in that they don't even pick a field beforehand. Indeed, there are projective planes that are not the standard projective plane over a field.



It would be interesting to make axioms for a topological type $A_\infty$ building corresponding to the projective space of a Hilbert space. It seems plausible, and it could be a very different model from the von Neumann algebra model. But maybe von Neumann, the person, is still there in this idea, because the incidence geometry of a Hilbert space is also known as quantum logic.

Monday, 19 August 2013

ct.category theory - Category of graphs.

It has all finite products, and I think (depending on your definition of Graph) it is easy to show that it has equalizers, and thus all finite limits.



If you are talking about directed graphs, then this is the topos of presheaves of sets over the category which consists of a pair of parallel arrows, and thus has a slew of wonderful properties, finite limits included.
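As an illustration of the finite limits mentioned above, here is a sketch of the binary product in the category of directed graphs, with graphs represented naively as a vertex set plus an edge set (the representation and names are mine, not a standard API):

```python
def graph_product(G, H):
    """Categorical product of directed graphs: a vertex is a pair of
    vertices, and there is an edge (u, v) -> (u2, v2) precisely when
    u -> u2 is an edge of G and v -> v2 is an edge of H."""
    (VG, EG), (VH, EH) = G, H
    V = {(u, v) for u in VG for v in VH}
    E = {((u, v), (u2, v2)) for (u, u2) in EG for (v, v2) in EH}
    return V, E

arrow = ({0, 1}, {(0, 1)})       # the "walking edge" graph
V, E = graph_product(arrow, arrow)
print(len(V), len(E))            # 4 vertices, but only 1 edge
```

Note that the product of two single-edge graphs has four vertices but a single edge: edges must be matched in both factors, which is exactly the presheaf-topos (componentwise) product.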

Sunday, 18 August 2013

speed - What is faster than a supernova explosion?

If you count unique events as well, the fastest known event would be the exponential expansion of the Universe during inflation, which lasted from about $10^{-36}\,\mathrm{s}$ to $10^{-32}\,\mathrm{s}$ after the Big Bang. In this time, the Universe grew by a factor of about $10^{26}$.
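In inflationary cosmology this growth is usually quoted in e-folds, the natural logarithm of the expansion factor. A one-line check, assuming the factor $10^{26}$ quoted above:

```python
import math

growth_factor = 1e26                # expansion factor during inflation
n_efolds = math.log(growth_factor)  # e-folds: ln(a_end / a_start)
print(round(n_efolds))              # ~60 e-folds
```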



You have to note, however, that space itself expands, so you are not limited to the speed of light, which would be the case for phenomena such as stellar explosions or pulsars.

Saturday, 17 August 2013

nt.number theory - Smooth proper schemes over rings of integers with points everywhere locally

There is no doubt that such examples as in David Speyer's response exist: indeed, they exist in great abundance in the following sense:



Let $k_1$ be any number field, and let $E_{/k_1}$ be any elliptic curve with integral $j$-invariant. Then it has potentially good reduction, meaning that there is a finite extension $k_2/k_1$ such that $E_{/k_2}$ is the generic fiber of an abelian scheme over
$\mathbb{Z}_{k_2}$. Furthermore, let $N$ be your favorite integer greater than $1$. Then there exists a degree $N$ field extension $k_3/k_2$ such that the Shafarevich-Tate group of $E_{/k_3}$ has an element of order $N$ (in fact, one can arrange to have at least $M$ elements of order $N$ for your favorite positive integer $M$): see Theorem 3 of



http://math.uga.edu/~pete/ClarkSharif2009.pdf



Since good reduction is preserved by base extension, the genus one curve $C_{/k_3}$ corresponding to the locally trivial principal homogeneous space of $E_{/k_3}$ of period $N$ gives an affirmative answer to Question 2.



Specific examples of elliptic curves over quadratic fields with everywhere good reduction are known: see e.g. the survey paper



http://mathnet.kaist.ac.kr/pub/trend/shkwon.pdf



where the following example appears and is attributed to Tate:



$E: y^2 + xy + \epsilon^2 y = x^3$, where $\epsilon = \frac{5+\sqrt{29}}{2}$,



has everywhere good reduction over $k = \mathbb{Q}(\sqrt{29})$. Indeed, the given equation is smooth over $\mathbb{Z}_k$, since the discriminant is $-\epsilon^{10}$ and $\epsilon$ is a unit in $\mathbb{Z}_k$.
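The discriminant claim can be verified symbolically from the standard $b$-invariant formulas for a Weierstrass equation $y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6$ (a quick sketch using sympy):

```python
import sympy as sp

eps = (5 + sp.sqrt(29)) / 2
a1, a2, a3, a4, a6 = 1, 0, eps**2, 0, 0  # y^2 + xy + eps^2*y = x^3

# Standard b-invariants and discriminant of a Weierstrass equation
b2 = a1**2 + 4*a2
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6
b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
Delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

print(sp.expand(Delta + eps**10))  # 0, confirming Delta = -eps^10
```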



If this elliptic curve happens itself to have nontrivial Sha, great. If not, the theoretical results above imply that a quadratic extension of it will have a nontrivial $2$-torsion element of Sha, i.e., there will exist some hyperelliptic quartic equation



$y^2 + p(x)y + q(x) = 0$



with $p(x), q(x)$ in the ring of integers of some quadratic extension $K$ of $\mathbb{Q}(\sqrt{29})$, which is smooth over $\mathbb{Z}_K$ and violates the local-global principle.



If someone is interested in actually computing the equation, I would say a better strategy is searching for elliptic curves defined over quadratic fields with everywhere good reduction until you find one which already has a 2-torsion element in its Shafarevich-Tate group. (I don't see how to guarantee this theoretically, but I would be surprised if it were not possible.) Then it is easy to write down the defining equation.

gt.geometric topology - Lie algebra automorphisms and detecting knot orientation by Vassiliev invariants

Recall that there are knots in $\mathbf{R}^3$ that are not invertible, i.e. not isotopic to themselves with the orientation reversed. However, it is not easy to tell whether or not a given knot is invertible; I believe the simplest known example has 8 crossings. In particular, all knot invariants that are (more or less) easy to compute (e.g. the Jones or HOMFLY polynomials) fail to detect knot orientation -- there is a simple Lie algebra trick, the Cartan involution, that prohibits that. But this trick works for complex semisimple Lie algebras, and the question I'd like to ask is: can one perhaps circumvent this by using other kinds of Lie algebras?



Here are some more details. A long-standing problem is whether or not knot orientation can be detected by finite type (aka Vassiliev) invariants. Recall that these are the elements of the dual of a certain graded vector space $\mathcal{A}=\bigoplus_{i\geq 0} \mathcal{A}^i$; the vector space itself is infinite-dimensional, but each graded piece $\mathcal{A}^i$ is finite-dimensional and can be identified with the space spanned by all chord diagrams with a given number of chords modulo the 1-term and 4-term relations (the 4-term relation is shown e.g. in figure 1 of Bar-Natan, On Vassiliev knot invariants, Topology 34, and the 1-term relation says that a chord diagram containing an isolated chord is zero).



The above question on whether or not finite type invariants detect the orientation is equivalent to asking whether there are chord diagrams that are not equal to themselves with the orientation of the circle reversed modulo the 1-term and 4-term relations. (There are several ways to rephrase this using other kinds of diagrams, see e.g. Bar-Natan, ibid.)



However, although the $\mathcal{A}^i$'s are finite-dimensional, their dimensions grow very fast as $i\to \infty$ (conjecturally faster than exponentially, I believe). So if we are given two diagrams with 20 or so chords, checking whether or not they are the same modulo the 1-term and 4-term relations by brute force is completely hopeless. Fortunately, there is a way to construct a linear function on $\mathcal{A}$ starting from a representation of a quadratic Lie algebra (i.e. a Lie algebra equipped with an ad-invariant quadratic form); these linear functions can be explicitly evaluated on each diagram and are zero on the relations. So sometimes one can tell whether two diagrams are equivalent using weight functions. But unfortunately, a weight function that comes from a representation of a complex semisimple Lie algebra always takes the same value on a chord diagram and on the same diagram with the orientation reversed.



As explained in Bar-Natan, ibid., hint 7.9, the reason for this is that each complex semisimple Lie algebra $g$ admits an automorphism $\tau: g\to g$ that interchanges a representation and its dual. (This means that if $\rho: g\to gl_n$ is a representation, then $\rho\tau$ is isomorphic to the dual representation.) Given a system of simple roots and the corresponding Weyl chamber $C$, $\tau$ acts as minus the element of the Weyl group that takes $C$ to the opposite chamber. On the level of the Dynkin diagrams, $\tau$ gives the only non-trivial automorphism of the diagram (for $so_{4n+2}$, $n\geq 1$, $sl_n$, $n\geq 3$, and $E_6$) and the identity for the other simple algebras. (Recall that the automorphism group of the diagram is the outer automorphism group of the Lie algebra.)



However, as far as I understand, the existence of such an automorphism $\tau$ that interchanges representations with their duals is somewhat of an accident. (I would be interested to know if it isn't.) So I'd like to ask: is there a quadratic Lie algebra $g$ in positive characteristic (or a non-semisimple algebra in characteristic 0) and a $g$-module $V$ such that there is no automorphism of $g$ taking $V$ to its dual?



(More precisely, if $\rho: g\to gl(V)$ is a representation, we require that there is no automorphism $\tau: g\to g$ such that if we equip $V$ with a $g$-module structure via $\rho\tau$, we get a $g$-module isomorphic to $V^{\ast}$.)

Friday, 16 August 2013

the sun - Observed sun/moon rise/set times to nearest second?

Where can I find a list of observed (not computed) sun/moon rise/set
times to the nearest second, along with the location (latitude,
longitude, altitude [or enough information to find these three
values]) and, ideally, weather conditions (temperature/pressure)?



Since it's hard to tell where the "middle" of the sun/moon is when
it's setting, these would be the setting times for the lower or upper
limb of the sun/moon.



Reason: I'm trying to compute sun/moon rise/set to the nearest second
using high-precision data, and I'm curious to see if I can get
anywhere near the actual observed time. My technique:



  • Use NASA's Chebyshev approximations to find the positions of the
    Sun and center of the Earth.


  • Use an elliptical (not spherical) model of the Earth to find the
    ICRF xyz coordinates of a given location at a given time. This
    includes compensating for altitude and precession.


  • Model refraction based on weather conditions, including adjusting pressure for altitude (weather stations report air pressure normalized to sea level, which isn't the same as actual air pressure, especially at higher altitudes).
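The refraction step can be sketched with Bennett's empirical formula for refraction at apparent altitude $h$ (in degrees), scaled by local pressure and temperature. This is only one common choice of refraction model, and the reference values baked in below (1010 hPa, 10 °C) are the usual "standard conditions", not anything from my actual pipeline:

```python
import math

def refraction_arcmin(apparent_alt_deg, pressure_hpa=1010.0, temp_c=10.0):
    """Atmospheric refraction in arcminutes (Bennett's formula),
    scaled for local pressure (hPa) and temperature (Celsius)."""
    h = apparent_alt_deg
    r = 1.0 / math.tan(math.radians(h + 7.31 / (h + 4.4)))
    return r * (pressure_hpa / 1010.0) * (283.0 / (273.0 + temp_c))

# At the horizon the standard value is roughly 34 arcminutes, which is why
# the Sun is geometrically below the horizon when it appears to touch it.
print(refraction_arcmin(0.0))
```

A full rise/set computation then iterates: find the time at which the true altitude of the relevant limb equals minus the sum of refraction, horizon dip, and (for the center vs. limb distinction) the semi-diameter.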


I realize down-to-the-second computations of sun/moon rise/set are
probably impossible, in part because refraction depends on more than
just local conditions, but I'm curious to try.

soft question - Examples of great mathematical writing

This question is basically from Ravi Vakil's web page



How do I write mathematics well? Learning by example is more helpful than being told what to do, so let's try to name as many examples of "great writing" as possible. Asking for "the best article you've read" isn't reasonable or helpful. Instead, ask yourself the question "what is a great article?", and implicitly, "what makes it great?"



If you think of a piece of mathematical writing you think is "great", check if it's already on the list. If it is, vote it up. If not, add it, with an explanation of why you think it's great. This question is "Community Wiki", which means that the question (and all answers) generate no reputation for the person who posted it. It also means that once you have 100 reputation, you can edit the posts (e.g. add a blurb that doesn't fit in a comment about why a piece of writing is great). Remember that each answer should be about a single piece of "great writing", and please restrict yourself to posting one answer per day.



I refuse to give criteria for greatness; that's your job. But please don't propose writing that has a major flaw unless it is outweighed by some other truly outstanding qualities. In particular, "great writing" is not the same as "proof of a great theorem". You are not allowed to recommend anything by yourself, because you're such a great writer that it just wouldn't be fair.



Not acceptable reasons:



  • This paper is really very good.

  • This book is the only book covering this material in a reasonable way.

  • This is the best article on this subject.

Acceptable reasons:



  • This paper changed my life.

  • This book inspired me to become a topologist. (Ideally in this case it should be a book in topology, not in real analysis...)

  • Anyone in my field who hasn't read this paper has led an impoverished existence.

  • I wish someone had told me about this paper when I was younger.

Approaches to Riemann hypothesis using methods outside number theory

I have no idea how promising the idea of Saharon Shelah is, about which I read in David Ruelle's popular account The Mathematician's Brain, of using mathematical logic to prove the RH, but it is certainly different. As far as I can understand (from Ruelle), it basically comes down to proving that RH is undecidable in Peano arithmetic, in which case the consistency of Peano arithmetic would imply its truth (also in ZFC).



EDIT: Here is the quote from Shelah's paper:



2.3 Dream: Prove that the Riemann Hypothesis is unprovable in PA, but is
provable in some higher theory.



What basis does my hope for this dream have? First, the solution of Hilbert's 10th problem tells us that each problem of the form "is the theory ZFC $+\varphi$ consistent" can be translated to a (specific) Diophantine equation being unsolvable in the integers; moreover, the translation is uniform (this works for any reasonable (defined) theory, where consistent means that no contradiction can be proved from it). Second, we may look at parallel developments "higher up", as the world is quite ordered and reasonable.



Note that there is a significant difference between $\Pi_2$ sentences (which say, e.g., for a given polynomial $f$, the sentence $\varphi_f$ saying that for all natural numbers $x_0, \ldots, x_{n-1}$ there are natural numbers $y_0, \ldots, y_m$ such that $f(x_0, \ldots, y_0, \ldots) = 0$) and $\Pi_1$ sentences saying just that, e.g., a certain Diophantine equation is unsolvable. The first ones can be proved not to follow from PA by restricting ourselves to a proper initial "segment" of a nonstandard model of PA. For $\Pi_1$ sentences, in some sense proving their consistency shows they are true (as otherwise PA is inconsistent). Naturally, concerning statements in set theory, models of ZFC are more malleable, as the method of forcing shows.

Thursday, 15 August 2013

fa.functional analysis - Schwartz kernel theorem for A-linear operators

Let $X, Y \subset \mathbb{R}^n$ be open subsets. Denote by $C^\infty(X)$ the smooth functions on $X$, and let $\mathcal{E}'(Y)$ be the dual space of $C^\infty(Y)$, considered as a space of distributions. Let $L(C^\infty(X), \mathcal{E}'(Y))$ be the continuous linear operators. This space is equipped with the topology of bounded convergence (see for example Operator topology).



An instance of the Schwartz kernel theorem states that



$L(C^\infty(X), \mathcal{E}'(Y)) \simeq \mathcal{E}'(X \times Y)$,



which morally says that I can write every continuous linear operator $T \colon C^\infty(X) \to \mathcal{E}'(Y)$ as an integral operator with a distributional kernel $k(y,x) \in \mathcal{E}'(X \times Y)$ as



$
(Tu)(y) = \int k(y,x) u(x) \, dx
$



(where the integral denotes the pairing of functions with distributions).
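A finite-dimensional toy version of the statement, with functions on finite sets and sums replacing integrals (purely illustrative): every linear map between such function spaces is "integration against a kernel", and the kernel is recovered by applying the operator to delta functions.

```python
import numpy as np

# Finite analogue of the kernel theorem: a linear map T from functions on a
# finite set X (size nX) to functions on a finite set Y (size nY) is
# (Tu)(y) = sum_x k(y, x) u(x) for a unique kernel matrix k.
rng = np.random.default_rng(0)
nX, nY = 4, 3
k = rng.normal(size=(nY, nX))   # the "distributional kernel"
T = lambda u: k @ u             # the operator it induces

# Recover the kernel by evaluating T on delta functions at each point of X
k_recovered = np.stack([T(np.eye(nX)[:, j]) for j in range(nX)], axis=1)
print(np.allclose(k, k_recovered))  # True
```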



Now let $E$ and $F$ be Hilbert $A$-modules for a $C^*$-algebra $A$. I can still consider smooth functions with values in the Banach space $E$. Also, $F$-valued distributions still make sense. They can be defined by



$\mathcal{E}'(Y,F) = L(C^\infty(Y), F) = \mathcal{E}'(Y) \hat{\otimes} F$,



where the tensor product is the completion of the projective tensor product (though since $\mathcal{E}'(Y)$ is nuclear this doesn't really matter).




I was wondering if in this case there is an analogue of the Schwartz kernel theorem with $A$-linearity and adjointability built into it, thus stating something like

$L_A^*(C^\infty(X,E), \mathcal{E}'(Y,F)) \simeq \mathcal{E}'(X \times Y, \text{Hom}_A^*(E,F))$

Here the left-hand side should denote adjointable continuous $A$-linear operators (part of the question is to find the right notion of adjointability; $A$-linearity should be clear) and $\text{Hom}_A^*(E,F)$ are the adjointable $A$-linear operators from $E$ to $F$.




I know that there exists a Schwartz kernel theorem for vector-valued distributions (see e.g. theorem 1.8.9 in http://user.math.uzh.ch/amann/files/distributions.pdf). This looks like



$\mathcal{E}'(X \times Y, Z') \simeq L(C^\infty(X), \mathcal{E}'(Y,Z'))$



for a Banach space $Z$, and I heard that Schwartz proved similar theorems in much greater generality, though of course without considering Hilbert modules. The above theorem is close to what I want: if $\text{Hom}_A^*(E,F)$ has a predual, then it implies



$\mathcal{E}'(X \times Y, \text{Hom}_A^*(E,F)) \simeq L(C^\infty(X), \mathcal{E}'(Y, \text{Hom}_A^*(E,F)))$.

ag.algebraic geometry - Why would one expect a derived equivalence of categories to hold?

Another motivation is more basic. For example, in the modular representation theory of finite groups it is often the case that one has two blocks $A$ and $B$ of two different groups and one suspects (for example) that both blocks have the same number of simple modules (an example of this is given by Alperin's conjecture).



Now, suppose that $A$ and $B$ are Morita equivalent. Then not only do $A$ and $B$ have the same number of simple modules, but specifying a Morita equivalence gives a bijection between the simple modules.



Often, however, $A$ and $B$ are not Morita equivalent, but rather derived equivalent. The Grothendieck groups of $D^b(A)$ and $D^b(B)$ have bases given by the classes of the simple modules of $A$ and $B$, respectively. A derived equivalence induces an isomorphism between the Grothendieck groups, and hence $A$ and $B$ have the same number of simple modules if they are derived equivalent. Note, however, that the derived equivalence does not induce a bijection between simple modules, because simple modules need not correspond under the derived equivalence.



An example of this approach is Broué's abelian defect group conjecture (which predicts a derived equivalence between certain $A$ and $B$). It implies Alperin's conjecture (in the abelian defect case), but provides a structural reason for the equality (and also implies much stronger structural results about characters, "perfect isometries", ...).



Hence, one may search for a derived equivalence to give a structural explanation for various concrete numerical equalities.



(I think another example of this is given by Bezrukavnikov's use of perverse coherent sheaves on the nilpotent cone to explain some numerical equivalences observed by Vogan, but I don't know much about this.)

Wednesday, 14 August 2013

lo.logic - Nelson's program to show inconsistency of ZF

At the end of the paper Division by three by Peter G. Doyle and John H. Conway, the authors say:



Not that we believe there really are any such things as infinite sets, or that the Zermelo-Fraenkel axioms for set theory are necessarily even consistent. Indeed, we’re somewhat doubtful whether large natural numbers (like $80^{5000}$, or even $2^{200}$) exist in any very real sense, and we’re secretly hoping that Nelson will succeed in his program for proving that the usual axioms of arithmetic—and hence also of set theory—are inconsistent. (See Nelson [E. Nelson. Predicative Arithmetic. Princeton University Press, Princeton, 1986.].) All the more reason, then, for us to stick with methods which, because of their concrete, combinatorial nature, are likely to survive the possible collapse of set theory as we know it today.



Here are my questions:



What is the status of Nelson's program? Are there any obstructions to finding a relatively easy proof of the inconsistency of ZF? Is anybody seriously working on this?

adams operations - Is the Burnside ring a lambda-ring? + conjecture in Knutson p. 113

Warning: I'll be using the "pre-$\lambda$-ring" and "$\lambda$-ring" nomenclature, as opposed to the "$\lambda$-ring" and "special $\lambda$-ring" one (although I just used the latter a few days ago on MO). It's mainly because both sources use it, and I am (by reading them) slowly getting used to it.



Let $G$ be a finite group. The Burnside ring $B(G)$ is defined as the Grothendieck ring of the category of finite $G$-sets, with multiplication given by the cartesian product (with the diagonal $G$-action, or at least I have difficulties imagining any other $G$-set structure on it; please correct me if I am wrong).



For every $n \in \mathbb{N}$, we can define a map $\sigma^n : B(G) \to B(G)$ as follows: Whenever $U$ is a $G$-set, we let $\sigma^n U$ be the set of all multisets of size $n$ consisting of elements from $U$. The $G$-set structure on $\sigma^n U$ is what programmers call "map": an element $g \in G$ acts by applying it to each element of the multiset. This way we have defined $\sigma^n U$ for every $G$-set $U$; we extend the map $\sigma^n$ to all of $B(G)$ (including "virtual" $G$-sets) by forcing the rule



$\displaystyle \sigma^i(u+v) = \sum_{k=0}^i \sigma^k(u)\, \sigma^{i-k}(v)$ for all $u, v \in B(G)$.



Ah, and $\sigma^0$ should be identically $1$, and $\sigma^1 = \mathrm{id}$. Anyway, this works and gives a "pre-$\sigma$-ring structure", which is basically the same as a pre-$\lambda$-ring structure with $\lambda^i$ denoted by $\sigma^i$. Now, we turn this pre-$\sigma$-ring into a pre-$\lambda$-ring by defining maps $\lambda^i : B(G) \to B(G)$ by



$\displaystyle \sum_{n=0}^{\infty} \sigma^n(u) T^n \cdot \sum_{n=0}^{\infty} (-1)^n \lambda^n(u) T^n = 1$ in $B(G)[[T]]$ for every $u \in B(G)$.
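A quick sanity check of this definition in the simplest possible case (my own illustration): for the trivial group, $B(G) \cong \mathbb{Z}$ via cardinality, $\sigma^n$ counts size-$n$ multisets, $\lambda^n$ counts size-$n$ subsets, and the defining power-series identity can be verified degree by degree.

```python
from itertools import combinations, combinations_with_replacement

# For a plain set U of size m (trivial group action):
#   sigma^n(U) = number of size-n multisets  = C(m+n-1, n)
#   lambda^n(U) = number of size-n subsets   = C(m, n)
# The identity  (sum sigma^n T^n) * (sum (-1)^n lambda^n T^n) = 1
# holds because the series are (1-T)^{-m} and (1-T)^m.
m, N = 5, 8                      # |U| = m, check up to degree N
U = range(m)
sigma = [len(list(combinations_with_replacement(U, n))) for n in range(N + 1)]
lam = [len(list(combinations(U, n))) for n in range(N + 1)]

assert sigma[0] * lam[0] == 1
for deg in range(1, N + 1):      # coefficient of T^deg in the product
    coeff = sum(sigma[k] * (-1)**(deg - k) * lam[deg - k] for k in range(deg + 1))
    assert coeff == 0
```

This of course only tests the pre-$\lambda$-ring axioms on cardinalities; the question is precisely about the finer identities that cardinality cannot see.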



Now, let me quote two sources:



Donald Knutson, $\lambda$-Rings and the Representation Theory of the Symmetric Group, 1973, p. 107: "The fact that $B(G)$ is a $\lambda$-ring and not just a pre-$\lambda$-ring - i.e., the truth of all the identities - follows from [...]"



Michiel Hazewinkel, Witt vectors, part 1, 19.46: "It seems clear from [370] that there is no good way to define a $\lambda$-ring structure on Burnside rings, see also [158]. There are (at least) two different choices giving pre-$\lambda$-rings but neither is guaranteed to yield a $\lambda$-ring. Of the two the symmetric power construction seems to work best."
(No, I don't have access to any of these references.)



For a long time I found Knutson's assertion self-evident (even without having read that far in Knutson). Now I tend to believe Hazewinkel's position more, particularly as I am unable to verify one of the relations required for a pre-$\lambda$-ring to be a $\lambda$-ring:



$\lambda^2(uv) = \left(\lambda^1(u)\right)^2 \lambda^2(v) + \left(\lambda^1(v)\right)^2 \lambda^2(u) - 2\lambda^2(u)\lambda^2(v)$ in $B(G)$.



What also bothers me is Knutson's "conjecture" on p. 113, which states that the canonical (Burnside) map $B(G) \to SCF(G)$ is a $\lambda$-homomorphism, where $SCF(G)$ denotes the $\lambda$-ring of super characters on $G$, with the $\lambda$-structure defined via the Adams operations $\Psi^n(\varphi(H)) = \varphi(H^n)$ (I think he wanted to say $\left(\Psi^n(\varphi)\right)(H) = \varphi(H^n)$ instead) for every subgroup $H$ of $G$, where $H^n$ means the subgroup of $G$ generated by the $n$-th powers of elements of $H$. This seems wrong to me already for $n=2$ and $H = \left(\mathbb{Z}/2\mathbb{Z}\right)^2$. And if the ring $B(G)$ is not a $\lambda$-ring, then this conjecture is wrong anyway (since the map $B(G) \to SCF(G)$ is injective).



Can anyone clear up this mess? I am really confused...



Thanks a lot.

Tuesday, 13 August 2013

How long does it typically take to make a telescope yourself for the first time?

Making your own telescope is very doable given basic working skills. Usually a Dobsonian or Newtonian reflector is a good choice. You can buy a mirror and components all ready to assemble, or you can choose to grind your own mirror (which is more difficult). If you have an astronomy club in your area, check whether they can help put your project together. If not, check online (YouTube, etc.) for guidance. Websites like Edmund Optics may still be a good choice for components. Once you have all the components together, they can be assembled in a fairly short time. Good luck!

Monday, 12 August 2013

What is the largest hydrogen-burning star?

I assume by largest, you mean largest radius.



Well it won't be VV Cep B since this is merely a B-type main sequence star.



O-type main sequence stars are known and these have both larger masses and larger radii on the main sequence (when they are burning hydrogen in their cores).



A selection of the most massive objects can be found in the R136 star-forming region in the Large Magellanic Cloud. If you look at this list (though I recommend having a look at the primary literature), you will see that O3V stars are listed. Such objects are also present in our Galaxy, for instance in the super star cluster NGC 3603 (Crowther & Dessart 1998).



Such stars have masses of maybe $100\, M_{\odot}$, luminosities of $2\times 10^{6}\, L_{\odot}$, and temperatures of 50,000 K. Using Stefan's law, we can deduce radii of $\sim 20\, R_{\odot}$.
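The Stefan-Boltzmann arithmetic behind that radius is easy to reproduce (the luminosity and temperature are the values quoted above; the solar effective temperature $T_\odot \approx 5772\,$K is my assumed input):

```python
import math

# L = 4 pi R^2 sigma T^4  =>  R/R_sun = sqrt(L/L_sun) * (T_sun/T)^2
L_over_Lsun = 2e6
T = 50_000.0      # K, from the text
T_sun = 5772.0    # K, nominal solar effective temperature (assumed)

R_over_Rsun = math.sqrt(L_over_Lsun) * (T_sun / T) ** 2
print(round(R_over_Rsun, 1))   # ~ 18.8, i.e. roughly 20 R_sun as stated
```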



There are suggestions that even more massive main sequence stars have existed in R136 and NGC 3603 (see Crowther et al. 2010), which are now seen as evolved Wolf-Rayet objects, possibly up to $300\, M_{\odot}$ on the main sequence (though this is a model-dependent extrapolation), and these would have had radii $>20\, R_{\odot}$.



In the very early universe, population III main sequence stars without metals could have been much more massive and larger.

at.algebraic topology - A chain homotopy that does not arise from a homotopy of spaces?

Algebraic topologists like to cook up algebraic invariants on topological spaces in order to answer questions, so they are often concerned with how strong those invariants are. Currently, I am concerned with just how much information is lost when moving from a space to 'the' chain complex associated to that space.



Now, I should be a bit more specific here. There are many homology theories in which one takes a space, cooks up a chain complex, and takes its homology. I am mainly interested in just singular homology for this question, but if you can only think of an answer using sheaf cohomology or some other homology theory then that's alright. Now, the actual question is:




Do there exist two spaces whose associated chain complexes are chain homotopy equivalent, but which are not, themselves, homotopy equivalent?




I imagine that, unless the answer to the question is "no," it would be a bit difficult to show that the two spaces are not homotopic, since we have taken away the tool we usually use to prove such facts. My first thought was that one might be able to cook up a counterexample by looking at two spaces whose compactifications give different homologies... but I wasn't able to come up with anything immediately. (Another method may be looking at homotopy groups... but they're so hard to compute, I didn't even try this approach). I probably didn't give this as much thought as I should have, so if the answer to this question is somewhat trivial, then go ahead and scold me and I'll go put some more effort into thinking about it.

rotation - Could the planets in the Solar System have been captured?

I'll address the second question first, Venus uniquely has retrograde rotation; how would this have come about based on current theories? The answer is that there are a number of hypotheses related to Venus's rotation. See the related question What is the current accepted theory as to why Venus has a slow retrograde rotation?. None of them involve capture of a rogue planet. Note that this also applies to Uranus; it too has an odd orientation.



So why not? Note that I am now addressing your first question, is it possible that the planets could have been captured by the Sun's gravitational force after drifting through space while retaining their axis of spin and speed?



The answer is simple. Capture is extremely unlikely (and that's putting it mildly), and it is not needed to explain the odd orientations of Venus and Uranus.

Sunday, 11 August 2013

rt.representation theory - Which is the correct universal enveloping algebra in positive characteristic?

This is an extension of this question about symmetric algebras in positive characteristic. The title is also a bit tongue-in-cheek, as I am sure that there are multiple "correct" answers.



Let $\mathfrak{g}$ be a Lie algebra over $k$. One can define the universal enveloping algebra $U\mathfrak{g}$ in terms of the adjunction: $\text{Hom}_{\rm LieAlg}(\mathfrak{g}, A) = \text{Hom}_{\rm AsAlg}(U\mathfrak{g}, A)$ for any associative algebra $A$. Then it's easy enough to check that $U\mathfrak{g}$ is the quotient of the free tensor algebra generated by $\mathfrak{g}$ by the ideal generated by elements of the form $xy - yx - [x,y]$. (At least, I'm sure of this when the characteristic is not $2$. I don't have a good grasp in characteristic $2$, though, because I've heard that the correct notion of "Lie algebra" is different.)



But there's another good algebra, which agrees with $U\mathfrak{g}$ in characteristic $0$. Namely, if $\mathfrak{g}$ is the Lie algebra of some algebraic group $G$, then I think that the algebra of left-invariant differential operators is some sort of "divided-power" version of $U\mathfrak{g}$.



So, am I correct that these notions diverge in positive characteristic? If so, does the divided-power algebra have a nice generators-and-relations description? More importantly, which rings are used for what?

nt.number theory - ray class field of rational function field

The minimal polynomial is $\phi_f(X)/X$, where $\phi_g$ (the Carlitz module) is defined by being $\mathbb{F}_q$-linear in $g$ and satisfying



$\phi_{T^{n+1}} = \phi_T(\phi_{T^n})$ and $\phi_T = X^q + TX$.



It even has the bonus of being an Eisenstein polynomial at $f$.
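The recursion can be sketched in a few lines for $q = 2$ (my own encoding, not from the answer): a polynomial in $\mathbb{F}_2[T][X]$ is stored as a dict mapping $X$-degrees to sets of $T$-degrees, which works because mod-2 coefficients are just $0$ or $1$ and symmetric difference is addition mod $2$.

```python
# Polynomials in F_2[T][X] encoded as {x_degree: set of T-degrees}.

def add(p, q):
    out = {d: p.get(d, set()) ^ q.get(d, set()) for d in set(p) | set(q)}
    return {d: s for d, s in out.items() if s}   # drop zero coefficients

def mul(p, q):
    out = {}
    for dx1, ts1 in p.items():
        for dx2, ts2 in q.items():
            for t1 in ts1:
                for t2 in ts2:
                    cur = out.setdefault(dx1 + dx2, set())
                    cur ^= {t1 + t2}             # accumulate mod 2
    return {d: s for d, s in out.items() if s}

X, T = {1: {0}}, {0: {1}}                        # the polynomials X and T

def phi_T(p):
    return add(mul(p, p), mul(T, p))             # p -> p^2 + T*p  (q = 2)

phi1 = phi_T(X)                                  # phi_T     = X^2 + T X
phi2 = phi_T(phi1)                               # phi_{T^2} = phi_T(phi_T)

assert phi1 == {2: {0}, 1: {1}}
assert phi2 == {4: {0}, 2: {1, 2}, 1: {2}}       # X^4 + (T + T^2) X^2 + T^2 X
# phi_T(X)/X = X + T is visibly Eisenstein at (T): constant term T, leading coeff 1.
```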

Saturday, 10 August 2013

gravity - Dark Energy Expansion

Is Cosmos seriously using that exact number? Egads... if they are, don't take it too seriously, but otherwise they're probably conceptually correct.




How do we know it was dark energy?




In cosmology, the ΛCDM model fits numerous observations, most notably those made by the WMAP satellite, but also others, to the Friedmann-Robertson-Walker family of solutions of general relativity, which describe a homogeneous and isotropic universe. The $\Lambda$ refers to the cosmological constant, which is the simplest model of dark energy and is equivalent to an intrinsic energy density of the vacuum. Hence it must be Lorentz-invariant, and since time and space mix in a Lorentz transformation, the only way for that to be so is for its pressure to equal exactly the negative of its energy density. With $\Lambda>0$, the pressure is negative, and this is what's responsible for the outward gravitational acceleration of the universe.




Given we can't define or detect dark energy, why do we believe this is the cause?




The cosmological constant was defined eight decades before the discovery of cosmic acceleration, and we believe it to be the cause because it's theoretically very simple and it fits observations. There are other models for dark energy, though.




How do we even know this new acceleration happened?




Cosmological evolution is described by the Friedmann equations, which have principal contributions from radiation, matter, and dark energy, in addition to the spatial curvature of the universe. As the universe expands, both radiation and matter get diluted, but dark energy does not. Therefore, if the universe keeps expanding, the proportion of dark energy becomes dominant over both radiation and matter.



Most likely, the article is simply an exaggerated reference to the transition between the matter-dominated era and the dark-energy-dominated era. The particular number $6{,}771{,}500{,}000$ years seems comically precise for much of anything in cosmology, but since the nine-year best-fit WMAP+eCMB+BAO+H₀ (see above WMAP link) estimate for the age of the universe is $13.772\pm0.059\,\mathrm{Ga}$, I propose the following explanation:



  1. The author hears that the age of the universe is $13.772$ billion years, and completely ignores any error bounds on that estimate.

  2. The author hears from somewhere else that the dark-energy-dominated era began when the universe was around $7$ billion years old, and again treats this number as exact.

  3. The author then subtracts the two.

This accounts for all but the last $0.5\,\mathrm{Ma}$, which is about one ten-thousandth. I conclude that my theory of the origin of the number agrees rather well with the empirical evidence of the author's words. Perhaps the $0.5$ as the next digit was rationalized as some sort of midway point in the author's mind.



As to where $7$ billion years comes from, I have two guesses.



  1. In cosmology, the Hubble parameter is sometimes taken to be unitless, by scaling to $100\,\mathrm{km/s/Mpc}$. For example, we sometimes say that $h = 0.704$ when we mean that $H_0 = 70.4\,\mathrm{km/s/Mpc}$. Thus, $100\,\mathrm{km/s/Mpc}$ is kind of a de facto standard of comparison. It might be a coincidence, but it's rather suspicious that if one ignores the contribution of radiation (which is justifiable, since it's negligible except in the very early universe) and pretends that $H_0 = 100\,\mathrm{km/s/Mpc}$, then the age of the universe when the matter and dark energy densities are equal would be
    $$\frac{2\ln(1+\sqrt{2})}{3\sqrt{1-\Omega_\text{M}}}\frac{1}{H_0}\bigg\vert_{\Omega_\text{M} = 0.2865} \approx 0.70\, H_0^{-1}\bigg\vert_{h = 1.0} \approx 6.8\,\mathrm{Ga}$$
    I'm using the Ryden reference given in the wiki for the matter-dominated era above. Thus, I think someone did a guesstimate using a nice, round number for the Hubble parameter and rounded to $7$ billion years.

  2. The transition between matter domination and dark-energy domination is a somewhat arbitrary matter of convention. We could take it to mean when the matter and dark energy densities are equal, as above, but on the cosmological scale, pressure is three times as important as energy density (cf. Friedmann equations). Thus, we could take it instead as the time when the matter density was twice the dark energy density. This would be earlier than the $9.8\,\mathrm{Ga}$ given in the Ryden reference, but arguably more appropriate, because the Friedmann equations would predict zero acceleration there, i.e., it would be the transition between inward and outward acceleration. I haven't calculated when that would be, but this sort of conventional fuzziness might be responsible for the discrepancy between the wiki's matter-dominated era and dark-energy-dominated era pages, although the latter claims $5\,\mathrm{Ga}$ rather than $7\,\mathrm{Ga}$ (without explaining the source).
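For what it's worth, the guesstimate in item 1 is easy to reproduce (a sketch; the conversion $1/H_0 \approx 9.778\,$Ga for $h = 1$ is my assumed input):

```python
import math

# Age of a flat matter + Lambda universe at matter / dark-energy equality,
# t = 2 ln(1+sqrt(2)) / (3 H0 sqrt(1-Omega_M)), with H0 = 100 km/s/Mpc.
Omega_M = 0.2865           # from the text
H0_inv_Ga = 9.778          # 1/H0 in Ga when h = 1 (assumed conversion)

t_eq = 2 * math.log(1 + math.sqrt(2)) / (3 * math.sqrt(1 - Omega_M)) * H0_inv_Ga
print(round(t_eq, 1))      # ~ 6.8 Ga, the "about 7 billion years" figure
```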

So Cosmos may or may not be right about the dark-energy-dominated era beginning when the universe was about $7$ billion years old, but concluding that it happened $6.7715\,\mathrm{Ga}$ ago is certainly taking some numbers as more exact than can be justified.




Something in the red shift?




I think at this point you're better off making a separate question on specific issues with the ΛCDM model, if you have any. For the purposes of this question, I'm taking ΛCDM as a given, as almost certainly Cosmos is doing as well.




Yes that is a lot of questions, but the main question is how do we know an accelerated expansion happened at that time?




The standard cosmological model is that dark energy is a cosmological constant. Thus, its energy density does not change. It then becomes straightforward that at some point, cosmological expansion makes matter too dilute to compete with dark energy, which is when dark energy starts dominating. It was there all along, but before then, matter was dense enough to be dominant instead.

Friday, 9 August 2013

ct.category theory - Set-theoretic forcing over sites?

To the best of my knowledge, this has never been "officially" described in the set theoretic literature. It has, however, been described by Blass and Scedrov in Freyd's models for the independence of the axiom of choice (Mem. Amer. Math. Soc. 79, 1989). (It is of course implicit and sometimes explicit in the topos literature; for example, Mac Lane and Moerdijk do a fair bit of the translation in Sheaves in Geometry and Logic.) There are certainly a handful of set theorists who are well aware of the generalization and its potential, but I've only seen a few instances of crossover. In my humble opinion, the lack of such crossovers is a serious problem (for both parties). To be fair, there are some important obstructions beyond the obvious linguistic differences. Foremost is the fact that classical set theory is very much a classical theory, which means that the double-negation topology on a site is, to a certain extent, the only one that makes sense for use in classical set theory. On the other hand, although very important, the double-negation topology is not often a focal point in topos theory.




Thanks to the comments by Joel Hamkins, it appears that there is an even more serious obstruction. In view of the main results of Grigorieff in
Intermediate submodels and generic extensions in set theory, Ann. Math. (2) 101 (1975), it looks like the forcing posets are, up to equivalence, precisely the small sites (with the double-negation topology) that preserve the axiom of choice in the generic extension.

Thursday, 8 August 2013

co.combinatorics - The density hex

For a closely related question, where you do not insist that all nonzero components of $v-w$ have the same sign, the answer is known: see the following paper: B. Bollobás, G. Kindler, I. Leader, and R. O'Donnell, Eliminating cycles in the discrete torus, LATIN 2006: Theoretical Informatics, 202-210, Lecture Notes in Comput. Sci., 3887, Springer, Berlin, 2006; also: Algorithmica 50 (2008), no. 4, 446-454. This graph is referred to as $G_\infty$, and there is a beautiful new proof via the Brunn-Minkowski theorem by Alon and Feldheim. For this graph a rather strong form of a density result follows, and the results are completely sharp.



The paper by Alon and Klartag http://www.math.tau.ac.il/~nogaa/PDFS/torus3.pdf is a good source, and it also studies the case where we allow only a single nonzero coordinate in $v-u$. An even sharper result is given in another paper by Noga Alon. There, there is a $\log n$ gap, which can be problematic if we are interested in the case where $n$ is fixed and $d$ is large. See also this post.



As Harrison points out, the graph he proposes (which we can call the Gale-Brown graph) is in between the two graphs. So the answer is not known, but we can hope that some discrete isoperimetric methods will be helpful.



The statement is an isoperimetric-type result, so it can be regarded as a quantitative version of the topological notion of connectivity.



Two more remarks: 1) The Gale result seems to give an example of a graph where there might be a large gap between the coloring number and the fractional coloring number. This is rare; another important example is the Kneser graph, whose chromatic number was famously analyzed by Lovász using a topological method.



2) Hex is closely related to planar percolation, and the topological property based on planar duality is very important in the study of planar percolation and of $1/2$ being the critical probability. (See e.g. this paper.) It seems that we might have here an interesting high-dimensional extension, with some special significance to choosing each vertex with probability $1/d$.

ag.algebraic geometry - What do heat kernels have to do with the Riemann-Roch theorem and the Gauss-Bonnet theorem?

Added 2 June:



Since the summary below is already a bit long, I thought I'd add a few lines at the beginning as a guide. The proofs all proceed as follows:



  1. Identify the quantity of interest (like the Euler characteristic) as the index of an operator going from an 'even' bundle to an 'odd' bundle.


  2. Use Hodge theory to write the index in terms of the dimensions of harmonic sections, i.e., kernels of Laplacians.


  3. Use the heat evolution operator for the Laplacians and 'supersymmetry' to rewrite this as a 'supertrace.'


  4. Write the heat evolution operator in terms of the heat kernel to express the supertrace as the integral of a local density.


  5. Use the eigenfunction expansion of the heat kernel to identify the constant (in time) part of the local density.


Most of this is general nonsense, and the difficult step is 5. By and large, the advances made after the seventies all had to do with finding interpretations of this last step that employed intuition arising from physics.




I suffered over this proof quite a bit in my pre-arithmetic youth and wrote up
a number of summaries. A condensed and extremely superficial version is given here, mostly for my own review.
If by chance someone finds it at all useful, of course I will be delighted. I apologize that I don't say anything about physical
intuition (because I have none), and for repeating parts of the previous nice answers.
It's been years since I've thought about these matters, so I will forgo
all attempts at even a semblance of analytic rigor. In fact, the main pedagogical reason for posting is that a basic outline of the proof is possible to understand with almost no analysis.



The usual setting has a compact Riemannian manifold $M$, two hermitian bundles $E^+$ and $E^-$, and a linear
operator
$$P : H^+ \rightarrow H^-,$$
where $H^{\pm} := L^2(E^{\pm})$.
With suitable assumptions (ellipticity), $\ker(P)$ and $\mathrm{coker}(P)$
have finite dimension, and the number of interest is the index:
$$\mathrm{Ind}(P) = \dim(\ker(P)) - \dim(\mathrm{coker}(P)).$$
This can also be expressed as
$$\dim(\ker(P)) - \dim(\ker(P^{*})),$$ where
$$P^{*} : H^- \rightarrow H^+$$
is the Hilbert space adjoint. A straightforward generalization of the Hodge theorem allows us also to write this in terms of the Laplacians
$\Delta^+ = P^{*}P$ and $\Delta^- = PP^{*}$
as
$$\dim(\ker(\Delta^+)) - \dim(\ker(\Delta^-)).$$
Things get a bit more tricky when we try to identify the index with the expression (the so-called 'supertrace')
$$\mathrm{Tr}(e^{-t\Delta^+}) - \mathrm{Tr}(e^{-t\Delta^-}).$$
The operator
$$e^{-t\Delta^{\pm}} : H^{\pm} \rightarrow H^{\pm}$$
sends a section $f$ to the solution of the heat equation
$$\frac{\partial}{\partial t} F(t,x) + \Delta^{\pm} F(t,x) = 0$$
($x$ denoting a point of $M$) at time $t$ with initial condition $F(0,x) = f(x)$.
One important part of this is that there are discrete Hilbert direct sum decompositions
$$H^+ = \oplus_{\lambda} H^+(\lambda)$$
and $$H^- = \oplus_{\mu} H^-(\mu)$$
in terms of finite-dimensional eigenspaces for the Laplacians with non-negative eigenvalues. And then, the identities
$$\Delta^- P = PP^{*}P = P\Delta^+$$
and
$$\Delta^+ P^{*} = P^{*}PP^{*} = P^{*}\Delta^-$$
show that the (supersymmetry) operators $P$ and $P^{*}$ can be used to define isomorphisms between all non-zero eigenspaces of the two Laplacians, with
a correspondence of eigenvalues as well.
Thus, once you believe that the exponential operators are trace class,
it's easy to see that the only contributions to the trace are from the kernels of the plus and minus Laplacians. This is the 'easy cancellation' that occurs in this proof.
But on the zero eigenspaces, the heat evolution operators are clearly the identity, allowing us to identify the supertrace with the index.
To summarize up to here, we have
$$\mathrm{Ind}(P) = \mathrm{Tr}(e^{-t\Delta^+}) - \mathrm{Tr}(e^{-t\Delta^-}).$$
This identity also makes it obvious that the supertrace is in fact independent of $t>0$.
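A finite-dimensional toy model makes the cancellation concrete (my own illustration, with an arbitrary matrix $P$ standing in for the elliptic operator): the nonzero spectra of $P^{*}P$ and $PP^{*}$ coincide, so the supertrace collapses to the index for every $t$.

```python
import numpy as np

# P: an arbitrary m x n real matrix; Delta+ = P^T P, Delta- = P P^T.
# Generically rank(P) = m, so the index dim ker Delta+ - dim ker Delta-
# is n - m, and the supertrace equals it independently of t.
rng = np.random.default_rng(0)
m, n = 4, 7
P = rng.standard_normal((m, n))

ev_plus = np.linalg.eigvalsh(P.T @ P)    # spectrum of Delta+
ev_minus = np.linalg.eigvalsh(P @ P.T)   # spectrum of Delta-

for t in (0.1, 1.0, 10.0):
    supertrace = np.exp(-t * ev_plus).sum() - np.exp(-t * ev_minus).sum()
    assert abs(supertrace - (n - m)) < 1e-8   # index = n - m, for every t
```

All the nonzero eigenvalue contributions cancel pairwise (the 'easy cancellation'), leaving only the kernels, exactly as in the infinite-dimensional argument.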



The proofs under discussion all have to do with identifying this supertrace in terms of local expressions that
relate naturally to characteristic classes. The beginning of this process involves first writing the operator
$e^{-t\Delta^+}$ in terms of an integral kernel
$$K^+_t(x,y) = \sum_i e^{-t\lambda_i}\, \phi^+_i(x) \otimes \phi^+_i(y)$$
where the $\phi^+_i$ make up an orthonormal basis of eigenvectors for the Laplacian.
That is,
$$[e^{-t\Delta^+}f](x) = \int_M K^+_t(x,y) f(y)\, d\mathrm{vol}(y) = \sum_i e^{-t\lambda_i} \int_M \phi^+_i(x)\, \langle \phi^+_i(y), f(y) \rangle\, d\mathrm{vol}(y).$$
Formally, this identity is obvious, and the real work consists of the global analysis necessary to justify the formal computation.
Obviously, there is a parallel discussion for $\Delta^-$. Now, by an infinite-dimensional version of the formula
that expresses the trace of a matrix as a sum of diagonal entries, we get that
$$\mathrm{Tr}(e^{-t\Delta^+}) = \int_M \mathrm{Tr}(K^+_t(x,x))\, d\mathrm{vol}(x) = \int_M \sum_i e^{-t\lambda_i} ||\phi^+_i(x)||^2\, d\mathrm{vol}(x),$$
an integral of local (point-wise) traces, and similarly for $\mathrm{Tr}(e^{-t\Delta^-})$. One needs, therefore, techniques to evaluate the density



$$\sum_i e^{-t\lambda_i} ||\phi^+_i(x)||^2\, d\mathrm{vol}(x) - \sum_i e^{-t\mu_i} ||\phi^-_i(x)||^2\, d\mathrm{vol}(x).$$



More analysis gives an asymptotic expansion for the plus and minus densities of the form
$$a^{\pm}_{-d/2}(x)\, t^{-d/2} + a^{\pm}_{-d/2+1}(x)\, t^{-d/2+1} + \cdots$$
where $d$ is the dimension of $M$.



Up to here the discussion was completely general, but then the proof begins to involve special cases, or
at least, broad division into classes of cases. But note that even for the special cases mentioned in the original question,
one would essentially carry out the procedure outlined above for a specific operator $P$.



The breakthrough in this line of thinking
came from Patodi's incredibly complicated computations for the operator $d+d^*$
going from even to odd differential forms,
where one saw that the
$$a^{+}_i(x)$$
and
$$a^{-}_i(x)$$
canceled each other out locally, that is, for each point $x$, for all the
terms with negative $i$. I think it was fashionable to refer to this cancellation as 'miraculous,' which it is, compared to the easy cancellation above.
At this point, Patodi could take the limit
$$\lim_{t \rightarrow 0} \Big[ \sum_i e^{-t\lambda_i} ||\phi^+_i(x)||^2\, d\mathrm{vol}(x) - \sum_i e^{-t\mu_i} ||\phi^-_i(x)||^2\, d\mathrm{vol}(x) \Big],$$
which he identified with the Euler form. This important calculation set a pattern that recurred in all other versions of
the heat kernel approach to index theorems. One proves the existence of an analogous
limit as $t \rightarrow 0$ and identifies it. The identification
as a precise differential form representative for a characteristic class is sometimes referred to as a local index theorem, a statement
more refined than the topological formula for the global index. There is even a beautiful version of a local families index theorem
that relates eventually to deep work in arithmetic intersection theory and Vojta's proof of the Mordell conjecture.



As I understand it, Gilkey's contribution was an invariant theory
argument that tremendously simplified the calculation and allowed a differential form representative for
the $\hat{A}$ genus to emerge naturally
in the case of the Dirac operator. And then, I believe there is a $K$-theory argument that deduces the index theorem for a general elliptic operator
from the one for the twisted Dirac operator.



Experts can correct me if I'm wrong, but
from a purely mathematical point of view, essentially all the work on the heat kernel proof was done at this point.
Subsequent interpretations of the proof (more precisely, of the supertrace) in terms of supersymmetry, path integrals, loop spaces, etc., were tremendously
influential in many areas of mathematics and physics, but the mathematical core of the index theorem itself appears to have remained largely unchanged for almost forty years. In particular, the terminology I've used above, the super- things, didn't occur at all in the original papers of Patodi, Atiyah-Bott-Patodi, or Gilkey.



Added:



Here is just a little bit of geometric-physical intuition regarding the heat kernel in the Gauss-Bonnet case, which I'm sure is completely banal for most people. The density
$$\sum_i e^{-t\lambda_i} ||\phi^+_i(x)||^2\, d\mathrm{vol}(x) - \sum_i e^{-t\mu_i} ||\phi^-_i(x)||^2\, d\mathrm{vol}(x)$$
expresses the heat kernel in terms of orthonormal bases for the even and odd forms. When $t \rightarrow \infty$, all terms involving the positive eigenvalues decay to zero, leaving only contributions from orthonormal harmonic forms. This is one way to see that the integral of this density, which is independent of $t$, must be the Euler characteristic. On the other hand, as $t \rightarrow 0$, the operator
$$K^+_t(x,y)\, d\mathrm{vol}(y) = \Big[ \sum_i e^{-t\lambda_i}\, \phi^+_i(x) \otimes \phi^+_i(y) \Big] d\mathrm{vol}(y)$$
literally approaches the identity operator on all even forms (except for the fact that it diverges). That is, the heat kernel interpolates between the identity and the projection to the harmonic forms, in some genuine sense expressing the diffusion of heat from a point distribution to a harmonic steady state. A similar discussion holds for the odd forms as well. I can't justify this next point even vaguely at the moment, but one should therefore think of $$[K^+_t(x,y) - K^-_t(x,y)]\, d\mathrm{vol}(y)$$ as regularizing the current on $M \times M$ given by the diagonal $M \subset M \times M$. Thus, the integral of $$[\mathrm{Tr}\, K^{+}_t(x,x) - \mathrm{Tr}\, K^-_t(x,x)]\, d\mathrm{vol}(x)$$ ends up computing a deformed self-intersection number of the diagonal in $M \times M$. From this perspective, it shouldn't be too surprising that the Euler class, representing exactly this self-intersection, shows up.
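The interpolation between the identity and the harmonic projection is easy to see in a discrete toy model (mine, not the author's): heat flow on a cycle graph, where the 'harmonic' functions are just the constants.

```python
import numpy as np

# Graph Laplacian of the n-cycle: e^{-tL} tends to the identity as t -> 0
# and to the projection onto constant functions (the harmonic ones) as
# t -> infinity, exactly the interpolation described above.
n = 12
L = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)
evals, evecs = np.linalg.eigh(L)

def heat(t):
    """The heat evolution operator e^{-tL} via the spectral decomposition."""
    return evecs @ np.diag(np.exp(-t * evals)) @ evecs.T

proj_const = np.full((n, n), 1.0 / n)              # projection onto constants
assert np.allclose(heat(1e-6), np.eye(n), atol=1e-4)
assert np.allclose(heat(100.0), proj_const, atol=1e-8)
```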



Added:



I forgot to mention that the Riemann-Roch case is the one where $$P = \bar{\partial} + \bar{\partial}^{*},$$
going from the even to the odd part of the Dolbeault resolution associated to a holomorphic vector bundle. The limit of the local density is a differential form representing the top degree portion of the Chern character of the bundle multiplied by the Todd class of the tangent bundle. Perhaps it's worth stressing that these special cases all go through the general argument outlined above.