Wednesday, 30 January 2013

zeta functions - Periods and L-values

A famous theorem of Euler is that $\zeta(2n)$ is a rational number times $\pi^{2n}$. Work of Kummer, Herbrand, Ribet and others shows that the rational multiplier has number-theoretic significance.



For more general L-functions attached to motives, the philosophy has emerged (Deligne, Beilinson, Bloch, Kato, etc.) that (in vague terms) their values at certain integers are algebraic multiples of transcendental numbers, and that the particular algebraic multiplier carries information about the motive to which the L-function is attached.



But a (nonzero) algebraic multiple of a transcendental number is again transcendental, so an arbitrary real number does not have a well defined decomposition as a product of an algebraic number and a transcendental number.



Still, because of the theorems of Kummer et al., one suspects that powers of $\pi$ are (at least close to) the "right" "transcendental parts" of the L-function values to be looking at. Maybe one should really be looking at powers of $2\pi$? But it seems clear that one should not be looking at powers of $691\pi$, because otherwise the statement of Kummer's criterion for the regularity of a prime would have an exceptional clause involving the prime 691.




Is there a conceptually motivated means of picking out the
"right" "transcendental part" of a special value of an L-function?




Presumably the reason that Euler expressed his theorem in terms of $\pi$ is that $\pi$ was a commonly used symbol. (I've heard people argue that $2\pi$ is more conceptually primitive and that a label should have been made for the quantity $2\pi$ rather than for $\pi$, and am not sure what I think about this.) In any case, there should be an a priori means to pin the relevant transcendental number down "on the nose" (not just up to a rational/algebraic multiple).



I have heard that Beilinson's conjectures give the transcendental number only up to a rational multiple and that the Bloch-Kato conjecture pins down the number. But I don't know enough to understand the statement of either conjecture and so am at present ill equipped to derive insight from reading the paper of Bloch and Kato. Are there more elementary considerations that give insight into how to pick out a particular transcendental number out of the set of all of its algebraic multiples?

Is the theory of incidence geometry complete?

As Greg explains, the theory of projective planes obeying Desargues is basically equivalent to the theory of division rings, while the theory of projective planes obeying Desargues and Pappus is equivalent to the theory of fields. I haven't seen an axiomatization of projective planes with betweenness, but I assume that this would turn into the theory of ordered fields.



To finish the answer, one should say that none of these theories are complete. For example, the Fano plane is realizable in $KP^2$ if $K$ has characteristic $2$, but not otherwise. There is an example, which I am too lazy to draw, of an arrangement of points and lines which is realizable in $KP^2$ if and only if $K$ contains a square root of $5$. Thus, $\mathbb{R}P^2$ can be distinguished from $\mathbb{Q}P^2$, even though both obey Desargues and Pappus, and presumably whatever axioms of betweenness you want to impose.



You should be able to adapt the proof of Mnev's universality theorem (see also) in order to show that, if $K$ and $L$ are fields which can be distinguished by some first order property, then the projective planes $KP^2$ and $LP^2$ can be similarly distinguished.

lo.logic - Has decidability got something to do with primes?

Note: I have modified the question to make it clearer and more relevant. That makes some of the references to the old version no longer hold. I hope the victims won't be furious over this.




Motivation:



Recently Pace Nielsen asked the question "How do we recognize an integer inside the rationals?". That reminds me of this question I had in the past but did not have a chance to ask, since I did not know of MO.



There seem to be a few pieces of evidence suggesting a possible relationship between decidability and prime numbers:



1) Tameness and wildness of structures



One of the slogans of modern model theory is "Model theory = the geography of tame mathematics". Tame structures are structures in which a model of arithmetic cannot be defined, and hence we do not have an incompleteness theorem. A structure which is not tame is wild.



The following structures are tame:



  • Algebraically closed fields. Proved by Tarski.


  • Real closed fields, e.g. $\mathbb{R}$. Proved by Tarski.


  • $p$-adically closed fields, e.g. $\mathbb{Q}_p$. Proved by Ax and Kochen.


Tame structures often behave nicely. Tame structures often admit quantifier elimination, i.e. every formula is equivalent to some quantifier-free formula, so the definable sets have a simple description. Tame structures are decidable, i.e. there is a program which tells us which statements are true in these structures.



The following structures are wild:



Wild structures behave badly (or, rather, interestingly). There is no program telling us which statements are true in these structures.



One of the differences between the tame structures and the wild structures is the presence of primes in the latter. The suggestion is strongest in the case of the p-adic fields, which we can see as getting rid of all but one prime.



2) The use of prime number in proof of incompleteness theorem



The proof of the incompleteness theorems has some fancy parts and some boring parts. The fancy parts involve Gödel's fixed point lemma and other things. The boring parts involve the proof that proofs can be coded using natural numbers. I am kind of convinced that the boring part is in fact deeper. I remember that at some place in the proof we need to use the Chinese Remainder Theorem, and thus invoke something related to primes.



3) Decidability of Presburger arithmetic and Skolem arithmetic (extracted from the answer of Grant Olney Passmore)



Presburger arithmetic is the arithmetic of the natural numbers with only addition. Skolem arithmetic is the arithmetic of the natural numbers with only multiplication. Both are decidable.



Wishful thinking: the condition that primes, or something like them, are definable in a theory implies incompleteness. Conversely, if a theory is incomplete, the incompleteness comes from something like primes.




Questions: (following suggestion by François G. Dorais)



Forward direction: Consider a bounded system of arithmetic, and suppose the primes are definable in the system. Does this imply incompleteness?



Backward direction: Consider a bounded system of arithmetic, and suppose the system can prove the incompleteness theorem. Are the primes definable in the system? Is the enumeration of the primes definable? Is the prime factorization function definable?




Status of the answer:



For the forward direction: A weak theory of primes does not imply incompleteness. For more details, see the answers of Grant Olney Passmore and Neel Krishnaswami.



For the backward direction: The incompleteness does not necessarily come from primes. It is not yet clear whether it must come from something like primes. For more details, see the answer of Joel David Hamkins.




Since this is perhaps as much information as I can get, I accept the first answer, by Joel David Hamkins. Great thanks to Grant Olney Passmore and Neel Krishnaswami, who also pointed out important aspects.



Recently, Francois G. Dorais also posted a new and interesting answer.

gr.group theory - Split powers of the multiplicative group of a field

Re: "I don't know an example of an abelian group $G$ such that $G^{(I)}$ is not a direct summand of $G^I$, but I'm pretty sure that there is one."



Let $G$ be the integers, and $I$ a countable indexing set. If $G^{(I)}$ were a direct summand, let $P$ be a complementary summand.



We arrive at a contradiction as follows: First, $P$ is isomorphic to $G^{I}/G^{(I)}$, which contains the element $(2,4,8,16,32,\ldots)$ modulo $G^{(I)}$, which (as we can peel off any of the initial terms) is a non-zero element that is infinitely divisible by 2. Second, $P$ is a subgroup of $G^{I}$, which has no infinitely divisible elements (other than zero).




I think this argument might be modified to show that the algebraic closure of a finite field will give you the counterexample you need (changing "divisibility" to some sort of degree consideration), but I don't have a lot of time to think about it right now. I'll come back later if someone else doesn't answer your question fully.




Back now. Try the following. Let $K$ be the field obtained by adjoining to $\mathbb{Q}$ the $2^{p}$th root of each prime $p$ (in $\mathbb{Z}$). Let $G$ be the multiplicative group of $K$. Suppose by way of contradiction that $G^{(I)}$ is a direct summand of $G^{I}$, and let $C$ be a complement. As before, $C$ is isomorphic to $G^{I}/G^{(I)}$, and the element $(2,3,5,\ldots)$ modulo $G^{(I)}$ is infinitely divisible by $2$ (thinking of ``divisibility'' multiplicatively in this case--in other words, after chopping off the front, we can take square roots as many times as we want). However, I believe it is the case that there is no element of $G^{I}$ which is infinitely divisible in this sense. (I'll leave it to the experts to prove this, but I think some form of Kummer theory would suffice. But it may be difficult to prove it.) [One last edit: I think it may even be easier to look at $\mathbb{Q}(x_1,x_2,\ldots)$ and adjoin a $2^{n}$th root of $x_{n}$, and modify the example accordingly.]

rt.representation theory - Decomposing tensor products of irreducible representations of reductive groups over a finite field

Both Tamas and Victor have pointed to the best current literature for general linear groups or others of Lie type, but maybe it's useful to add a series of small comments to supplement their answers.



1) The finite general linear groups, like their classical counterparts, are by far the best-behaved groups of Lie type for representation theory (both ordinary and modular) and related combinatorics. Even so, all aspects of the theory involve deep ideas and indirect approaches, as Lusztig's work over many decades demonstrates. Moreover, even related groups like $GL, PGL, SL, PSL$ have character theories with varying degrees of difficulty. While Green's 1955 paper provides an algorithmic way to work out characters of finite general linear groups, the special linear groups have required much more sophisticated methods developed first by Deligne-Lusztig. It's usually not easy for groups of Lie type to say clearly what it means to "know" the characters of the group.



2) For any finite group, tensor products of irreducible representations (or products of characters) can be arbitrarily difficult to work out in detail even after character tables are in hand. It's much harder for groups of Lie type, where knowledge of characters is usually algorithmic. The best hope of getting uniform answers is to ask about tensor products of "generic" irreducibles, which predominate when both the underlying prime $p$ and its power $q$ grow large.
Deligne-Lusztig theory constructs virtual characters, but "most" of these tend to be genuine characters and already show nice patterns in their distribution into series correlated with the sizes of classes in the Weyl group. Thus for $PGL_2(\mathbb{F}_q)$ roughly half of the characters have degree $q+1$ (principal series) and half have degree $q-1$ (discrete series). At the other extreme are the Steinberg character of degree $q$ and the trivial character (these are the "unipotent" characters).



3) As Victor points out, Lusztig's summary in rank 1 shows indirectly how to answer a natural qualitative question: in a "typical" tensor product of two irreducible characters, how often do principal series and discrete series characters appear? (Answer: about equally often.) Similar questions for other groups get much more complicated to settle.



4) Lusztig's character results for Lie types B, C, D show how dramatically the complexity increases, even though the results can be organized combinatorially. To study tensor products requires asking the right questions.



5) In the original question, Vinoth comments: but I'm not really interested in exceptional groups of Lie type (for those, this problem should be a standard mindless computation). Actually, to say anything interesting about tensor products here would require a creative approach. For example, what is shown by Lusztig about the characters of $E_8(q)$ is subtle and computationally not so easy to work with. There are 166 unipotent characters, whose degrees are polynomials in $q$, but roughly a third of them fail to occur as constituents of the character induced from the trivial character of a Borel subgroup. Knowing these characters is essential to producing the full character table, etc.



6) Since the Steinberg character occurs at the other extreme from "generic" characters, it's interesting to ask how its product with other irreducible characters will decompose. This has echoes in modular representation theory, as seen in recent work by Hiss and Zalesski, and is difficult to sort out even for type A. Lots of questions out there about tensoring.

Tuesday, 29 January 2013

Riemannian Geometry

To get a better feel for the Riemann curvature tensor and sectional curvature:



  1. Work through one of the definitions of the Riemann curvature tensor and sectional curvature with a $2$-dimensional sphere of radius $r$.

  2. Define the hyperbolic plane as the space-like "unit sphere" of $3$-dimensional Minkowski space, defined using an inner product with signature $(-,+,+)$. Work out the sectional and Riemann curvature of that space.

  3. Repeat #1 and #2 for the $n$-dimensional sphere and hyperbolic space, as well as for flat space.

Sectional curvature determines Riemann curvature:



That the sectional curvature uniquely determines the Riemann curvature is a consequence of the following:



  1. The Riemann curvature tensor is a quadratic form on the vector space $\Lambda^2T_xM$

  2. The sectional curvature function corresponds to evaluating the Riemann curvature tensor (as a quadratic form) on decomposable elements of $\Lambda^2T_xM$

  3. There is a basis of $\Lambda^2T_xM$ consisting only of decomposable elements

Added in response to Anirbit's comment



Perhaps you shouldn't try to compute the curvature too soon. First, make sure you understand the Riemannian metric of the unit sphere and hyperbolic space inside out. There are many ways to do this. But the most concrete way I know is to use stereographic projection of the sphere onto a hyperplane orthogonal to the last co-ordinate axis. Either the hyperplane through the origin or the one through the south pole works fine. This gives you a very nice set of co-ordinates on the whole sphere minus one point. Work out the Riemannian metric and the Christoffel symbols. Also, work out formulas for an orthonormal frame of vector fields and the corresponding dual frame of 1-forms. Figure out the covariant derivatives of these vector fields and the corresponding dual connection 1-forms.



After you do this, do everything again with hyperbolic space, which is the hypersurface



$-x_0^2 + x_1^2 + \cdots + x_n^2 = -1$ with $x_0 > 0$



in Minkowski space with the Riemannian metric induced by the flat Minkowski metric. You can do stereographic projection just like for the sphere but onto the unit $n$-disk given by



$x_1^2 + \cdots + x_n^2 < 1$ and $x_0 = 0$,



where the formula for the hyperbolic metric looks just like the spherical metric in stereographic co-ordinates but with a sign change in appropriate places. This is the standard conformal model of hyperbolic space.
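For concreteness, the two conformal metrics referred to here are, in stereographic coordinates (for the unit sphere projected onto the hyperplane through the origin, and for the Poincaré disk model of hyperbolic space; a standard formula, quoted here only for orientation):

$$ g_{S^n} = \frac{4\,(dx_1^2+\cdots+dx_n^2)}{(1+x_1^2+\cdots+x_n^2)^2}, \qquad g_{H^n} = \frac{4\,(dx_1^2+\cdots+dx_n^2)}{(1-x_1^2-\cdots-x_n^2)^2}, $$

the latter on the open unit disk $x_1^2+\cdots+x_n^2<1$; the only difference is the sign in the conformal factor.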



After you understand this inside out, you can use these pictures to figure out why the $n$-sphere and its metric is given by $O(n+1)/O(n)$ and hyperbolic space by $O(n,1)/O(n)$ and why the metrics you've computed above correspond to the natural invariant metric on these homogeneous spaces. You can then check that the formulas for invariant metrics on homogeneous spaces give you the same answers as above.



Use references only for the general formulas for the metric, connection (including Christoffel symbols), and curvature. I recommend that you try to work out these examples by hand yourself instead of trying to follow someone else's calculations. If possible, however, do it with another student who is also trying to learn this at the same time.



If, however, you want to peek at a reference for hints, I recommend the book by Gallot, Hulin, and Lafontaine. I suspect that the book by Thurston is good too (I studied his notes when I was a student). For invariant Riemannian metrics on a homogeneous space, I recommend the book by Cheeger and Ebin (available cheap from AMS! When I was a student, I had to pay a hundred dollars for this little book but it was well worth it).



But mostly when I was learning this stuff, I did and redid the same calculations many times on my own. I was never able to learn much more than a bare outline of the ideas from either books or lectures. Just try to get a rough idea of what's going on from the books, but do the details yourself.

Monday, 28 January 2013

the sun - Does the sky get darker faster during the winter?

The further north you go, the longer the time between sunset and darkness becomes, no matter the season. The reason is the shallow angle at which the Sun's path meets the horizon at high latitudes: near the equinox the Sun's altitude changes at a rate roughly proportional to the cosine of the latitude. If there is 10 min of twilight on the equator, then there is $10\sqrt{2}$ min at 45° latitude, 20 min at 60° latitude, ...
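
As a quick sanity check on the scaling claimed above, here is a minimal sketch (Python, purely illustrative; it assumes equinox conditions and a fixed twilight depression angle, and ignores refraction and seasonal effects):

    import math

    def twilight_scale(latitude_deg):
        # Near the horizon at the equinox the Sun's altitude changes at a rate
        # roughly proportional to cos(latitude), so the time to reach a fixed
        # depression angle scales like 1/cos(latitude).
        return 1.0 / math.cos(math.radians(latitude_deg))

    # With 10 min of twilight at the equator:
    for lat in (0, 45, 60):
        print(lat, round(10 * twilight_scale(lat), 1))   # 10.0, 14.1, 20.0 minutes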



Added:
I used the website that @barrycarter listed above and discovered that there are 2 maxima and 2 minima during the year. The solstices have the longest twilight times and the equinoxes have the shortest twilight times.

dg.differential geometry - Transition Functions of the Principal Bundle $SU(2) \to \mathbb{CP}^1$

I've been trying to understand principal bundles, and to that end have been looking at the bundle
$$
\pi: SU(2) \to \mathbb{CP}^1,~~~ (a_{ij}) \mapsto [a_{11},a_{21}],
$$
with fibre $U(1)$. I assumed that the bundle would be trivial over the standard neighborhoods $U_1,U_2 \subset \mathbb{CP}^1$, but can't seem to identify the local trivializations. Now
$$
\pi^{-1}(U_1) = \left\{ \begin{pmatrix} a & -\overline{b}\\ b & \overline{a} \end{pmatrix} \ \Big|\ a \neq 0\right\}, ~~~
\pi^{-1}(U_2) = \left\{ \begin{pmatrix} a & -\overline{b}\\ b & \overline{a} \end{pmatrix} \ \Big|\ b \neq 0\right\},
$$
and any trivialization $\alpha_i:\pi^{-1}(U_i) \to U_i \times U(1)$ will map
$$
\alpha_1: \begin{pmatrix} a & -\overline{b}\\ b & \overline{a} \end{pmatrix} \mapsto ([a,b],h_{a,b}^1),
$$
for some $h_{a,b}^1$. Defining $h^1_{a,b} = \frac{a}{|a|}$, and similarly $h^2$, works, but then the transition functions are not in $U(1)$.

pr.probability - statistical approach to multinomial distribution

The classical approach is to build a Neyman-Pearson style hypothesis test (warning: incredibly ugly mathematics, in desperate need of replacement, but ubiquitous).



Say you rolled your die $N$ times to produce $X$. Let the multinomial distribution have parameters $(p_1, p_2, \ldots, p_6)$, where $\sum_i p_i = 1$. Then construct a one-dimensional statistic such as $Q = \| X/N - p \|$, using your favorite $p$-norm. Calculate the probability distribution of $Q$.



Your null hypothesis in this case is $p_i = \frac{1}{6}$ for all $i$. For a test of level of significance $\alpha$ (conventionally 0.05 or 0.01), there is a region $[a,b]$ such that $\int_a^b p(Q = x)\, dx = 1 - \alpha$. Actually, there are many such, and there are other criteria to choose among them. In your case, invariance might be a good one: you expect the whole problem to be symmetric if you let $Q$ go to $-Q$, in which case the interval should be symmetric about 0, i.e., $[-a,a]$.



For a given value of $Q$ from your data, you do the integral over $[-Q,Q]$ and get $1 - \alpha$. That $\alpha$ is the lowest level of significance at which the observed data will be significant.
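
A minimal Monte-Carlo sketch of this recipe (illustrative Python, not part of the original answer; it assumes the fair-die null $p_i = 1/6$ and uses the sup-norm for $Q$, with the null distribution of $Q$ estimated by simulation rather than computed exactly):

    import numpy as np

    def fair_die_pvalue(counts, n_sim=100_000, seed=0):
        # Q = max_i |X_i/N - 1/6|; estimate P(Q >= Q_obs) under the fair-die null.
        rng = np.random.default_rng(seed)
        counts = np.asarray(counts, dtype=float)
        N, p = counts.sum(), np.full(6, 1 / 6)
        q_obs = np.max(np.abs(counts / N - p))
        sims = rng.multinomial(int(N), p, size=n_sim)
        q_sim = np.max(np.abs(sims / N - p), axis=1)
        return float(np.mean(q_sim >= q_obs))

    # Example: 60 rolls with a suspicious excess of sixes
    print(fair_die_pvalue([8, 9, 10, 9, 8, 16]))

The returned value plays the role of the smallest significance level $\alpha$ at which the observed data would be declared significant.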



As I said, classical hypothesis testing is a very ugly theory. There are other approaches, such as minimax tests which you can construct via Bayes priors, since the set of all Bayes priors contains but is usually not much larger than the set of all admissible statistical procedures.

ag.algebraic geometry - Relative minimality for conic bundles

You might consult



http://hal.archives-ouvertes.fr/docs/00/46/16/98/PDF/modelsgeomratcompositio.pdf



Your own definition seems a little confusing. A conic bundle is a map whose fibers are conics. Of course conics are embedded objects so this requires some kind of definition of a conic structure on the fibers. Plane conics come in three versions, irreducible and smooth, two distinct lines, and a double line. Usually a conic bundle has only the first two types and the locus in the target over which fibers are two lines is called the discriminant.



Thus for a conic bundle over a surface, the discriminant would be the curve in that surface over which the fibers are reducible, rather than irreducible. However if that curve is empty then both statements are true. Indeed in the cited reference, minimal [complex] conic bundles are said to be those with empty discriminant curve. That paper studies real conic bundles for which the term minimal is said to apply to those with imaginary discriminant curve.



If instead of requiring an embedding inducing the structure of conic on the fibers, one makes a definition that the fibers are only abstractly isomorphic to conics (as in this reference), then I suppose you could blow up the source threefold along a curve meeting each fiber at most once, and change a curve of irreducible fibers into reducible ones. That would seem to be a "non-minimal" object you would want to exclude?



A general reference on [conic and] quadric bundles is Beauville's paper "Variétés de Prym et jacobiennes intermédiaires", Ann. Sci. de l'École Normale Sup. (4) 10 (1977), no. 3, p. 309 ff.



The concept of relatively minimal variety cited elsewhere here seems related to minimal model theory, and hence presumably to the sort of bad example I proposed. As for the definition of minimal in the paper first cited in this answer, it seems not to be of a sort that can always be achieved by modifications, i.e. it seems the threefold could be minimal and yet the discriminant curve still have real points. I am far from expert on this.

Sunday, 27 January 2013

pr.probability - approximate a probability distribution by moment matching

Suppose we want to approximate a real-valued random variable $X$ by a discrete random variable $Z$ with finitely many atoms. Suppose all moments of $X$ are finite. We want to match the moments of $X$ up to the $m^{\rm th}$ order:



(1) $\mathbb{E}[X^k] = \mathbb{E}[Z^k]$ for $k = 1, \ldots, m$.



Here is a positive result, which is a simple consequence of convex analysis (Caratheodory's theorem): there exists $Z$ with at most $m+1$ atoms such that (1) holds.



Here are my questions:
1) Is there a converse result about this? Say $X$ has an absolutely continuous distribution supported on $\mathbb{R}$ (e.g. Gaussian). When $m$ is large, given that $Z$ has only $m$ atoms, can we conclude that we cannot approximate all $2m$ moments of $X$ well, i.e., can we lower bound the error
$\max_{1 \leq k \leq 2m}|\mathbb{E}[X^k] - \mathbb{E}[Z^k]|$? My intuition is the following: for a Gaussian $X$, $\mathbb{E}[X^k]$ grows like $k^{\frac{k}{2}}$, superexponentially. When we find a $Z$ which matches all moments of $X$ up to $m$, it cannot catch up with the higher-order moments of $X$; if $Z$ matches all moments from $m+1$ up to $2m$, then its low-order moments will be quite different from those of $X$.



2) Is there an efficient algorithm to compute the locations and weights of the approximating discrete distribution? Does there exist a table recording these for approximating common distributions (e.g. Gaussian) for each fixed $m$? It could be very handy...
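
Regarding question 2, for the Gaussian case at least, Gaussian quadrature provides exactly such a construction: the $m$-point Gauss–Hermite rule for the weight $e^{-x^2/2}$, once its weights are normalized, gives a discrete distribution with $m$ atoms whose moments agree with the standard normal up to order $2m-1$. A minimal sketch (illustrative, not part of the original question):

    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite

    def gaussian_moment_atoms(m):
        # m-point Gauss-Hermite quadrature for the weight e^{-x^2/2}:
        # the normalized rule matches N(0,1) moments up to order 2m-1.
        x, w = hermegauss(m)
        return x, w / w.sum()

    x, w = gaussian_moment_atoms(4)
    for k in range(1, 8):
        print(k, np.dot(w, x ** k))   # odd moments ~ 0; E[X^2]=1, E[X^4]=3, E[X^6]=15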



3) I heard from folklore that when (1) holds, the total variation distance between their distributions can be upper bounded by, say, $e^{-m}$ or $1/m!$. Of course, this won't be true for a discrete $Z$. But let's say $X$ and $Z$ both have smooth and bounded densities on $\mathbb{R}$. Could this be true? Now the two characteristic functions match at $0$ up to the $m^{\rm th}$ derivative. They should be pretty close?

the moon - Meteorites bring water

The latest information suggests that Earth's water came from meteorites. The Moon was also bombarded by meteorites and yet it has no water. Is this because it lacks an atmosphere and its water was lost to space?



On further thought on this matter, it occurred to me that Mars has an atmosphere and yet it has no oceans either, just like the Moon. So Mars, I assume, was bombarded by meteorites just like the Earth was, and it has stronger gravity than the Moon, yet it doesn't have oceans either. So what happened?

Is Mars expected to go through the tail of Comet Siding Spring?

Short answer: Sort of



Long answer:



One week before the encounter, Wikipedia says no. The comet's nucleus will pass by Mars at a distance of about 139,000 km from the center of Mars - way too far for any predicted collision. The main tail of the comet, too, will most likely miss Mars by about "10 Mars diameters" - roughly 64,000 km. However, small quantities of dust particles could (and most likely will) stray away from the main tail and hit Mars, Deimos and Phobos. This graphic gives a good picture of the encounter:



[graphic of the encounter, omitted]



However, Mars may pass through the comet's coma and be hit by any dust particles there.



It will certainly be observable from the Martian surface: The comet will have an apparent magnitude of -6 at its peak. It will be quite a show, but nothing extraordinarily special.



The damage to orbiting spacecraft around Mars will likely be minimal. 90 minutes after the comet passes Mars, the worst of the dust particles will hit the spacecraft. The tiny barrage will only last about 20 minutes, though, and there is a low probability of damage. In fact, the various orbiters will take pictures of the comet and try to analyze it to learn more about its properties.



I hope this helps.



Other sources:



http://mars.nasa.gov/comets/sidingspring/



https://en.wikipedia.org/wiki/Mars



Update:



As of about an hour after the encounter, Mars still exists. Read this article for some not-so-juicy details.

Saturday, 26 January 2013

ag.algebraic geometry - Blow up along codimension one closed subscheme

I think it's not true :



Let $X=\operatorname{Spec}(A)$ with $A=k[x,y,z]/(x^2-y^2-z^2)$ be a quadratic cone. Let $Y$ be a line through the origin of the cone: its ideal is $I=(z,x-y)$. We calculate:



$$X'=\operatorname{Proj}_{A}A[t,u]/(zt-(x+y)u,\,(x-y)t-zu),$$ [EDIT: THE FORMULA HAS BEEN CORRECTED]



where, in the graded $A$-algebra $A \oplus I \oplus I^2 \oplus \cdots$, we denote by $t$ and $u$ the degree-one generators corresponding to $z$ and $x-y$.
Now, quotienting by $x$, $y$, and $z$, we calculate the fiber over the origin of this blow-up. It is $\operatorname{Proj}(k[t,u])$, which is a positive-dimensional projective line!
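
To spell out that last computation (a routine check, added here only for clarity): modulo $(x,y,z)$ both defining relations vanish identically, so

$$ X' \times_X \operatorname{Spec}\bigl(A/(x,y,z)\bigr) \;=\; \operatorname{Proj}\bigl(k[t,u]\bigr) \;\cong\; \mathbb{P}^1_k, $$

which is the positive-dimensional fiber referred to above.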

Friday, 25 January 2013

earth - How to calculate right ascension of Greenwich?

Right Ascension is a celestial coordinate, not a terrestrial one.



RA is used to locate a star on the sky, and is measured between a fixed point in the sky (the crossing of the ecliptic and the celestial equator) and the star's meridian.



It corresponds to the terrestrial coordinate longitude (declination is the analogue of latitude).



Thus, Greenwich has no Right Ascension.

dark energy - How can light reach us from 14 billion light years away?

One thing that I can't quite wrap my head around is how light is traveling to Earth from 14 billion light years away, aka the beginning of the universe.



The way I see it, the universe itself was very small 14 billion years ago and has been steadily accelerating in size since. So the photons that left a star that existed 14 billion years ago couldn't travel for 14 billion years because the universe itself was probably only a few million light years total in diameter. The universe is not expanding faster than the speed of light, so that photon should have hit the end of the universe before now.



What piece of this puzzle am I missing?

Can all areas on Earth experience a total solar eclipse?

A solar eclipse only puts part of Earth at a time in shadow, so I'm not talking about a solar eclipse that casts a shadow over all of Earth.



What I mean is: are there any areas on Earth where, no matter how Sol, Earth and Luna are aligned, a total solar eclipse is impossible? Can the penumbra cast a shadow on any part of Earth, or are there certain areas where a penumbra due to a solar eclipse cannot occur?

ag.algebraic geometry - Does formally etale imply flat for noetherian schemes?

This is a followup to an earlier question I asked: Does formally etale imply flat? After some remarks I received on MO I noticed that this was answered in the negative by an answer to an earlier question, "Is there an example of a formally smooth morphism which is not smooth?". However, the simple example involves a non-noetherian ring (in fact, a perfect ring; these are seldom noetherian unless they are a field).



So my challenge is to provide an example of a formally etale map of noetherian schemes which is not flat, or otherwise a proof that for maps of noetherian schemes formally etale implies flat.

Tuesday, 22 January 2013

fa.functional analysis - Local supporting points of Lipschitz functions

Let $X$ be a separable reflexive Banach space and $f:X\to\mathbb{R}$ be a Lipschitz function. Say that a point $x \in X$ is a local supporting point of $f$ if there exist $x^* \in X^*$ and an open neighborhood $U$ of $x$ such that either $x^*(y-x)\leq f(y)-f(x)$ for all $y \in U$ or $x^*(y-x)\geq f(y)-f(x)$ for all $y \in U$.



Question: is it true that the set of local supporting points of $f$ is dense in $X$?



This question is obviously related to differentiability; it might be difficult.



I would be very much interested to know whether it has been asked by others.

Monday, 21 January 2013

big list - Books you would like to see translated into English.

[Another answer contains this suggestion, but it's at the end of the answer and no details are given.]



I would rather like to read Kostrikin's Introduction to Algebra (the 2nd edition, published in 2000: Кострикин – Введение в алгебру). It is in 3 volumes: 'Basic algebra', 'Linear algebra', and 'Fundamental structures of algebra'. Approximately, they cover:



I – preliminaries, matrices & determinants, basics of groups rings & fields, complex & real polynomials



II – vector spaces & linear operators, euclidean hermitian affine & projective spaces, tensors



III – structures of various groups, basic representation theory, rings modules & algebras, Galois theory



The book begins with a discussion about what algebra is, a historical overview, and a set of substantial problems that can be solved with algebra as motivation. Each volume contains a number of figures (67 in total), many applications, and a discussion of open problems (e.g. the convergence of Newton's method, finite projective planes, the inverse Galois problem).



From the Zentralblatt review: "The distinguishing features of the book are the following ones: 1) clearness, clarity and compactness of exposition; 2) the concentric style of presentation; 3) variety of skilfully selected examples (from very simple to very complex ones)."



[Note that the 1st edition was translated, but it is about a third as long and covers far less.]

amateur observing - Time after sunset until star can be seen

Given a star's apparent location and apparent magnitude, how many degrees below the horizon must the Sun be for the star to be visible to the naked eye for an observer on Earth at a specified location and elevation? You can ignore all light pollution, though I understand there will be some error due to air quality/weather/atmospheric aberrations.



Is there such a formula? If not, why not and what methods of approximation are available?



(I figure the brightness of the afterglow of the Sun at a certain angle above the western horizon can be calculated for a certain time and place, and if it is less than the apparent brightness of the star in question, the star will be visible.)

Sunday, 20 January 2013

big bang theory - How is the universe bordered?

The models we're talking about are in the Friedmann–Robertson–Walker family of solutions of general relativity, which are used because they have the properties of being spatially homogeneous, i.e. they are the same at every point in space, and also isotropic, i.e. every direction is equivalent.



There are many possible finite yet boundaryless spatial geometries, and I recommend zibadawa timmy's answer for a simple illustration of a toroidal two-dimensional universe and the accompanying analogy of the classic arcade game Asteroids. It is probably the most straightforward illustration that being finite does not imply having a border, while at the same time making it intuitive that we don't need to actually consider a higher-dimensional space to embed it in, because the rules of Asteroids don't actually need an extra dimension. Such embeddings are just convenient visualizations.



However, where the toroidal universe differs from the FRW models is that it's not isotropic. You can see this by the fact that on a torus, going in some directions will get you back where you started, whereas going in others will wind you endlessly along the torus, never quite making it back exactly to where you started. Thus, not all directions behave the same way.



There are only four kinds of spatial geometries for the three-dimensional space of our universe that are homogeneous and isotropic: the Euclidean space $\mathbf{E}^3$, the hyperbolic space $\mathbf{H}^3$, the sphere $\mathbf{S}^3$, and the real projective space $\mathbf{RP}^3$, the last of which is like a sphere but with a different global topology.




So it's some kind of 4 dimension sphere (sphere in any direction you go in 3D) if the universe is finite?




A three-dimensional sphere, actually. Probably the most important intuitive leap here is that any particular embedding, or even whether there exists any embedding, is completely irrelevant. The surface of an ordinary beach ball is actually a two-dimensional sphere, and as a manifold it makes sense whether or not we think of it as the surface of some three-dimensional object. When we're talking about the universe, an embedding is a tool for visualization, not reality.



The universe could be a $3$-sphere, which we could think of as the surface of a ball in flat four-dimensional space, but there is no need to. That four-dimensional space wouldn't be part of our universe (not in general relativity, anyway), and such embeddings aren't unique. Actually, for completely general spacetime manifolds (i.e. if we try to embed spacetime in flat higher-dimensional spacetime), they're not even guaranteed to exist, which is a good thing because we don't need them--all our measurements are within the universe.

set theory - Behavior of externally-infinite elements in ultrapowers of $\langle HF,\epsilon\rangle$

As I pointed out in the comments, $(HF,{\in})$ is biinterpretable with $\mathbb{N}$, which means that the corresponding ultrapowers are biinterpretable too. So you will find all you need in the vast literature on nonstandard arithmetic.



A nice interpretation of $(HF,{\in})$ in $\mathbb{N}$ is given by defining $m \in n$ if the $m$-th binary digit of $n$ is $1$. For example, here are the first few coded sets:



  • $0$ codes $\varnothing$

  • $1 = 2^0$ codes $\{\varnothing\}$

  • $2 = 2^1$ codes $\{\{\varnothing\}\}$

  • $3 = 2^0 + 2^1$ codes $\{\varnothing,\{\varnothing\}\}$
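
A minimal sketch of this coding in executable form (illustrative Python, not part of the original answer):

    def mem(m: int, n: int) -> bool:
        # m "is an element of" n iff the m-th binary digit of n is 1.
        return (n >> m) & 1 == 1

    def decode(n: int) -> frozenset:
        # Unfold the code n into the hereditarily finite set it represents.
        return frozenset(decode(m) for m in range(n.bit_length()) if mem(m, n))

    for n in range(4):
        print(n, decode(n))   # reproduces the four examples above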

You can similarly interpret the ultrapower $HF^\omega/\mathcal{U}$ in the ultrapower $\mathbb{N}^{\omega}/\mathcal{U}$. If $\bar{m},\bar{n} \in \mathbb{N}^\omega$, the $\bar{m}$-th binary digit of $\bar{n}$ is first interpreted term-by-term, giving a sequence $\bar{b} \in \{0,1\}^\omega$ where each $b_i$ is the $m_i$-th binary digit of $n_i$. In $\mathbb{N}^\omega/\mathcal{U}$, $\bar{b}$ is evaluated as either $0$ or $1$, depending on which value occurs $\mathcal{U}$-often. This value tells you whether $\bar{m} \in \bar{n}$ according to the above interpretation.



Of course, you can simply compute things directly. Given sequences $\bar{x},\bar{y} \in HF^\omega$, we have $\bar{x} \in \bar{y}$ in the ultrapower $HF^\omega/\mathcal{U}$ if and only if $\{i : x_i \in y_i \} \in \mathcal{U}$. This gives exactly the same structure as interpreting sets in $\mathbb{N}^\omega/\mathcal{U}$ as described above.




It just occurred to me that you may be looking for a more set-theoretic description of the nonstandard elements of $HF^\omega/\mathcal{U}$.



The wellfounded part of $HF^\omega/\mathcal{U}$ is precisely $HF$ and no more. A sequence $\bar{x} \in HF^\omega$ will represent a wellfounded set in $HF^\omega/\mathcal{U}$ if and only if it has bounded rank mod $\mathcal{U}$, i.e. $\{i : \mathrm{rk}(x_i) < n\} \in \mathcal{U}$ for some $n < \omega$, in which case it will be constant mod $\mathcal{U}$ since there are only finitely many sets of rank less than $n$. If this is not the case, then $\langle\mathrm{rk}(x_i)\rangle$ evaluates to a nonstandard ordinal $N$ in $HF^\omega/\mathcal{U}$. This means that in $HF^\omega/\mathcal{U}$, we can (externally) find an infinite descending ${\in}$-chain starting with the evaluation of $\bar{x}$. So it is impossible to describe the evaluation of $\bar{x}$ as a real set.



Without peering into the depths of the nonstandard ${\in}$ relation, there is not much to the elements of $HF^{\omega}/\mathcal{U}$. Every element of $HF^\omega/\mathcal{U}$ has a bijection with a (possibly nonstandard) ordinal, so it looks exactly like an internal initial segment of the nonstandard ordinals.

soft question - Memorizing theorems

I tend to agree that, nine times out of ten, memorizing theorems is a Bad Thing. The best goal (IMO) is to get comfortable enough in your area and at your level that you can intuit whether or not some statement is true without necessarily knowing a proof (or an ad-hoc counterexample). Of course, you also want to know the best tools in your area, but that often comes with understanding it on a deeper level.



That said, yeah, there are often some technical theorems that are useful as black boxes (examples: Wagner's/Kuratowski's theorem in graph theory, the classification of finite simple groups, maybe structure theorems in closely related areas to the one in which you work), but if you use them often enough that it's an actual hassle to look up their statements, and they're reasonably simply stated, you'll probably memorize them through sheer force of habit.



So how do you get a feel for your corner of mathematics? Well, you just do math. When you read a paper, try to stay a step or two ahead of the author. (I don't remember who said this -- maybe it was in Terry Tao's excellent career advice page -- but this is a can't-lose proposition; if you correctly predict what they're about to do, then you get to feel the satisfaction of being right, and if you incorrectly predict what they're about to do, you get to see something unexpected and new.) Work textbook problems that don't give you the answers. Ask your own questions and try to answer them; if you can't answer them yourself, go to the literature! (Or here, although preferably after the literature.)



Maintain a list of motivating examples and counterexamples in your area. For instance, I think a lot about graph theory; if I want to see if a conjecture holds for all graphs, one of the first things I'll do is ask whether it's true for a random graph. Next, I might ask if it's true for the Petersen graph, for the 5-cycle, for complete graphs or for trees, or for sparse random graphs (a.k.a. expanders, for all intents and purposes.) If I can prove my conjecture -- or even give a heuristic argument -- in these special cases, then I can start wondering if it holds in general.



Try to understand more than one way to approach your subject; not everyone who's worked in it thought about things the same way, and you should be flexible to accommodate their intuitions. Going back again to graph theory, there are several different ways to view a graph. The simplest (and most standard) is as a symmetric binary relation on a set. But you can also think of it geometrically as "dots connected by lines," or topologically as the 1-skeleton of a simplicial complex -- not coincidentally, these two definitions are closely related. Algebraically, you can think of a graph as a certain kind of groupoid, which is closely related to its definition as a symmetric matrix. (This is actually also related to the topological/geometric definition, although less obviously -- the groupoid is a discrete version of the fundamental groupoid of a space.) A separate algebraic approach is to think of graphs as "generalized Cayley graphs," which seems silly but actually pays off big-time when you work with graph products. In computer science, the best way to represent a sparse graph is often as an adjacency or incidence list; this formulation is very often useful in algorithms.

nt.number theory - Irreducible polynomial over number field with roots in every completion?

Edit: I see I missed the note that $K$ might not be finite over $\mathbb{Q}$. This answer is not correct for $K$ of infinite degree over $\mathbb{Q}$, as in FC's answer above.



No. This is a consequence of the Chebotarev density theorem. To see how it follows, look at exercise 6 at the end of Cassels and Fröhlich's "Algebraic Number Theory".



Briefly, the Chebotarev density theorem says that for a Galois extension of global fields $L/K$ and for a finite set $S$ of places of $K$, the proportion of primes of $K$ splitting in $L$ is $1/[L:K]$. If $G=\text{Gal}(L/K)$ and $E$ is the fixed field of some $H\subset G$, it is possible to show that the proportion of places of $K$ with a split factor in $E$ is $|\bigcup_{\rho\in G}\rho H\rho^{-1}|/|G|$, and a lemma on finite groups says that this quotient is not $1$ unless $H=G$.
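
The finite-group lemma alluded to here is the standard fact that the conjugates of a proper subgroup cannot cover the whole group; a one-line count makes this precise:

$$ \Bigl|\bigcup_{\rho\in G}\rho H\rho^{-1}\Bigr| \;\le\; [G:H]\,(|H|-1)+1 \;<\; |G| \quad \text{whenever } H \subsetneq G, $$

since $H$ has at most $[G:H]$ distinct conjugates, all of which contain the identity.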



In your case, take $E=K[x]/(f)$ and $L$ to be a normal extension of $K$ containing $E$. Then "$v$ has a split factor" means "$f$ has a root in the completion $K_v$". If $f$ has a root in each completion (or even a set of completions with density $1$, which includes the case "all but finitely many"), we must have $H=G$ and $E=K$. So $f$ already had a root in $K$.

soft question - A website linking to most major math journals?

As a slight variation on the "Google" answer, Google Scholar sometimes actually works better. For some searches, the results found by Scholar will be surfaced to the ordinary google.com search results, but this doesn't always work as well as it should so it's best to explicitly try the Scholar version as well.

Saturday, 19 January 2013

Set-theoretic foundations for formal language theory?

Hi Anthony,



The question you're asking in your text isn't quite the same as the question in your title, but the answer to the title subsumes the one you ask in the text.



To define a grammar, we start with an alphabet $\Sigma$ and a set of nonterminal variables $N$, and then define a grammatical expression as an element of the formal semiring $G$ over $\Sigma + N$. (It's a good exercise to see how the semiring axioms let you convert grammatical expressions into their Backus normal form.) Next, a grammar is a map from $N \to G$ -- that is, it assigns a grammatical expression to each variable.



Now, consider the free monoid over $\Sigma$. Concretely, its elements are sequences of characters in $\Sigma$. A language is a set of these strings (i.e., an element of $\mathcal{P}(\Sigma^{*})$), and our goal is to give an interpretation sending nonterminals to languages. If we had a map $f$ sending nonterminals to languages, we could interpret grammatical expressions inductively, by interpreting the formal multiplication in the ring as concatenation (using the monoid operation) of elements in each language, and interpreting the formal sum as set union of languages.



If we have such an interpretation (a function $N \to \mathcal{P}(\Sigma^{*})$) and a grammar (a map $N \to G$), we can lift the grammar to an endofunction on interpretations -- that is, to a function $(N \to L) \to (N \to L)$. (I'm using $L$ for a set of strings due to jsMath flakiness -- it should be $\mathcal{P}(\Sigma^{*})$.)



The language for each nonterminal will be the least fixed point of this functional. (As a technical detail, to make this functional monotone, so that the Knaster-Tarski theorem applies, you need to ensure that each interpretation also unions the new language with the old argument language -- i.e., you take an interpretation $i$ sketched above and change it to $\lambda f.\;\lambda n.\; f(n) \cup i(n)$.)
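
Here is a minimal computational illustration of this fixed-point iteration (Python, purely illustrative and not part of the original answer; to keep the iteration finite, languages are truncated to strings of length at most MAX_LEN, and a grammatical expression is represented as a set of alternatives, each a tuple of terminals and nonterminals):

    from itertools import product

    MAX_LEN = 4  # truncate languages so the least fixed point is reached in finitely many steps

    def concat(L1, L2):
        # The monoid operation lifted to languages (formal multiplication), truncated.
        return {u + v for u, v in product(L1, L2) if len(u + v) <= MAX_LEN}

    grammar = {"S": {("a", "S", "b"), ()}}   # S -> a S b | empty

    def step(interp):
        # One application of the endofunction on interpretations N -> language.
        new = {}
        for n, alternatives in grammar.items():
            lang = set(interp[n])                  # union with the old value, for monotonicity
            for alt in alternatives:
                parts = [interp[s] if s in grammar else {s} for s in alt]
                prod = {""}
                for part in parts:
                    prod = concat(prod, part)      # formal product -> concatenation
                lang |= prod                       # formal sum -> union
            new[n] = lang
        return new

    interp = {n: set() for n in grammar}           # start from the bottom interpretation
    while (nxt := step(interp)) != interp:
        interp = nxt
    print(sorted(interp["S"], key=len))            # ['', 'ab', 'aabb']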



I should also add that this is all standard material, which should appear in every textbook on formal language theory. (I'm pretty sure it's in Sipser.)

nt.number theory - What are Mean Values of Ideal Densities in Galois Extensions?

In an unfinished (and as of now unpublished) article intended for the encyclopedia of mathematics, Arnold Scholz wrote:



"Classifying extensions according to the Galois group
of their normal closure provides us with a new point
of view. Not only the minimal discriminants but also
the mean values of the ideal densities differ considerably,
and have the following values for discriminants with large
prime factors:



  • $\sqrt{\zeta(2)}$ for quadratic extensions;

  • $\sqrt[3]{\zeta(3)^2}$ and $\sqrt{\zeta(2)}\sqrt[3]{\zeta(3)}$
    for a cubic extension according as it is cyclic or noncyclic;

  • $\sqrt{\zeta(2)}\sqrt{\zeta(4)}$ and $\sqrt{\zeta(2)}^3$
    for cyclic and biquadratic quartic extensions, respectively."

I'd like to know what Scholz is talking about here. Ideal density might be some limit of the form "number of ideals with norm $\le x$" / $x$, and mean value should denote some average over number fields. But what exactly is Scholz doing here?



Edit. Apparently (this is suggested by some remarks he made elsewhere), Scholz called the expression
$$ \prod_p \phi(p^n)/\Phi_K(p) $$
the ideal density of a number field $K$, where $\phi$ and $\Phi_K$ denote Euler's phi function in the rationals and in $K$, respectively, and where $n$ denotes the degree
of $K$. This expression occurs in the product formula for the zeta function. I still don't know where to go from here.



As for Robin's remark on the density of fields ordered by discriminants, Scholz claimed, in a letter to Hasse dated Sept. 27, 1938, the following: The Dirichlet series
$$ G(s) = \sum_{\mathrm{Gal}(K)=G} D_K^{-s}, $$
where the sum is over all quartic fields whose normal closure has Galois group $G$, have abscissa of convergence $\alpha(D)=1$, $\alpha(Z) = \alpha(V) = \frac{1}{2}$ and
probably $\alpha(S)=1$, $\alpha(A)=\frac{1}{2}$, where $D$, $Z$, $V$, $A$, $S$ denote
the dihedral, cyclic, four, alternating and symmetric group. Moreover,
$$ \lim_{s \to 1/2} \frac{Z(s)}{V(s)} = 0, $$
where $Z(s)$ and $V(s)$ are the Dirichlet series defined above for $G=Z$ and $G=V$. This is all correct, as we know now, but how could Scholz have discovered (and, for $G = D$, $Z$, $V$, proved) these results in the 1930s?

Friday, 18 January 2013

exoplanet - Colossus telescope, trying to outsmart aliens?

I was listening to Jeff Kuhn's talk
on SETI's Colossus telescope project.



Background:



He explains his theory that a civilization living somewhere in the galaxy would want to hide, and thus would soon learn to avoid broadcasting radio signals all over the galaxy, so as to avoid attracting attention (presumably to avoid interstellar predators).



Then he comes up with a great idea: due to the second law of thermodynamics, they would have to dissipate heat-energy, and thus one can detect them anyway, because they have no way to hide this heat-energy. This is true particularly when they are a Type I civilization nearing Type II; in this case, to avoid global warming, they would capture and use almost all solar energy, and emit an equivalent amount of heat-energy.



Thus he proposes a telescope to focus on single stars, which would be capable of detecting this heat-energy (by the distinctive infrared light emitted).



My question:



If a civilization was smart enough to hide all of its radio signals, wouldn't they know that they cannot avoid emitting heat-energy? Thus their predators would be able to find them anyway, and thus, it is pointless to hide the radio signals?



Therefore, we should already be seeing their radio signals, and this telescope is being built for a theory with some holes.



Note: I am not suggesting that the telescope shouldn't be built; I think it could have other very valuable uses, and it would also be interesting to eliminate the theory if it is indeed incorrect (or prove it correct ... somehow). I am just asking if there might be justifications for the theory, in light of my point-in-question, which I am missing.

solar system - Asteroids between Mars and Jupiter

Many models shown in books or on television show a very densely populated asteroid belt, but in fact the belt is mostly empty.



To answer your question, the inclinations of the asteroids vary a lot, going from 0° to 40°, although most of them are between 0° and 30°; see "The orbital element distributions of real and modelled asteroids". So yes, it would be 3-dimensional.

Thursday, 17 January 2013

mp.mathematical physics - Statement of Millenium Problem: Yang-Mills Theory and Mass Gap

The term "Yang-Mills theory" in the mass gap problem refers to a particular QFT. It is believed that this QFT (meaning its Hilbert space of states and its observable operators) should be defined in terms of a measure on the space of connections on $mathbb{R}^4$; roughly speaking, the moments of this measure are the matrix elements of the operator-valued distributions. (People also use the term "gauge theory" to refer to any QFT, like QCD, which has a Yang-Mills sub-theory.)



The mass gap problem really has two aspects: First, one has to construct an appropriate measure $d\mu$ on some space of connections. Then, one has to work out which functions on the space of connections are integrable with respect to this measure, and show that the corresponding collection of operators includes an energy operator (i.e. a generator of time translations) which has a gap in its spectrum.



You'll have to read the literature to really learn anything about this stuff, but I can make a few points to help you on your way. Be warned that what follows is a caricature. (Hopefully, a helpful one for people trying to learn this stuff.)



About the Measure:



First, the measure isn't really defined on the space of connections. Rather, it should be defined on the space $\mathcal{F}$ of continuous linear functionals on the space $\mathcal{S}$ of smooth rapidly-vanishing $\mathfrak{g}$-valued vector fields on $\mathbb{R}^4$, where $\mathfrak{g}$ is the Lie algebra of the gauge group $G$. The space $\mathcal{F}$ contains the space of connections, since any connection on $\mathbb{R}^4$ can be written as $d$ plus a $\mathfrak{g}$-valued $1$-form and paired with a vector field via the Killing form, but it also has lots of more "distributional" elements.



We're supposed to get $d\mu$ as the "infinite-volume/continuum limit" of a collection of regularized measures. This means that we are going to write $\mathcal{S}$ as an increasing union of chosen finite-dimensional vector spaces $\mathcal{S}(V,\epsilon)$; these spaces are spanned by chosen vector fields which have support in some finite volume $V \subset \mathbb{R}^4$ and which are slowly varying on distance scales smaller than $\epsilon$. (You should imagine we're choosing a wavelet basis for $\mathcal{S}$.) Then we're going to construct a measure $d\mu_\hbar(V,\epsilon)$ on the finite-dimensional dual vector space $\mathcal{F}(V,\epsilon) = \mathcal{S}(V,\epsilon)' \subset \mathcal{F}$; these measures will have the form $\frac{1}{\mathcal{Z}(V,\epsilon,\hbar)} e^{-\frac{1}{\hbar}S_{V,\epsilon,\hbar}(A)}dA$. Here, $dA$ is Lebesgue measure on $\mathcal{F}(V,\epsilon)$, and $S_{V,\epsilon,\hbar}$ is some discretization of the Yang-Mills action, which is adapted to the subspace $\mathcal{F}(V,\epsilon)$. (Usually, people use some version of Wilson's lattice action.)
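
For orientation, the Wilson lattice action mentioned here has the schematic form (a sketch only; conventions and normalizations vary):

$$ S_{V,\epsilon}(U) \;=\; \beta \sum_{p\subset V} \Bigl(1 - \tfrac{1}{N}\,\operatorname{Re}\operatorname{Tr} U_p\Bigr), \qquad U_p = \prod_{\ell \in \partial p} U_\ell, $$

where the sum runs over the plaquettes $p$ of a lattice of spacing $\epsilon$ filling $V$, the $U_\ell$ are group-valued link variables (morally $U_\ell \approx e^{\epsilon A}$ along the link), and the coupling $\beta$ is tuned together with $\epsilon$ in the continuum limit.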



Existence of the Yang-Mills measure means that one can choose $S_{V,\epsilon}$ as a function of $V$, $\epsilon$, and $\hbar$ so that the limit $d\mu_\hbar$ exists as a measure on $\mathcal{F}$ as $\mathrm{vol}(V)/\epsilon \to \infty$. We also demand that the $\hbar \to 0$ limit of $d\mu_\hbar$ is supported on the space of critical points of the classical Yang-Mills equations. (We want to tune the discretized actions to fix the classical limit.)



About the Integrable Functions:



Generally speaking, the functions we'd like to integrate should be expressed in terms of the "coordinate functions" which map $A$ to $A(f)$, where $f$ is one of the basis elements we used to define the subspaces $\mathcal{S}(V,\epsilon)$. You should imagine that $f$ is a bump vector field, supported near $x \in \mathbb{R}^4$, so that these functions approximate the map sending a $\mathfrak{g}$-valued $1$-form to the value $A_{i,a}(x)$ of its $(i,a)$-th component.



There are three warnings to keep in mind:



First, we only want to look at functions on $mathcal{F}$ which are invariant under the group of gauge transformations. So the coordinate functions themselves are not OK. But gauge invariant combinations, like the trace of the curvature at a point, or the holonomy of a connection around a loop are OK.



Second, when expressing observables in terms of the coordinate functions, you have to be careful, because the naive classical expressions don't always carry over. The expectation value of the function $A \mapsto A_{i,a}(x)A_{j,b}(y)$ with respect to $d\mu_\hbar$ (for $\hbar \neq 0$) is going to be singular as $x \to y$. This is OK, because we were expecting these moments to define the matrix elements of operator-valued distributions. But it means we have to be careful when considering the expectation values of functions like $A \mapsto A(x)^2$. Some modifications may be required to obtain well-defined quantities. (The simplest example is normal-ordering, which you can see in many two-dimensional QFTs.)



Finally, the real problem. Yang-Mills theory should confine. This means, very very roughly, that there are some observables which make sense in the classical theory but which are not well-defined in the quantum theory; quantum mechanical effects prevent the phenomena that these observables describe. In the measure-theoretic formulation, you see this by watching the expectation values of these suspect observables diverge (or otherwise fail to remain integrable) as you approach the infinite-volume limit.



About the Operators:



In classical Yang-Mills theory, the coordinate observables $A \mapsto A_{i,a}(x)$ satisfy equations of motion, the Yang-Mills equations. Moreover, in classical field theory, for pure states, the expectation value of a product of observables $\mathcal{O}_1\mathcal{O}_2\cdots$ is the product of the individual expectation values. In quantum YM, the situation is more complicated: coordinate observables may not be well-defined, thanks to confinement, and in any case, observables only satisfy equations of motion in a fairly weak sense: if $\mathcal{O}_1$ is an expression which would vanish in the classical theory thanks to the equations of motion, then the expectation value of $\mathcal{O}_1\mathcal{O}_2\cdots$ is a distribution supported on the supports of $\mathcal{O}_2\cdots$.

Wednesday, 16 January 2013

set theory - Mostowski collapses and universal extensional relational classes

The answer to your first question is no; unfortunately, it is not functorial.



Here is a comparatively simple counterexample. The idea is
that two objects in $A$ can collapse to the same point, but
gain new elements in $A'$ that differentiate them. Let
$A={a,b}$ have two objects, with $R$ the empty
relation. So the Mostowski collapse of $(A,R)$ maps both
$a$ and $b$ to the emptyset $varnothing$. Let
$A'={a,b,c}$, with $cmathrel{R'}b$, but otherwise $R'$
is trivial. So the Mostowski collapse of $(A',R')$ maps
$a,cmapstovarnothing$ and $bmapsto{varnothing}$.
Let $f:Ato A'$ be the inclusion map, which is a
homomorphism, since $a$ and $b$ have no relation with
respect to $R$ or $R'$. Since we have $G(a)= G(b)$ but
$G'(f(a))neq G'(f(b))$, there can be no commutative
diagram here.
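For what it's worth, the counterexample is easy to see by machine as well. Here is a small Python sketch (my own, purely illustrative) that computes the collapse $G(a) = {G(b) : b mathrel{R} a}$ of a finite well-founded relation by recursion, encoding hereditarily finite sets as frozensets:

    def collapse(domain, R):
        # Mostowski collapse of a finite well-founded relation.
        # domain: iterable of points; R: set of pairs (b, a) meaning b R a.
        # Returns the map a -> G(a), where G(a) = {G(b) : b R a}.
        G = {}
        def g(a):
            if a not in G:
                G[a] = frozenset(g(b) for (b, c) in R if c == a)
            return G[a]
        for a in domain:
            g(a)
        return G

    # A = {a, b} with the empty relation; A' = {a, b, c} with c R' b;
    # f is the inclusion of A into A'.
    G  = collapse({'a', 'b'}, set())
    Gp = collapse({'a', 'b', 'c'}, {('c', 'b')})

    print(G['a'] == G['b'])     # True: both collapse to the empty set
    print(Gp['a'] == Gp['b'])   # False: b gains the element ∅ and collapses to {∅}

So $G(a)=G(b)$ while $G'(f(a))neq G'(f(b))$, exactly as above.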



But why not insist on extensionality? In the category of extensional class relations, that is, if your structures $(A,R)$ and $(A',R')$ satisfy
extensionality, then the Mostowski collapse is functorial. This is because $G$
and $G'$ are isomorphisms, and it is easy to see that there
is the induced homomorphism $h:Mto M'$ defined by
$h(G(a))=G'(f(a))$. This is a homomorphism since $G(a)in
G(b)iff amathrel{R} biff f(a)mathrel{R'}f(b)iff
G'(f(a))in G'(f(b))$, and the diagram commutes by
definition.



For the second question, in this extensional case, since
$(M',{in})$ is isomorphic to $(A',R')$, then we get a map
$(M,{in})to (A',R')$, as you say. It is just the map
$hcirc G^{-1}$. In general, there is not a unique map from
$(M,{in})$ to $(A',R')$, however, since for example, when
$(A,R)$ is a well-order, then the Mostowski collapse is an
ordinal, but if $(A',R')$ is a longer ordinal, there could
be many homomorphisms of $(M,{in})$ to $(A',R')$, since
these are just the order-preserving maps. But this map will
be unique such that the diagram commutes, since it is
determined by those isomorphisms. (But I think you were not
actually asking this about the extensional case. I'm not
sure what you mean by "adding transitivity", since the
Mostowski collapse of any structure is a transitive set,
even when the relations are not extensional.)



If you really don't want extensionality, then it would also be sensible to restrict the homomorphism concept to functions $f:Ato A'$ which not only preserve the relation (in both directions), but which also respect equivalence of predecessors; this amounts to being well-defined on the Mostowski collapse. That is, in this category, we think of $langle A,Rrangle$ as a code for its Mostowski collapse, and the notion of homomorphism should respect that. In this case, the Mostowski collapse will again be functorial.



Lastly, let me mention that several other restrictions of your homomorphism concept are often studied. Namely, in set theory the concept of a $Sigma_n$-elementary map is prominent for natural numbers $n$. Even $Sigma_0$-elementary goes beyond the basic concept of homomorphism that I think you intend, but we are often interested in $Sigma_1$-elementary embeddings or even fully elementary embeddings. Many large cardinal concepts, for example, are characterized by the existence of such embeddings defined on the entire universe into a transitive class.

Tuesday, 15 January 2013

What’s the object between the earth and the sun currently showing in Google maps?

That's the Moon all right, and it's definitely real and definitely there. If you go outside and look at the Sun right now, the Moon will be almost but not quite on top of it, though it's impossible to make out due to the Sun's glare. If the alignment were any closer, we would have had a solar eclipse around the time of the New Moon, which took place 20 minutes ago. Two weeks from now at Full Moon, it'll be in even better alignment, though this time on the opposite side of the Earth from the Sun, which will result in a total lunar eclipse. Mark your calendar!

sg.symplectic geometry - cotangent bundle symplectic reduction and fibre bundles

This is more a comment towards Gourishankar than an answer to the original question.
It was part of my thesis (UCB, about 1986), so, apologies, I chime in. For simplicity,
I take the case $G$ Abelian and $H$ trivial.
To map $J^{-1} (mu)$ equivariantly to $J^{-1}(0)$, subtract $mu cdot A$, where $A$ is any $G$-connection for $pi: X to X/G$. We have $J^{-1} (0)/G = T^* (X/G)$
canonically, independent of the connection. The "momentum shift" map
of subtracting $mu cdot A$ from covectors is not symplectic
relative to the standard structure, but it becomes symplectic if you subtract
$mu pi^* F_A$, where $F_A = curv(A)$, from the standard structure. So the reduced space at $mu$ is $T^*(X/G)$ with the standard structure minus the "magnetic term"
$mu F_A$.



For non-Abelian $G$ ($H$ still trivial), it is easier to explain things in Poisson terms.
$T^* X/G$ is a Poisson manifold whose symplectic leaves
are the reduced spaces in question. The momentum
shift trick turns it into $T^* (X/G) oplus Ad^* (X)$, where $Ad^* (X) to X/G$
is the co-adjoint bundle associated to $X to X/G$; its fibers are copies of the
dual of the Lie algebra of $G$. This direct sum bundle admits coordinates $s_i, p_i, xi_a$, where $s_i, p_i$ are canonical coordinates on $T^*(X/G)$ induced by coordinates $s_i$ on $X/G$,
and where $xi_a$ are fiber-linear coordinates on the co-adjoint bundle induced
by a choice of local section of $pi$. The main tricky part of the bracket is that
the bracket of $p_i$ with $p_j$ is $sum_a xi_a F^a_{ij}$, $F$ being the curvature of the connection
relative to the choice of local section. The symplectic leaves, i.e. the reduced spaces,
are of the form $T^*(X/G) oplus $(co-adjoint orbit bundle).

algebraic k theory - $K_0$ of a non-separated scheme

This question is on "computing" the Grothendieck group of the projective $n$-space with $m$ origins ($mgeq 1$). For any (noetherian) scheme $X$, let $K_0(X)$ be the Grothendieck group of coherent sheaves on $X$.



Firstly, let me sketch that $K_0(mathbf{P}^n) cong K_0(mathbf{P}^{n-1})oplus K_0(mathbf{A}^n)$.



Let $X=mathbf{P}^n$ be the projective $n$-space.



(I omit writing the base scheme in the subscript. In fact, you can take any noetherian scheme as a base scheme in the following, I think.)



Let $Hcong mathbf{P}^{n-1}$ be a hyperplane with complement $Ucong mathbf{A}^n$. By a well-known theorem on Grothendieck groups, we have an exact sequence of abelian groups $$K_0(H) rightarrow K_0(X) rightarrow K_0(U) rightarrow 0.$$ Now, let $i:Hlongrightarrow X$ be the closed immersion. Then the first map in the above sequence is given by the "extension by zero", which in this case is just the K-theoretic push-forward $i_!$, or even better, just the direct image functor $i_ast$. Now, there is a projection map $pi:Xlongrightarrow H$ such that $picirc i = textrm{id}_{H}$.



By functoriality of the push-forward, we conclude that $pi_! circ i_ast = pi_! circ i_! = textrm{id}_{K_0(H)}$.



Therefore, we may conclude that $i_ast$ is injective and that we have a split exact sequence $$0 rightarrow K_0(H) rightarrow K_0(X) rightarrow K_0(U) rightarrow 0.$$ Thus, we have that $K_0(mathbf{P}^n) cong K_0(mathbf{P}^{n-1})oplus K_0(mathbf{A}^n)$.
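Just to spell out what this gives (assuming the base scheme is a field, so that $K_0(mathbf{A}^n)congmathbf{Z}$ and $K_0(mathbf{P}^0)congmathbf{Z}$), iterating the splitting recovers the familiar answer:

$$K_0(mathbf{P}^n) cong K_0(mathbf{P}^{n-1}) oplus mathbf{Z} cong cdots cong K_0(mathbf{P}^0) oplus mathbf{Z}^{n} cong mathbf{Z}^{n+1}.$$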



Q1: Let $mathbf{P}^{n,m}$ be the projective $n$-space with $m$ origins ($mgeq 1$). For example, $mathbf{P}^{n,1} = mathbf{P}^n$. (Again the base scheme can be anything, I think.) Now, is it true that $$K_0(mathbf{P}^{n,m}) cong K_0(mathbf{P}^{n-1,m}) oplus K_0(mathbf{A}^n)?$$



Idea1: Take a hyperplane $H$ in $mathbf{P}^{n,m}$. Is it true that $Hcong mathbf{P}^{n-1,m}$ and that its complement is $mathbf{A}^n$? Also, even though the schemes are not separated, the closed immersion $i:Hlongrightarrow mathbf{P}^{n,m}$ is proper, right? Also, is the projection $pi:mathbf{P}^{n,m}rightarrow H$ proper? If yes, the above reasoning applies. If no, how can one "fix" the above reasoning? I think that in this case one could still make sense out of $i_!$ and $pi_!$ (even if they are not proper maps.)



Idea2: Maybe it is easier to show that $K_0(mathbf{P}^{n,m}) cong K_0(mathbf{P}^{n-1})oplus K_0(mathbf{A}^{n,m})$, where $mathbf{A}^{n,m}$ is the affine $n$-space with $m$ origins. Then one reduces to computing $K_0(mathbf{A}^{n,m})$...



Idea3: One could also take $m=2$ as a starting case and look at the complement of one of the origins. Then we get a similar exact sequence as above and one could reason from there.



Which of these ideas do not apply and which do?



Note: Suppose that the base scheme is a field. Since $K_0(mathbf{A}^n) cong mathbf{Z}$ and $K_0(mathbf{P}^n) cong mathbf{Z}^{n+1}$, this would show that $$K_0(mathbf{P}^{n,m}) cong mathbf{Z}^{n+m}.$$ More generally, if $S$ is the base scheme, $K_0(mathbf{P}^{n,m}) cong K_0(S)^{n+m}$.

ag.algebraic geometry - Understanding the definition of the Lefschetz (pure effective) motive

For all those who are unlikely to have answers to my questions, I provide some background.





In some sense, pure motives are generalisations of smooth projective varieties. Every Weil cohomology theory factors through the embedding of smooth projective varieties into the category of pure Chow motives.



Pure effective motives



In the definition of pure motives (say over a field k), the last step is to take the category of pure effective motives and formally invert the Lefschetz motive L.



The category of pure effective motives is the pseudo-abelian envelope of a category of correspondence classes, which has as objects smooth projective varieties over k and as morphisms X → Y cycle classes in X×Y of dimension dim X (think of it as a generalisation of morphisms, where morphisms are included as their graphs), where an adequate equivalence relation is imposed, to have a well-defined composition (therefore the word "classes"). When the adequate relation is rational equivalence, the resulting category is called the category of pure effective Chow motives.
In each step of the construction, the monoidal structure of one step defines a monoidal structure on the next step.



For more background see Ilya's question about the yoga of motives.



Definition of the Lefschetz motive



The Lefschetz motive L is defined as follows:



For each point p in P¹ (1-dimensional projective space over k), there is the embedding morphism Spec k → P¹, which can be composed with the structural morphism P¹ → Spec k to yield an endomorphism of P¹. This is an idempotent, since the other composition Spec k → Spec k is the identity.
The category of effective pure motives is pseudo-abelian, so every idempotent splits, and thus [P¹] = [Spec k] + [something] =: 1+L, where the summand [something] is now named the Lefschetz motive L.



Properties



The definition of L doesn't depend on the choice of the point p.



From nLab and Kahn's leçons I learned that the inversion of the Lefschetz motive is what makes the resulting monoidal category a rigid monoidal category - while the category of pure effective motives is not necessarily rigid.



In the category of pure motives, the inverse $L^{-1}$ is called T, the Tate motive.





These questions are somehow related to each other:




  1. Why is this motive L called "Lefschetz"?

  2. Why is its inverse $L^{-1}$ called "Tate"?

  3. Why is it exactly this construction that "rigidifies" the category?

  4. Would another construction work, too - or is this somehow universal?

  5. How should I think of L geometrically?



I have almost no background in number theory, so even if you have good answers, it may remain totally unclear to me why the name "Tate" intervenes. I expect, however, that the name "Lefschetz" has something to do with the Lefschetz trace formula. I guess that the procedure of inverting L is the only one which makes the category rigid, in some universal way, but I have no idea why. In addition, I guess there is no "geometric" picture of L.



If I made any mistakes in the background section, feel free to edit. As I'm currently taking a first course on motives, I may now have asked something completely stupid. If this is the case, please point me politely to some document which will then enlighten me or, at least, let me ascend to a higher level of confusion.



UPDATE: Thanks so far for the answers; questions 1-4 are now clear to me. It remains open whether the "rigidification" could be accomplished by another construction, maybe some universal way to turn a monoidal category into a rigid one? Then one could later identify the Lefschetz motive as some kind of generator of the kernel of the rigidification functor.
The geometric intuition, to think of L as a curve and of $L^{otimes d}$ as a d-dimensional manifold, remains fuzzy, but I hope this becomes clear once I have worked a little on the classical Lefschetz/Poincaré theorems and the proof of the Weil conjectures for Betti cohomology (is this hope justified?).

Monday, 14 January 2013

big list - What are the worst notations, in your opinion ?

My favorite example of bad notation is using $textrm{sin}^2(x)$ for $(textrm{sin}(x))^2$ and $textrm{sin}^{-1}(x)$ for $textrm{arcsin}(x)$, since this is basically the same notation used for two different things ($textrm{sin}^2(x)$ should mean $textrm{sin}(textrm{sin}(x))$ if $textrm{sin}^{-1}(x)$ means $textrm{arcsin}(x)$).



It might not be horrible, since it rarely leads to confusion, but it is inconsistent notation, which should be avoided in general.

Sunday, 13 January 2013

set theory - Adding a random real makes the set of ground model reals meager

The proof is based on the fact that there is a decomposition ${bf R}=Acup B$ of the reals such that $A$ and $B$ are (very simple) Borel sets, $A$ is meager, $B$ is of measure zero, and ${bf R}=Acup B$ still holds if we reinterpret the sets after forcing. Now let $s$ be a random real. If $rin {bf R}$ is an old real, then $snotin r+B$ (a random real avoids every measure-zero Borel set coded in the ground model, and $r+B$ is such a set), so $sin r+A$; that is, the meager set
$s-A$ contains all old reals.
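For concreteness, here is one standard choice of such a decomposition (a sketch I am adding, not part of the original answer): fix an enumeration $(q_n)_{n geq 1}$ of the rationals and set $$B = bigcap_{m geq 1} bigcup_{n geq 1} (q_n - 2^{-n-m}, q_n + 2^{-n-m}), qquad A = {bf R} setminus B.$$ Then $B$ is a dense $G_delta$ of measure zero (the $m$-th union has measure at most $2^{1-m}$), so $A$ is a meager $F_sigma$, and both definitions can be reinterpreted verbatim in any forcing extension.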

nt.number theory - An everywhere locally trivial line bundle

The following example was provided to me by Colliot-Thélène some years ago : Let $X$ be the complement in $mathbb{P}_{1,mathbb{Q}}$ of the three closed points defined by $x^2=13$, $x^2=17$, $x^2=221$. Then $operatorname{Pic}(X)=mathbb{Z}/2mathbb{Z}$ but $operatorname{Pic}(X_v)=0$ for every place $v$ of $mathbb{Q}$.

Friday, 11 January 2013

ag.algebraic geometry - Family of Enriques surfaces and Grothendieck-Riemann-Roch

This is a somewhat technical remark, related to Andrea's answer, which is a bit too big to fit into the comment box.



If $f: Y rightarrow T$ has connected fibres, to conclude that
$R^0f_*mathcal O_Y = mathcal O_T$, one needs some assumptions beyond just that $f$ is a projective morphism of Noetherian schemes. (Consider these examples: a closed embedding will have connected fibres. To give such an example in which all fibres are non-empty, consider a non-reduced $T$, and let $Y$ be the underlying reduced subscheme. Or one could take $T$ to be a cuspidal cubic curve and $Y$ to be its normalization.)



What the theorem on formal functions shows (assuming that $f$ is projective, and that $Y$ and $T$ are Noetherian, so we can apply the result as it is proved in Hartshorne) is that
for any point $P$ in $T$, the $mathfrak m_P$-adic completion $(R^0f_*mathcal O_Yhat{)}_P$
is equal to $H^0(hat{Y}_P,mathcal O)$, the global sections of the structure sheaf on the
formal fibre $hat{Y}_P$ over $P$.



So if $f$ has connected fibres, and hence connected formal fibres, so that
$H^0(hat{Y}_P,mathcal O)$ is a local ring, we see that $(R^0f_*mathcal O_Yhat{)}_P$ is a finite local $hat{mathcal O}_{T,P}$-algebra. In general, one can't do better than this.



But, if $f$ is flat with geometrically connected and reduced fibres (e.g. $f$ is smooth with geometrically connected fibres), then base-change for flat maps (Hartshorne III.9.3) shows
that the fibre mod $mathfrak m_P$ of $R^0f_*mathcal O_Y$ is equal to
$H^0(Y_P,mathcal O_{Y_P})$ (the actual fibre over $P$, now, not the formal fibre),
which equals $k(P)$ (the residue field at $P$), since $Y_P$ is projective, geometrically reduced, and geometrically connected over $k(P)$.



So, maintaining these assumptions on $f$, we see that for each point $P$ of $T$,
the stalk $(R^0f_*mathcal O_Y)_P$
is a finite $mathcal O_{T,P}$-algebra with the property that its reduction modulo
$mathfrak m_P$ is isomorphic to
the residue field $k(P)$ of ${mathcal O}_{T,P}$. This implies (by Nakayama)
that the natural map ${mathcal O}_P rightarrow (R^0f_*mathcal O_Y)_P$
is surjective. This is true at every $P$, and so we see that
$mathcal O_T rightarrow R^0f_*mathcal O_Y$ is surjective.



Now one can combine this with the Grauert result to conclude (since a surjection
of invertible sheaves is necessarily an isomorphism) that the natural map
$mathcal O_T rightarrow R^0f_*mathcal O_Y$ is an isomorphism. (We probably don't
need to use the full force of Grauert here; for example, suppose that $T$ is connected;
a flat map is open, and a projective map is closed, so $f$ is surjective, hence faithfully
flat, and this implies that the map $mathcal O_T rightarrow R^0f_*mathcal O_Y$ is
injective, I think.)



Added: See Keerthi Madapusi's answer below for a correction to the above discussion
of flat base-change.

vector bundles - On finite endomorphisms of $mathbf{P}^r$

The claimed computation is still wrong. Let $m equiv r mod n$, with $0 leq r < n$. Then the right answer is that
$$pi_* mathcal{O}(m) = mathcal{O}( lfloor (m+1)/n rfloor-1)^{oplus(n-r-1)} oplus mathcal{O}( lceil (m+1)/n rceil-1)^{oplus(r+1)}.$$



Let $S$ be the source $mathbb{P}^1$ and $T$ the target. As a general piece of advice, you should never identify two spaces when you don't have to. Here are three ways you could get this answer:



By Grothendieck-Riemann-Roch



Let $L_S$ be the line bundle $mathcal{O}(1)$ on $S$ and $L_T$ the line bundle $mathcal{O}(1)$ on $T$. Let $H_S$ and $H_T$ be the hyperplane classes in $H^*(S)$ and $H^*(T)$. The Chern character of $L_S^{otimes m}$ is $e^{m H_S} = 1+m H_S$ (since $H_S^2 = 0$). The Todd classes of $S$ and $T$ are $1+H_S$ and $1+H_T$. So $pi_* L_S^{otimes m}$ is something with Chern character
$$(1+H_T)^{-1} pi_*left( (1+m H_S) (1+H_S) right) = (1+H_T)^{-1} pi_*left( 1+(m+1) H_S right).$$
Note that $pi_* 1 = n$ and $pi_* H_S = H_T$. So we get
$$ch(pi_* mathcal{O}(m)) = (1+H_T)^{-1} (n+(m+1) H_T)=n + (m-n+1) H_T.$$
(We know that $R^1 pi_*$ vanishes because the map is finite.)



From the leading term, we see that $pi_* mathcal{O}(m)$ has rank $n$. It is not completely obvious that it is torsion free. If we assume it is, then it must be of the form $bigoplus mathcal{O}(a_i)$ for some $a_1, ldots, a_n$. We see from the above computation that $sum a_i = m-n+1$.



That's as far as we can get from GRR. Grothendieck-Riemann-Roch can only do the computation in $K$-theory, so we can't distinguish $mathcal{O}(-1) oplus mathcal{O}(1)$ from $mathcal{O}(0) oplus mathcal{O}(0)$.



Directly in K-theory



The point of Grothendieck-Riemann-Roch is that it gives a commuting diagram
$$begin{matrix} K^0(S) & longrightarrow & H^*(S) \\ downarrow & & downarrow \\ K^0(T) & longrightarrow & H^*(T). end{matrix}$$



I usually find that it is just as easy to do my computations directly on the left hand side of the diagram. Let $p_S$ and $p_T$ be the class of the structure sheaf of a point on $S$ and $T$. We have the short exact sequence $0 to mathcal{O}(-1) to mathcal{O}(0) to mathcal{O}_{mathrm{pt}} to 0$, so $p_S = 1-L_S^{-1}$ and $L_S = 1+p_S$. (Since $p_S^2=0$.)



Clearly, $pi_* p_S = p_T$. So
$$pi_* mathcal{O}(m) = pi_* (1+p_S)^m = pi_* (1+m p_S) = pi_* 1 + m p_T.$$



We can see that $pi_* 1$ has rank $n$; say $pi_* 1 = n+a p_T$.
Let $chi$ be pushforward to the $K$-theory of a point, better known as holomorphic Euler characteristic.
Since pushforward is functorial, the sequence of maps $S to T to mathrm{pt}$ shows that $$chi(pi_* 1) = chi(1) = 1$$
so $n+a=1$ and $a = -(n-1)$. We see that
$$pi_* mathcal{O}(m) = n + (m-n+1) p_T.$$



By direct computation



It is easy enough to do this example, and any toric example like it, directly from the definition of pushforward. As a bonus, this will tell us exactly which vector bundle we get, not just the $K$-class.



Let $S_1 cup S_2$ be the open cover of $S$ where $S_1 = mathrm{Spec} k[s]$ and $S_2 = mathrm{Spec} k[s^{-1}]$. Similarly, define $T_1$, $T_2$, $k[t]$ and $k[t^{-1}]$.



Let $e$ be a generator for the free $k[s]$ module $mathcal{O}(m)(S_1)$. Then $s^m e$ is a generator of $mathcal{O}(m)(S_2)$. By definition, $left( pi_* mathcal{O}(m) right) (T_1)$ is $mathcal{O}(m)(S_1)$ considered as a $k[t]$-module. As such, it has basis
$$ e, s e, s^2 e, ldots s^{n-1} e. quad (*)$$
Similarly, $left( pi_* mathcal{O}(m) right) (T_2)$ has basis
$$ s^m e, s^{m-1} e, ldots, s^{m-n+1} e. quad (**)$$



Reorder the lists $(*)$ and $(**)$ so that corresponding elements have exponents of $s$ which are congruent modulo $n$. To keep notation simple, I'll do the case of $m=0$. So we pair off:



  • $e$ and $e$

  • $s e$ and $s^{-n+1} e = t^{-1} (s e)$

  • $s^2 e$ and $s^{-n+2} e = t^{-1} (s^2 e)$
    and so forth.

There is one time that we pair $v$ with itself and $(n-1)$ times that we pair $v$ with $t^{-1} v$. So the transition matrix between our bases is diagonal with entries $(1,t^{-1}, t^{-1}, ldots, t^{-1})$ and the vector bundle is $mathcal{O}(0) oplus mathcal{O}(-1)^{n-1}$.



For general $m$, if I didn't make any errors, we get the formula at the beginning of the post.
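If you want to sanity-check the bookkeeping, here is a small Python script (my own sketch, not part of the original argument). Under the same sign convention as in the $m=0$ case, the summand paired with the basis vector $s^j e$ has transition entry $t^{lfloor (m-j)/n rfloor}$, so the splitting should be $bigoplus_{j=0}^{n-1} mathcal{O}(lfloor (m-j)/n rfloor)$; the script compares this with the closed formula at the top of the post and with the constraint $sum a_i = m-n+1$.

    from collections import Counter

    def splitting_from_pairing(n, m):
        # Twist of the summand paired with s^j e, for j = 0, ..., n-1:
        # the transition entry is t^{floor((m-j)/n)}, i.e. O(floor((m-j)/n)).
        return Counter((m - j) // n for j in range(n))

    def splitting_from_formula(n, m):
        # The closed formula at the top of the post, with r = m mod n.
        r = m % n
        lo = (m + 1) // n - 1            # floor((m+1)/n) - 1
        hi = -((-(m + 1)) // n) - 1      # ceil((m+1)/n) - 1
        out = Counter()
        out[lo] += n - r - 1
        out[hi] += r + 1
        return +out                      # drop zero-multiplicity entries

    for n in range(2, 7):
        for m in range(-10, 15):
            a = splitting_from_pairing(n, m)
            assert a == splitting_from_formula(n, m)
            assert sum(k * v for k, v in a.items()) == m - n + 1
    print("pairing and closed formula agree")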

star - Stellar mass database

Check out the on-line SIMBAD astronomical database. You can even make separate lists based on criteria, for example: all stars between 10 and 20 parsecs (convert parallax to distance via parsecs = 1/parallax in arcsec, with 1 parsec = 3.262 light-years), with spectral class between G0 and G9, and declination from 0 to +30 degrees. It's pretty cool and shows numerous listings of distance values, brightness, metallicity, etc., complete with the date and source of each study. I just looked again and masses aren't listed (sorry). Masses aren't directly measured anyway (except in the case of binaries where we know the distance). Masses are essentially estimated from the Hertzsprung-Russell diagram together with the mass-luminosity relation, and can be refined if we can determine the age by looking at spectra. Back to Sol-Station or Wikipedia, or I should say their references.
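Since the parallax-to-distance conversion comes up above, here is a tiny Python helper (my own sketch, using the same rough constants as in the answer: distance in parsecs = 1/parallax in arcsec, and 1 parsec is about 3.262 light-years):

    def parallax_to_distance(parallax_arcsec):
        # Distance from annual parallax: d[pc] = 1 / p[arcsec], 1 pc ~ 3.262 ly.
        parsecs = 1.0 / parallax_arcsec
        return parsecs, 3.262 * parsecs

    # Example: Proxima Centauri's parallax is roughly 0.768 arcseconds.
    pc, ly = parallax_to_distance(0.768)
    print(f"{pc:.2f} pc, {ly:.2f} ly")   # about 1.30 pc, 4.25 ly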

lo.logic - Exponent function as uninterpreted function in first order logic

The function $f(m,n)=m^n$ is primitive recursive, so expressible in
first-order arithmetic: there is a formula $F(m,n,p)$ in three free variables
over the language of first-order arithmetic
such that, for numerals $m$, $n$ and $p$, the statement $F(m,n,p)$ is provable in Peano arithmetic iff $p=m^n$.
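To see the primitive recursive structure concretely, here is a sketch (mine, purely illustrative) of the usual build-up by primitive recursion, successor to addition to multiplication to exponentiation, which is exactly the structure the representability theorem exploits:

    def add(m, n):
        # add(m, 0) = m;  add(m, n+1) = successor(add(m, n))
        return m if n == 0 else add(m, n - 1) + 1

    def mul(m, n):
        # mul(m, 0) = 0;  mul(m, n+1) = add(mul(m, n), m)
        return 0 if n == 0 else add(mul(m, n - 1), m)

    def exp(m, n):
        # exp(m, 0) = 1;  exp(m, n+1) = mul(exp(m, n), m)
        return 1 if n == 0 else mul(exp(m, n - 1), m)

    print(exp(3, 4))   # 81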



Logic texts (e.g. Boolos and Jeffrey) will prove that primitive recursive
functions can be expressed in this way, but the general method does
not tend to provide nice formulas for concrete examples like this.

Wednesday, 9 January 2013

soft question - Which math paper maximizes the ratio (importance)/(length)?

Here is my list (in no specific order):



(*) A proof of Ehrenfeucht's conjecture about infinite systems of equations in free groups
and semigroups by Victor Guba:
V.S.Guba "Equivalence of infinite systems of equations in free groups and semigroups to
finite subsystems", Mathematical notes of the Academy of Sciences of the USSR, September 1986, Volume 40, 3, pp 688-690.



(*) A.A.Razborov, “Lower bounds on monotone complexity of the logical permanent”, Math.
Notes USSR, 37:6 (1985), 485–493.
As Laszlo Lovasz put it in his talk "The Work of A.A.Razborov" (can be easily found on the
Internet):
In an area where any step forward seemed almost hopeless (but which was at the
same time a central area of theoretical computer science) his results meant that deep
methods could be developed and to obtain strong lower bounds for algorithms was not
impossible.



(*) Isaac Newton "The mathematical principles of natural philosophy" - in this case the (finite) length of the work does not matter, since the importance is infinite :)

Tuesday, 8 January 2013

general relativity - Are astronomers researching or trying to find signs of worm holes?

This has been considered long ago (Here's a paper talking about this).



Wormholes are not forbidden by physics, but the creation of wormholes is iffy ground. There are two possible paths one can take to create a wormhole:



  • Choose a pre-existing wormhole in the quantum foam and "expand" it by feeding it exotic matter.

  • "Tear and sew up" space — we're not sure if this is allowed by physics, as it ventures into the area of physics that we don't have an adequate explanation for.

Whatever it is, the creation and sustenance of a wormhole requires us to have control over exotic matter (in this case, particles/waves with negative mass/energy density). Vacuum fluctuations already have regions of negative energy density, but they are a quantum phenomenon that current technology cannot control.



"Faster than light travel" is a different matter, though. While wormholes let one jump to another point in space, one does not attain a speed greater than c whilst in them.

Monday, 7 January 2013

distances - Is the light we see from stars extremely old?

Yes, the speed of light in vacuum (or c) is 299,792,458 m/s and one light-year is the distance the light travels in one Julian year (365.25 days), which comes out as 9.4605284 × 10¹⁵ meters. Since c is the maximum speed at which all energy, matter, and information in the Universe can travel, it is the universal physical constant on which the light-year (ly) as one of the astronomical units of length is based.



That means that visible light as an electromagnetic radiation cannot travel faster than c and in one Julian year it can traverse a maximum distance of




d = t * c

where d is the distance in meters, t is the time in seconds, and c is the speed of light in vacuum in meters per second.



If we calculate this distance for a 4.243 ly distant object, that comes out as 4.243 * 365.25 * 86,400 s * 299,792,458 m * s⁻¹ or exactly 40,141,879,395,160,334.4 meters (roughly 40 trillion kilometers or 25 trillion miles).
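The same arithmetic in a couple of lines of Python (just a sketch restating the computation above):

    c = 299_792_458                  # speed of light in vacuum, m/s
    julian_year = 365.25 * 86_400    # seconds in a Julian year

    def light_travel_distance(years):
        # Distance in metres that light travels in the given number of Julian years.
        return years * julian_year * c

    print(light_travel_distance(4.243))   # about 4.014e16 m, roughly 40 trillion km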



That is the distance the light has traveled since it was last reflected off (or, in our case, emitted from, since Proxima Centauri is a red dwarf star) the surface of a celestial object, to become visible 4.243 Julian years later at our observation point, in this case our planet Earth, from where the distance to Proxima Centauri you quoted was measured.



The more powerful the telescope, the further into the past we can see, because the light from more distant objects is older! This holds regardless of the distance of the object you're observing, but astronomy is particularly neat in this regard: we can observe objects so distant that we see them from the time when they were still forming.



For further reading on other units used to measure far away objects you might be interested in reading this question on the parsec.

higher category theory - Is every left fibration of simplicial sets with nonempty fibers a trivial Kan fibration?

In Lemma 2.1.3.4 of Higher Topos Theory, the statement of the lemma requires that the fibers are not only nonempty but contractible. However, in the proof, I don't see where contractibility is directly used, only the fact that the fibers are nonempty. There is one other place where contractibility is mentioned: "Since the boundary of this simplex maps entirely into the contractible Kan complex $S_t$, it is possible to extend $f'$ to $X(n+1)$." However, I don't see how contractibility directly factors in, since that would only attest to the uniqueness of the extension. The existence of the extension comes from the fact that the inclusion $partial Delta^n times Delta^1 subseteq X(n+1)$ is left anodyne, that $S_t$ is a nonempty Kan complex, and that the map $f'$ factors through the inclusion of $S_t$.



Please correct me if I'm wrong. Also, there is a relevant post on meta where I first asked if this question is appropriate, and I was greenlighted by Anton.

Sunday, 6 January 2013

telescope - Why is the moon a fuzzy, white ball?

Your telescope is not focused (most likely), or is having some major collimation issues (less likely).



Try and move the eyepiece back and forth a little. Go through the whole range of the focuser. You must catch the primary focal plane in order for the image to become clear.



If that doesn't work, pull the eyepiece out a few mm and try to move through the whole range again.



If that doesn't work, try a different eyepiece.




The "advice" regarding the Moon filter is bogus. Be careful regarding what you hear on the Internet. The eye is certainly capable of adapting to various levels of brightness. Even with very large telescopes, the Moon filter is useless. All it does is reduce the brightness and contrast - the former is no big deal, but the latter is a major loss. You can never have enough contrast in a telescope.



When the Moon is visible, there's no point in going through deep dark adaptation anyway, because the "faint fuzzies" are obscured by the Moon's glare. When I watch the Moon, I do it from a fully illuminated backyard, or even on the street under the street lights. In fact, this is the best way to observe this object - no filters, but have some ambient light around you. Your eyes will function at their optimal parameters.



Deep dark adaptation is only needed when observing faint nebulae and galaxies. But such objects can be observed in good conditions only when the Moon is below horizon.



Let me make it clear: a filter, any filter, will not fix the problem you're having, because it does not address the root cause.



Beware of most cases when the advice you receive is "use a filter". In 99% of cases, it's bogus. Many people own filters, but only a very small fraction know how to use them. In the vast, vast majority of cases you don't need any filters. Please steer clear of this superstition. It's a fad that vendors are more than happy to feed, because it makes them money.



There are legitimate ways to use filters (in the rare cases when they're justified), but that would be the subject of a different discussion - those rare cases are related to certain techniques of increasing the apparent contrast in the image. A neutral density filter (a.k.a. "moon filter") always decreases apparent contrast.



In this hobby, like in most other hobbies, a lot of people fixate on purchasing all sorts of accessories (in this case: filters), hoping to get extra performance, when all they need to do in reality is learn how to use the device / machine / etc (in this case: telescope) correctly. I see this tendency in all my other hobbies. It's unfortunate, and a huge money sink.



Save the money and use it instead towards purchasing a better telescope, or better eyepieces, or a good sky atlas, or an observing chair, etc.

spectroscopy - Why is $g$ tied to the oscillator strength $f$ in $log{gf}_{odot}$

You don't make it clear, but you may be confused about what $g$ is. It is the statistical weight of an atomic energy level.



The $gf$ value, which refers to a particular transition between two energy levels in an atom/ion, is used because there is symmetry in terms of emission/absorption processes once the statistical weight is taken into account.
$$g_1 f_{12} = - g_2 f_{21}$$



If you just quoted $f$, then you would also need to know what the appropriate statistical weight was in order to calculate transition probabilities.

Saturday, 5 January 2013

gr.group theory - The finite subgroups of SU(n)

This is really a comment on the top answer above, but since new users can't comment, I'll let someone else manually transfer the information to the right place.



There is a further mistake in the list of Fairbairn, Fulton and Klink (repeated in the list of Hanany and He), which appears to be a misunderstanding of the classification by Blichfeldt et al. Two of the cases in that classification consist of semidirect products of abelian groups by $A_3$ and $S_3$. However, it is not specified which abelian groups can occur in this fashion!



Fairbairn, Fulton and Klink mistakenly assume that the abelian group in question has to be $mathbb{Z}/nmathbb{Z} times mathbb{Z}/n mathbb{Z}$ for some positive integer $n$, thus giving rise to the groups they denote $Delta(3n^2)$ and $Delta(6n^2)$. However, this is not the case.



Example 1: $A_3$ acts on the copy of $mathbb{Z}/7mathbb{Z}$ generated by the diagonal matrix with entries $e^{2pi i/7}, e^{4 pi i/7}, e^{8 pi i/7}$; this example occurs inside the exceptional subgroup of order 168. More generally, if $m,n$ are positive integers and $m^2+m+1 equiv 0 pmod{n}$, then $A_3$ acts on the copy of $mathbb{Z}/nmathbb{Z}$ generated by the diagonal matrix with entries $e^{2pi i/n}, e^{2m pi i/n}, e^{2m^2 pi i/n}$.
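Here is a quick numerical sanity check of Example 1 (my own sketch, for $n=7$, $m=2$): conjugating the diagonal generator by the cyclic permutation of coordinates, an order-3 element of $SU(3)$, returns a power of the same generator, so the cyclic group of order 3 really does act on this copy of $mathbb{Z}/7mathbb{Z}$.

    import numpy as np

    zeta = np.exp(2j * np.pi / 7)
    D = np.diag([zeta, zeta**2, zeta**4])      # generates the copy of Z/7Z
    P = np.array([[0., 0., 1.],
                  [1., 0., 0.],
                  [0., 1., 0.]])               # 3-cycle of coordinates, det = +1

    assert np.isclose(np.linalg.det(D), 1)     # D lies in SU(3)
    assert np.allclose(P @ D @ P.T, np.linalg.matrix_power(D, 4))
    print("conjugation by the 3-cycle sends D to D^4")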



Example 2: $S_3$ acts on the copy of $mathbb{Z}/3mathbb{Z} times mathbb{Z}/9mathbb{Z}$ generated by the diagonal matrices with entries $e^{2pi i/9}, e^{2pi i/9}, e^{14 pi i/9}$
and $1, e^{2pi i/3}, e^{4pi i/3}$; this example occurs inside the exceptional subgroup of order 648.



I don't know a reference for the complete classification of the abelian groups that can occur inside the semidirect product. Yau and Yu don't say any more than Blichfeldt et al, though they do at least provide a helpful rewrite of the classification in modern language.

monoidal categories - Calculating 6j-symbols (aka Racah-Wigner coefficients) for quantum groups

We can also use partitions ($k$) (symmetric powers) instead of ($1^k$) on one or two of the edges. This still gives just scalars, but includes the full story for sl(2).



This problem seems to be equivalent to the problem of computing the exchange operator
in the tensor product of two (quantum) symmetric or exterior powers of the vector representation of the quantum sl(n), e.g. $S^kVotimes S^mV$, in the standard basis of the symmetric (resp., exterior) powers (about exchange operators see my paper with Varchenko arXiv:math/9801135 and my ICM talk arXiv:math/0207008). This can be computed if you know the fusion operator for these representations, which can be computed efficiently using the ABRR (Arnaudon-Buffenoir-Ragoucy-Roche) equation, see e.g. the appendix to arXiv:math/9801135. I am not sure if the answer is completely worked out anywhere, but there are at least some answers. For instance, see the paper arXiv:q-alg/9704005 where something is done even in the elliptic setting (which relates to elliptic 6j symbols of Frenkel-Turaev). What they do is compute the matrix elements for $m=1$, but the general $m$ can be obtained using the fusion procedure. This should be a really nice computation with a nice answer of the type you are expecting. In particular, in a special case you'll get coefficients of Macdonald's difference operators attached to symmetric powers. In the exterior powers case (or a product of a symmetric and an exterior power) the answer will be simpler, since $k$ cannot get larger than $n$ (in this case you should perhaps get a pure product, and it should be completely covered in the above paper).



EDIT: Remark. Let $V,W$ be representations of the quantum group with 1-dimensional zero weight spaces. Then a natural basis of ${rm Hom}(L_lambda, (Votimes L_mu)otimes W)$ is given by compositions of intertwiners in one order, while a natural basis of ${rm Hom}(L_lambda, Votimes (L_muotimes W))$ is given by compositions of intertwiners in the other order. Thus, the 6j-symbol matrix (which is by definition the transition matrix between these two bases) is the exchange operator for intertwiners.

co.combinatorics - Maximum degree in maximal triangle free graphs

There is a sequence of Kneser graphs, generalizing the Petersen graph, that comprises a counterexample.



Let $k ge 1$ be an integer and let $G$ be a graph whose vertices are subsets of size $k$ of ${1,2,ldots,3k-1}$. Connect two vertices $A$ and $B$ by an edge if they are disjoint as subsets. Then $G$ has no triangles, because there isn't room for three disjoint subsets. On the other hand, if $A$ and $B$ are not connected by an edge, which is to say they are not disjoint, there is room for a third set $C$ which is disjoint from both of them. Thus if you add any edge $(A,B)$ to this graph $G$, it forms a triangle with $(A,C)$ and $(B,C)$.



Now let's count vertices and edges. The graph $G$ has $binom{3k-1}{k}$ vertices, and each vertex has degree $binom{2k-1}{k}$. QED



(It is the Petersen graph when $k=2$.)
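A brief Python check (my own sketch, standard library only) that these Kneser graphs really are maximal triangle-free, and a peek at their size and valence; $k=2$ gives the Petersen graph.

    from itertools import combinations

    def kneser_maximal_triangle_free(k):
        ground = range(1, 3 * k)      # the set {1, 2, ..., 3k-1}
        V = [frozenset(c) for c in combinations(ground, k)]
        adj = {v: {w for w in V if v.isdisjoint(w)} for v in V}

        # Triangle-free: adjacent vertices have no common neighbour.
        assert all(not (adj[v] & adj[w]) for v in V for w in adj[v])
        # Maximal: every non-adjacent pair already has a common neighbour.
        assert all(adj[v] & adj[w]
                   for v, w in combinations(V, 2) if w not in adj[v])

        return len(V), {len(adj[v]) for v in V}

    print(kneser_maximal_triangle_free(2))   # (10, {3}): the Petersen graph
    print(kneser_maximal_triangle_free(3))   # (56, {10})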




This is partly plagiarizing from David's insightful answer below, but I can't resist an addendum to his remarks. In the paper The triangle-free process, Tom Bohman simplifies the construction of Kim that David cites. He makes a maximal triangle-free graph on $n$ vertices using the simplest plausible method: the random greedy algorithm. The result is a graph that is statistically very predictable. Its independence number is almost always $Theta(sqrt{n log n})$, and therefore so is its maximum valence. Its average valence is also in the same class. You can easily make graphs like this yourself with a simple computer program and see their properties. It's ironic, but a common theme, that a very simple random algorithm is more efficient than a highly symmetric construction such as the Kneser graphs.



As David explains, you get an immediate lower bound of maximum valence $Omega(n^{1/2})$ for any graph of diameter 2 or even radius 2. The Kneser graphs above have valence $O(n^alpha)$ with $alpha = log_{27/4}(4) approx 0.726$. So the Kim-Bohman result is much stronger, and that's why he pointed out Kim's paper.



In fact, this result closes a circle for me. A triangle-free graph on $n$ vertices is also a type of "packing design" in which each triple of vertices only has room for two edges. The original paper that introduced the random greedy algorithm in the topic of packing designs is by Gordon, Kuperberg, Patashnik, and Spencer. In that paper, we were looking at packing designs at the opposite end, for instance choosing triples of points with a random greedy algorithm such that no edge is contained in more than one triple. (The paper says covering designs, but at our end of the asymptotics, they are almost the same as packing designs.) It's the same highly predictable phenomenon at both ends. One of the ideas in this paper was to simplify a fancier construction using "Rödl nibbles" to the random greedy algorithm. Bohman is doing the same thing (but with much stronger asymptotics than in our paper), because Kim also uses Rödl nibbles.