Thursday, 28 February 2013

fields - Existence of maximal totally ramified extensions of an arbitrary CDVF

Let $K$ be a complete, discretely valued field with (let's say) perfect residue field $k$. We have a unique maximal unramified extension $K^{unr}$ of $K$ and a unique maximal tamely ramified extension $K^{tame}$ of $K$ and hence short exact sequences



$1 \rightarrow \mathrm{Gal}(K^{sep}/K^{unr}) \rightarrow \mathrm{Gal}(K^{sep}/K) \rightarrow \mathrm{Gal}(K^{unr}/K) \rightarrow 1$



and



$1 \rightarrow \mathrm{Gal}(K^{tame}/K^{unr}) \rightarrow \mathrm{Gal}(K^{tame}/K) \rightarrow \mathrm{Gal}(K^{unr}/K) \rightarrow 1$.



In the second case, the normal subgroup is abelian and I know exactly what the action of the quotient on it is: the tame cyclotomic character. Therefore if it splits, I know its structure as an explicit semidirect product.



In the most famous case, $k$ is finite, so $\mathrm{Gal}(K^{unr}/K) = \mathrm{Gal}(k^{sep}/k) \cong \widehat{\mathbb{Z}}$ is a projective profinite group, so both sequences certainly split. This means that I (and lots of other people) do know the structure of the tame Galois group explicitly: it is $\prod_{\ell \neq p} \mathbb{Z}_{\ell}(1) \rtimes \widehat{\mathbb{Z}}$. Similarly the first sequence splits so there is a totally ramified extension $L/K$ such that $K^{sep}/L$ is unramified. Moreover, this is a very useful fact: it follows for instance that any abelian variety over $K$ with potentially good reduction acquires good reduction over a totally ramified base extension.



What is known in general? We have $\mathrm{Gal}(K^{unr}/K) = \mathrm{Gal}(k^{sep}/k)$, but if $k$ is almost anything else reasonable -- e.g. a local or global field, or the function field of a variety -- then its absolute Galois group certainly will not be projective. What is known about the splitting of these two short exact sequences in general, and especially about the class $\eta \in H^2(\mathrm{Gal}(K^{unr}/K),\mathrm{Gal}(K^{tame}/K^{unr}))$ defined by the second sequence? Is there information on how the analogues of the above results do / do not work out if these sequences do not split?

space time - Does matter accumulate just outside the event horizon of a black hole?

Yes, you are absolutely right, from OUR VIEWPOINT it does.



From Kip Thorne's book "Black Holes and Time Warps: Einstein's Outrageous Legacy."



“Like a rock dropped from a rooftop, the star’s surface falls downward (shrinks inward) slowly at first, then more and more rapidly. Had Newton’s laws of gravity been correct, this acceleration of the implosion would continue inexorably until the star, lacking any internal pressure, is crushed to a point at high speed. Not so according to Oppenheimer and Snyder’s relativistic formulas. Instead, as the star nears its critical circumference, its shrinkage slows to a crawl. The smaller the star gets, the more slowly it implodes, until it becomes frozen precisely at the critical circumference. No matter how long a time one waits, if one is at rest outside the star (that is, at rest in the static external reference frame) one will never be able to see the star implode through the critical circumference. That is the unequivocal message of Oppenheimer and Snyder’s formulas.”



“Is this freezing of the implosion caused by some unexpected, general relativistic force inside the star? No, not at all, Oppenheimer and Snyder realized. Rather, it is caused by gravitational time dilation (the slowing of the flow of time) near the critical circumference. Time on the imploding star’s surface, as seen by static external observers, must flow more and more slowly, when the star approaches the critical circumference, and correspondingly everything occurring on or inside the star including its implosion must appear to go into slow motion and then gradually freeze.”



“As peculiar as this might seem, even more peculiar was another prediction made by Oppenheimer and Snyder’s formulas: Although, as seen by static external observers, the implosion freezes at the critical circumference, it does not freeze at all as viewed by observers riding inward on the star’s surface. If the star weighs a few solar masses and begins about the size of the sun, then as observed from its own surface, it implodes to the critical circumference in about an hour’s time, and then keeps right on imploding past criticality and on in to smaller circumferences.”



“By looking at Oppenheimer and Snyder’s formulas from the viewpoint of an observer on the star’s surface, one can deduce the details of the implosion, even after the star sinks within its critical circumference; that is one can discover that the star gets crunched to infinite density and zero volume, and one can deduce the details of the spacetime curvature at the crunch.” P217-218



OK, so from our perspective all the matter will be clustered around the critical circumference and no further. That's fine: this shell can, in theory, exert all the forces required on the external universe, such as gravitational attraction, a magnetic field, etc. The point-like singularity, which is in the indefinite future of the black hole (from our point of view), indeed in the indefinite future of the universe itself, could not exert such forces on this universe. This singularity is only "reached" as an observer rides in past the critical circumference and, through the process of time dilation, reaches the end of the universe.



This is obviously an area of active research and thinking. Some of the greatest minds on the planet are approaching this issue in different ways and have not yet reached a consensus, though intriguingly one appears to be beginning to emerge.



http://www.sciencealert.com/stephen-hawking-explains-how-our-existence-can-escape-a-black-hole



Stephen Hawking said at a conference in August 2015 that he believes that "information is stored not in the interior of the black hole as one might expect, but on its boundary, the event horizon." His comment refers to the resolution of the "information paradox," a long-running physics debate in which Hawking eventually conceded that the material that falls into a black hole isn't destroyed, but rather becomes part of the black hole.



Read more at: http://phys.org/news/2015-06-surface-black-hole-firewalland-nature.html#jCp



In the mid-90s, American and Dutch physicists Leonard Susskind and Gerard 't Hooft also addressed the information paradox by proposing that when something gets sucked into a black hole, its information leaves behind a kind of two-dimensional holographic imprint on the event horizon, which is a sort of ‘bubble’ that contains a black hole through which everything must pass.



What occurs at the event horizon of a black hole is very hard to understand. What is clear, and what follows from General Relativity, is that from the viewpoint of an external observer in this universe, any infalling matter cannot proceed past the critical circumference. Most scientists then change the viewpoint to explain how, from the viewpoint of an infalling observer, they will proceed in a very short period of time to meet the singularity at the centre of the black hole.
This has given rise to the notion that there is a singularity at the centre of every black hole.



However this is an illusion, as the time it will take to reach the singularity is essentially infinite to us in the external universe.



The fact that the matter cannot proceed past the critical circumference is perhaps not an “illusion” but very real. The matter must from OUR VIEWPOINT become a “shell” surrounding the critical circumference. It will never fall through the circumference while we remain in this universe. So to talk of a singularity inside a black hole is incorrect. It has not happened yet.



The path through the event horizon does lead to a singularity in each case, but it is indefinitely far in the future in all cases. If we are in this universe, no singularity has yet been formed. If it has not been formed yet, where is the mass?  The mass is exerting pull on this universe, correct?  Then it must be IN this universe.  From our point of view it must be just this side of the event horizon.



ASTONISHINGLY, IT MAY BE POSSIBLE TO PROVE THIS. The recent announcement of gravitational waves detected from the merger of two black holes was accompanied by an unverified but potentially matching gamma ray burst from the same area of the sky. This is inexplicable from the conventional viewpoint, which holds that all the matter would be compressed into a singularity and would be incapable of coming out again.



If 2 black holes merge and emit gamma rays… the above is certainly an explanation which is also consistent with General Relativity. The mass never quite made it through the event horizon (from our viewpoint) and was perturbed by the huge violence of the merger, some escaping. It may be a deep gravitational well, but a very powerful gamma ray should just be able to escape given the right kick (attraction by an even larger black hole approaching).



Further, more refined observations of similar events, which are likely to be reasonably frequent, may provide more evidence. There is not likely to be any other credible explanation.

mg.metric geometry - Convex n-polytope general position vectors to general position vectors of tetrahedron

I asked this question in a comment to this question, but got no response. I thought that perhaps it needed more exposure, so I made it a question in itself.



Define a set of general position vectors $v_i$ given in the following way:



Consider a convex $n$-polytope, $P_n$, and let $f_i$ be its $(n-1)$-polytope "faces" (facets).



Associate to the face $f_i$ a vector $v_i$ such that $\left\|v_i\right\|=V_{f_i}^{n-1}$, the $(n-1)$-volume of $f_i$. Each vector $v_i$ has its basepoint at the center of mass of the associated face $f_i$ and is normal to that face.




I have two questions:



  1. Is this notation strange? Are these notions defined elsewhere that I don't know about and perhaps should reference?

  2. Now consider a triangulation of $P_n$ into $n$-tetrahedra, given by $T^j$. Each of these tetrahedra has a set of general position vectors $\lbrace v^j_i\rbrace$. Can we determine these general position vectors from the initial general position vectors? In particular, can we characterize triangulations of this type by relations on the general position vectors of the tetrahedra?



Note: In case you are wondering the motivation of this question, let me make it a little more clear. I was looking at this paper by John Baez, following the work of Barrett-Crane on the same subject, and in particular the construction of constraints for face-bivectors of tetrahedron on page 4 of the latter.

Wednesday, 27 February 2013

ag.algebraic geometry - Finding divisors on a curve

Hey Elijah, the answer to your question is quite simple, elementary and explicit! You don't need to read up on anything fancy. Here it goes:



Fact: The canonical divisor of a smooth affine hypersurface is zero! In particular the canonical divisor of your curve defined by $f(x,y)$ is $0$, since, as you have mentioned, the curve has a smooth projectivisation (so the curve itself must be smooth).



Proof: Let $X\subset\mathbb{A}^2$ be the affine curve defined by $f(x,y)$, which we are assuming to be smooth. Define the open sets $U_1,U_2$ in the plane by $\frac{df}{dx}\neq 0$ and $\frac{df}{dy}\neq 0$. Then $y$ and $x$ are local parameters in $U_1$ and $U_2$ respectively, and the forms $dy$ and $dx$ are bases of $\Omega^{1}[U_1]$ over $k[U_1]$ (respectively $\Omega^{1}[U_2]$ over $k[U_2]$). However, let us choose more convenient bases, namely $\omega_1=-\frac{dy}{df/dx}$ and $\omega_2=\frac{dx}{df/dy}$ on $U_1$ and $U_2$ respectively. This is permissible since the denominators don't vanish on the respective open sets. Now note that on $U_1\cap U_2$ both forms are equal, since $\frac{df}{dx}dx+\frac{df}{dy}dy=0$; therefore they patch to give a form $\omega$ that is regular and everywhere nonzero on $X$, so that $\operatorname{div}\omega=0$ on $X$. In other words the canonical divisor is zero.



Note: This works analogously for any smooth affine hypersurface.
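

For readers who like to check such identities by machine, here is a small sympy sketch (my own addition, not from the original answer) that verifies the patching step on a sample curve: writing $dy = -(f_x/f_y)\,dx$ on the overlap, the coefficients of $dx$ in $-dy/f_x$ and $dx/f_y$ agree. The curve $f = y^2 - x^3 - x$ is just an illustrative choice; the check is purely formal and says nothing about regularity or nonvanishing.

import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - x**3 - x                 # sample smooth affine plane curve (illustrative choice)
fx, fy = sp.diff(f, x), sp.diff(f, y)

# On the curve, df = fx*dx + fy*dy = 0, so dy = -(fx/fy)*dx wherever fy != 0.
dy_coeff = -fx / fy                 # coefficient of dx in dy

omega1 = -dy_coeff / fx             # coefficient of dx in omega_1 = -dy/(df/dx)
omega2 = sp.Integer(1) / fy         # coefficient of dx in omega_2 =  dx/(df/dy)

print(sp.simplify(omega1 - omega2)) # 0: the two local forms agree on the overlap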



P.S.: Quoting an exact sequence is not a substitute for making even one small and simple calculation. Hope this motivates you for more algebraic geometry!

the sun - What is the current estimation for how much time the Sun will function properly?

Just to add to Undo's answer: after the expansion to a red giant, the Sun will become a planetary nebula, where (according to the link) the fusion reactions inside the star are 'overtaken' by the gravitational collapse, in turn causing the inner layers to condense and heat up, causing the outer layers to be blown away. After that, the hot core forms the remnant, a white dwarf star (NASA source), which is likely to last for several billion more years.



The image below depicts the current ideas of the expected lifecycle and timeline of the life of the sun:



Solar lifecycle



Image source



How do we know what will happen to the sun?



Currently, the main method used to determine the solar lifecycle, as described in the article "The Future of the Sun" (Cowing, 2013), is:




Studying stars with the same mass and composition as the Sun, the so-called "solar twins," can give us more information about our own Sun; solar twins of various ages offer snapshots of the Sun's evolution at different phases




Where the mass and chemical composition of a star provide the information needed to determine its lifecycle.

nt.number theory - Field of Definition of a Meromorphic Function

Question



Let $X$ be a smooth, projective curve over the algebraic closure of $\mathbb{Q}$. Let $f:X\to\mathbb{P}^1$ be a meromorphic function. Assume that the zeros and the poles are defined over some number field $K$. Then does this imply that $cf$ is defined over $K$, for some $c$?



If so, do we have to assume that the zeros and poles are individually defined over K, or would this work if they are collectively defined over K as well?



Clarification



By being collectively defined I mean that there's some $K$-model of $X$, say $X_K$, and a closed subscheme $Y_K$ of $X_K$, such that after base change $Y_K$ becomes the ramification locus of $f$.



Thoughts



Let $D:=(f)$. Obviously, $H^0(X,\mathcal{O}(D))$ is 1-dimensional. $D_K:=(f_K)$ will also be of degree 0, so all that's left to show is that $H^0(X_K,\mathcal{O}(D_K))$ is also nonzero. This smacks of some invariance of cohomology theorem. I couldn't quite find the right one to use.



If this line of argument works, this seems to imply that the ramification locus may be defined merely collectively.

Orbits in a binary star system

The point you appear to refer to is called the Lagrangian point $L_1$.
This point is a saddle point of the gravitational potential, hence it is not stable in the strict sense.
Two other Lagrangian points, called $L_4$ and $L_5$, can be stable, provided the orbiting objects are of small mass in comparison to the two main bodies of the system and the masses of the binary components are sufficiently different.



According to theorem 4.1 of this paper, $L_4$ and $L_5$ are stable in all directions if and only if the mass ratio of the two main binary components satisfies $\frac{m_1}{m_2}\geq\frac{25+3\sqrt{69}}{2}\approx 24.9599$.
According to theorem 3.1 of the same paper all Lagrangian points are stable in z-direction, which is the direction perpendicular to the orbital plane of the binary system.
(Credits for this corrected version go to user DylanSp.)

star - How would the pocket cellular clock work?

In a museum in Lviv, I saw a pocket cellular clock. I don't have a photo, but it was a small disc that had 2 or 3 filaments in it which were pointed at the stars (one of them was Andromeda, I think). By analyzing the position of the stars it was possible to determine the time of night. I'm not sure if knowing the day of the year was promised.



I couldn't find any information about the way the clock was used. Does anyone know how such a device was used?

ag.algebraic geometry - What is the difference between K-theory and K-cohomology of a stack with coefficients in Vect

In B. Toën's thesis, he defines two concepts: K-theory and K-cohomology with coefficients in Vect. What is the difference between them? Better yet, is there some simple example, like the weighted projective space $\mathbb{P}(1,2)$, in which the difference can be seen directly?

Tuesday, 26 February 2013

star - How bright was Scholz's Star when it passed near the Sun 70,000 years ago?

Currently, Scholz’s Star is a small, dim red dwarf in the
constellation of Monoceros, about 20 light-years away. However, at the
closest point in its flyby of the solar system, Scholz’s Star would
have been a 10th magnitude star — about 50 times fainter than can
normally be seen with the naked eye at night. It is magnetically
active, however, which can cause stars to “flare” and briefly become
thousands of times brighter. So it is possible that Scholz’s Star may
have been visible to the naked eye by our ancestors 70,000 years ago
for minutes or hours at a time during rare flaring events. The star is
part of a binary star system: a low-mass red dwarf star (with mass
about 8% that of the Sun) and a “brown dwarf” companion (with mass
about 6% that of the Sun). Brown dwarfs are considered “failed stars;”
their masses are too low to fuse hydrogen in their cores like a
“star,” but they are still much more massive than gas giant planets
like Jupiter.

Monday, 25 February 2013

gn.general topology - Sequential topological vector spaces

The space of all functions $\mathbb R\to\mathbb R$ with the topology of pointwise convergence is obviously sequential but not first countable.



Indeed, for every $x\in\mathbb R$, the set $U_x:=\{f:|f(x)|<1\}$ is open. If the space had a countable base of neighborhoods of 0, every set of the form $U_x$ would contain an element of the base. So some element $U$ of the base would be contained in infinitely many sets $U_{x_1}, U_{x_2}, \dots$ and hence in the intersection $V=\bigcap U_{x_i}$. But 0 is not in the interior of $V$ because there is a sequence of functions outside $V$ that pointwise converges to 0. Namely, the $n$th member of the sequence is the characteristic function of the set $\{x_i: i\ge n\}$.

Sunday, 24 February 2013

algebraic cycles - Geometry of the multilagrangian Grassmannian

This is a partial answer to your question: I believe there is exactly one class not coming from $G(3,6)$, and that this class will be represented by an algebraic cycle. However, I don't have a description of it.



My reasoning: Let's count points on your variety over a field with $q$ elements. I get
$$1+q+2q^2+3 q^3+ 4 q^4 + 3q^5 + 2 q^6 + q^7 + q^8.$$
(More details below.)
I have checked that your variety is smooth, so the Weil conjectures tell us that $H^4$ is four dimensional. Three of those dimensions come from $H^4(G(3,6))$, leaving one unexplained. Moreover, all the classes from $G(3,6)$ lie in $H^{2,2}$, so $H^{2,2}$ of your variety has dimension at least $3$. But then, by the symmetry of the Hodge diamond, the missing class is also in $H^{2,2}$. Assuming the Hodge conjecture, some multiple of it must be an algebraic class.



The point count: Write a point in your variety as the row span of a $3 \times 6$ matrix $\left( A \ B \right)$. Your multi-Lagrangian condition is that $\det A=\det B$. This means either $A$ and $B$ both have rank $3$, or neither does. But your matrix is required to have full rank, so the ranks of $A$ and $B$ can be either $(3,3)$, $(2,2)$, $(2,1)$ or $(1,2)$. Remember also that we are counting not just $3 \times 6$ matrices obeying these conditions, but orbits of such matrices under the left action of $GL_3$. I get



Rank $(3,3)$: $(q^3-1)(q^3-q)q^2$ points.
Rank $(2,2)$: $(q^2+q+1)(q^2+q+1)(q^3+q^2-q-1)$ points.
Rank $(2,1)$: $(q^2+q+1)(q^2+q+1)$ points.
Rank $(1,2)$: $(q^2+q+1)(q^2+q+1)$ points.
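

As a sanity check on this count (my own addition, not part of the original answer), here is a brute-force Python sketch for $q = 2$: it enumerates all full-rank $3\times 6$ matrices $(A\ B)$ over $\mathbb{F}_2$ with $\det A = \det B$ and divides by $|GL_3(\mathbb{F}_2)| = 168$, which is legitimate because the left $GL_3$-action on full-rank matrices is free. The polynomial above evaluates to $707$ at $q = 2$, so that is the expected output (it takes a few seconds in pure Python).

import itertools

def det3(M, p):
    # 3x3 determinant mod p by cofactor expansion
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0])) % p

def rank(M, p):
    # Gaussian elimination mod a prime p, on a copy of M
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)               # inverse of the pivot (Fermat)
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] % p:
                fac = M[i][c]
                M[i] = [(a - fac * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def count_points(p):
    # F_p-points of {det A = det B} in G(3,6): full-rank 3x6 matrices (A|B)
    # with det A = det B, modulo the free left GL_3-action.
    total = 0
    for entries in itertools.product(range(p), repeat=18):
        M = [list(entries[6 * i:6 * i + 6]) for i in range(3)]
        if rank(M, p) < 3:
            continue
        if det3([row[:3] for row in M], p) == det3([row[3:] for row in M], p):
            total += 1
    gl3 = (p**3 - 1) * (p**3 - p) * (p**3 - p**2)  # |GL_3(F_p)|
    assert total % gl3 == 0
    return total // gl3

print(count_points(2))   # expected: 707, the value of the polynomial above at q = 2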




The following is the record of an incorrect solution, left as a warning to others. Let $A$ be the subvariety of all three-planes which are of the form $u \wedge v \wedge w$ where $u$ and $v$ are in $\mathrm{Span}(e_1, e_2, e_3)$ and $w$ is in $\mathrm{Span}(e_4, e_5, e_6)$. Let $B$ be the similar subvariety where I switch the roles of $123$ and $456$.



It is easy to show that $[A]-[B]$, in $H^4(MG(3,6))$, is orthogonal to the classes obtained by restriction from $G(3,6)$. This would suggest that $[A]-[B]$ is our missing class. Unfortunately, it turns out that $[A]-[B]=0$. Proof: Let $S$, in $G(3,6)$, be the locus of those $3$-planes which meet $\mathrm{Span}(e_1,e_2,e_3)$ in dimension $2$. Then $S \cap MG(3,6)=A$, and the intersection is transverse. Let $T$ be those $3$-planes which similarly meet $\mathrm{Span}(e_4,e_5,e_6)$; so $B=T \cap MG(3,6)$. But $S$ and $T$ are homologous in $G(3,6)$, so $A$ and $B$ are homologous in $MG(3,6)$. Grrrr...

ct.category theory - Freyd cover of a category.

I don't know anything about it myself, but here are some other phrases you might try looking up.



The Freyd cover of a category is sometimes known as the Sierpinski cone, or "scone". It's also a special case of Artin gluing. Given a category $\mathcal{T}$ and a functor $F: \mathcal{T} \to \mathbf{Set}$, the Artin gluing of $F$ is the comma category $\mathbf{Set}\downarrow F$ whose objects are triples $(X, \xi, T)$ where:



  • $X$ is a set

  • $T$ is an object of $\mathcal{T}$

  • $\xi$ is a function $X \to F(T)$.

So the Freyd cover is the special case $F = \mathcal{T}(1, -)$.



You can find more on Artin gluing in this important (and nice) paper:




Aurelio Carboni, Peter Johnstone, Connected limits, familial representability and Artin glueing, Mathematical Structures in Computer Science 5 (1995), 441--459




plus




Aurelio Carboni, Peter Johnstone, Corrigenda to 'Connected limits...', Mathematical Structures in Computer Science 14 (2004), 185--187.




(Incidentally, my Oxford English Dictionary tells me that the correct spelling is 'gluing', but some people, such as these authors, use 'glueing'. I'm sure Peter Johnstone has a reason.)

nt.number theory - Kummer generator for the Ribet extension

Let me first add that Herbrand wasn't the first to publish his result; it was obtained (but with a less clear exposition) by Pollaczek (Über die irregulären Kreiskörper der $\ell$-ten und $\ell^2$-ten Einheitswurzeln, Math. Z. 21 (1924), 1--38).



Next the claim that the class field is generated by a unit is true if $p$ does not divide the class number of the real subfield, that is, if Vandiver's conjecture holds for the prime $p$.



Proof. (Takagi)
Let $K = {\mathbb Q}(\zeta_p)$, and assume that the class number of
its maximal real subfield $K^+$ is not divisible by $p$. Then any
unramified cyclic extension $L/K$ of degree $p$ can be written in
the form $L = K(\sqrt[p]{u})$ for some unit $u$ in $O_K^\times$.



In fact, we have $L = K(\sqrt[p]{\alpha})$ for some element
$\alpha \in O_K$. By a result of Madden and Velez, $L/K^+$ is normal
(this can easily be seen directly). If it were abelian, the subextension
$F/K^+$ of degree $p$ inside $L/K^+$ would be an unramified cyclic
extension of $K^+$, which contradicts our assumption that its class
number $h^+$ is not divisible by $p$.



Thus $L/K^+$ is dihedral. Kummer theory demands that
$\alpha/\alpha' = \beta^p$ for some $\beta \in K^+$, where
$\alpha'$ denotes the complex conjugate of $\alpha$.



Since $L/K$ is unramified, we must have $(\alpha) = {\mathfrak A}^p$.
Thus $(\alpha \alpha') = {\mathfrak a}^p$, and since $p$ does not
divide $h^+$, we must have ${\mathfrak a} = (\gamma)$, hence
$\alpha \alpha' = u\gamma^p$ for some real unit $u$.



Putting everything together we get $\alpha^2 = u(\beta\gamma)^p$,
which implies $L = K(\sqrt[p]{u})$.



If $p$ divides the plus class number $h^+$, I cannot exclude the possibility that the Kummer generator is an element that is a $p$-th ideal power, and I cannot see how this should follow from Kummer theory, with or without Herbrand-Ribet.



If $p$ satisfies the Vandiver conjecture, the unit in question can be given explicitly, and was given explicitly already by Kummer for $p = 37$ and by Herbrand for general irregular primes satisfying Vandiver: let $g$ denote a primitive root modulo $p$, and let $\sigma_a: \zeta \to \zeta^a$. Then
$$ u = \eta_\nu = \prod_{a=1}^{p-1} \bigg(\zeta^{\frac{1-g}{2}}
\frac{1-\zeta^g}{1-\zeta}\bigg)^{a^\nu \sigma_a^{-1}}, $$
where $\nu$ is determined by $p \mid B_{p-\nu}$.



Here is a survey on class field towers based on my (unpublished) thesis on the explicit construction of Hilbert class fields that I have not really updated for quite some time. Section 2.6 contains the answer to your question for primes satisfying Vandiver.

Friday, 22 February 2013

gravity - If an asteroid twice the size of Earth passed super close would half of the Earth be pulled towards it?

The Earth wouldn't simply split in half; it would be pulled towards the asteroid as a whole. To simplify matters, let's assume that the asteroid's mass is equal to twice that of Earth (size alone is not enough, you must also consider density). You can then use Newton's law of gravitational attraction for the calculations:



$$F=G \frac{m_1m_2}{r^2}$$



$G$ is the gravitational constant, $r$ is half the distance between the Moon and Earth, $m_1$ is the mass of Earth, and $m_2$ is twice that. The gravitational force works out to about $1.3 \times 10^{23} \text{ N}$. In comparison, the gravitational attraction between Earth and the Moon is about $2.0 \times 10^{20} \text{ N}$, which means that the Earth-asteroid attraction would be roughly 650 times larger than the Earth-Moon attraction. The Earth will speed rapidly toward the asteroid and crash into it. During that flight Earth can indeed be expected to experience gravity fluctuations and probably lose major parts of its atmosphere. However it is unlikely that Earth, being a telluric planet (made of rock and metal), would undergo major deformation. After all, Jupiter's moons are round, even though they are in much more extreme conditions.
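

For the record, a quick arithmetic check of those figures (my own addition), using standard values for $G$, the masses of the Earth and Moon, and the mean Earth-Moon distance:

G   = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_e = 5.972e24         # mass of the Earth, kg
M_m = 7.342e22         # mass of the Moon, kg
d   = 3.844e8          # mean Earth-Moon distance, m

M_a = 2 * M_e          # hypothetical body with twice the Earth's mass
r   = d / 2            # placed halfway between the Earth and the Moon

F_earth_body = G * M_e * M_a / r**2
F_earth_moon = G * M_e * M_m / d**2

print(f"Earth-body force: {F_earth_body:.1e} N")     # ~1.3e23 N
print(f"Earth-Moon force: {F_earth_moon:.1e} N")     # ~2.0e20 N
print(f"ratio: {F_earth_body / F_earth_moon:.0f}")   # ~650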



You would however need to consider the velocity of the asteroid for any more precise estimates. The speed and the trajectory both count. An object that passes quickly would tear the Earth out of its orbit but would continue moving, whilst if you just placed that asteroid between the Earth and the Moon, the Earth (and the Moon) would simply fall towards it on a perpendicular trajectory.

lagrangian points - Aren't the mirrors of the James Webb Space Telescope too unprotected?

No, not too unprotected, as you put it. There are several misconceptions about the JWST that I find common and that need to be addressed:



JWST primary mirror elements are not made of glass and do not shatter on impact



Its primary hexagonal mirror segments are made of beryllium powder pressed into blocks, which were later cut in half to create two mirror blanks and had most of their back side machined away, leaving reinforcing rib structures and thin, light front mirror surfaces. Those front surfaces were then precisely figured to their specification at JWST's operational temperature (-400°F, -240°C) and polished to a mirror finish. Beryllium was specifically chosen for its lightness, strength and ability to maintain its shape at these cryogenic temperatures. Read more about this process at JWST - The Primary Mirror (includes images and videos).



Unlikely impacts with micrometeorites (see below) would result in small punctures ("bullet holes"), not fractures and shattered mirror surfaces, and those punctures are something that can be identified during mirror calibration and corrected for with image post-processing. No large mirror is free from slight defects throughout its lifetime, and this is something astronomers are well used to dealing with. The important thing is that the primary mirror segments still maintain their shape and, together with the secondary mirror, maintain focus, and that most of their combined surface doesn't distort the end image too much despite any such imperfections. Smaller problem areas can be corrected for in software, or otherwise adjusted for in hardware.



JWST's target halo orbit around Sun-Earth Lagrange point 2 is not littered with debris



Sun-Earth Lagrange point 2, or SEL2 for short, the point around which JWST will be stationkeeping in a halo orbital regime (its own orbital plane around SEL2 is perpendicular to the Sun-Earth plane), is one of the least gravitationally attractive points in the near-Earth space, roughly 1.5 million km more distant to the Sun than the Earth, and with the same orbital period as Earth itself. Think of this point either as the top of a hill, or perhaps better yet (since it's not really a "hill" per se, that would require negative gravitational attraction, or anti-gravitational point), as a nearly flat surface with increasingly steep slope towards its parent massive bodies, in our case the Sun and the Earth. In another analogy, where is the safest place to stand if you're at risk of being swept by an avalanche? At the flat top of otherwise a steep hill, of course.



This is a lot different from orbiting the Earth in LEO (Low Earth Orbit), where the International Space Station (ISS) is.



JWST is not defenseless against collisions with debris and micrometeorites



Still, even with all that said, collisions with objects in transit through the SEL2 region are not excluded, so there are a few defense mechanisms that JWST will have available to it and its mirrors:



  • The most obvious one is that it will deploy a large, multi-layered sunshield that will be kept pointing towards the Sun, as the name suggests. The aft of the craft will not merely be these deployable sail-like sunshields though, but will also host more robust structures, such as the observatory's communications subsystem, navigation subsystem, engines, and so on. All of them should protect the mirrors from roughly half of the impact vectors while the mirrors point in the other direction.


  • The halo orbit is, as mentioned, inclined roughly 90° to the Sun-Earth plane, which means that JWST would only pass through the plane in which most transient heliocentric dust orbits at that heliocentric distance twice during each of its large 800,000 km (500,000 mi) radius orbits, and only for a relatively short duration. Any heliocentric interplanetary dust transiting SEL2 would also have a small relative velocity with respect to the observatory. SEL2's orbital velocity is roughly 30.08 km/s ($v_o \approx \frac{2 \pi a}{T}$), while the circular orbital speed at Earth + 1.5 million km heliocentric distance is roughly 29.64 km/s ($v_o \approx \sqrt{\frac{GM}{r}}$). So we're in theory talking of impact velocities in the range of 440-445 m/s, or about 1.3 times the speed of sound at sea level on Earth (see the short calculation after this list). In terms of the possible relative velocities of man-made debris in LEO, this is roughly 99.9% less kinetic energy ($E_\text{k} = \tfrac{1}{2} mv^2$) per unit of debris mass than in LEO, where retrograde debris can hit prograde satellites at up to 15.4 km/s (e.g. the ISS orbits at roughly 7.7 km/s, or 4.8 mi/s).


  • Larger Near Earth Objects (NEOs) will of course continue to be tracked by Earth-based and in-orbit observatories, and needless to say JWST can on its own detect even the darkest-surfaced NEOs that are hard to detect in the visual spectrum (albeit that won't be its mission; most new NEO detections are still by chance), since it'll be doing observations in the infrared range and can detect these objects' radiated and reflected heat signature. In case a collision with a larger object were predicted, and this object's trajectory well established, JWST could perform collision avoidance maneuvers with its own thrusters, if need be.
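

As a small aside (my addition, not part of the original answer), the velocity estimate in the second bullet above can be reproduced in a few lines of Python; the constants are standard textbook values and the 15.4 km/s LEO figure is taken from the bullet itself.

import math

AU     = 1.496e11              # astronomical unit, m
GM_sun = 1.327e20              # GM of the Sun, m^3/s^2
year   = 365.25 * 24 * 3600    # orbital period of the Earth (and of SEL2), s

a = AU + 1.5e9                 # heliocentric distance of SEL2, m

v_sel2   = 2 * math.pi * a / year   # SEL2 co-rotates with Earth: ~30.1 km/s
v_kepler = math.sqrt(GM_sun / a)    # circular Keplerian speed at that distance: ~29.6 km/s
dv       = v_sel2 - v_kepler        # ~440 m/s typical encounter speed with heliocentric dust

# Kinetic energy per unit mass scales with v^2; compare with a 15.4 km/s head-on LEO impact.
ratio = dv**2 / 15400.0**2
print(f"{v_sel2/1e3:.2f} km/s vs {v_kepler/1e3:.2f} km/s -> dv = {dv:.0f} m/s")
print(f"kinetic energy per unit mass, relative to a 15.4 km/s LEO impact: {ratio:.2%}")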


And there are other risk management options available to JWST, including preparing conjunction analysis for potential collisions with other satellites stationkeeping in SEL2 or intersecting it (won't be many, but it'll still be done, no question about it), planning its stationkeeping and attitude maneuvers to avoid hazards and protect its most sensitive parts against debris, interplanetary dust, micrometeorites, solar and cosmic events, and other detected threats, when, where, and as needed. But in a general sense, SEL2 is a relatively safe place to be compared to near-Earth orbits, as far as orbital debris and micrometeorite impacts go, as per your question.

tag removed - Coloring Points in the Plane

This is the well-known unit distance graph problem. Let $U=U(\mathbb R^2)$ be the unit distance graph of the plane; that is, the vertices are the points of the plane and the edges are the pairs of points at distance one from each other.



It is well-known that $$4 \leq \chi(U(\mathbb R^2))\leq 7.$$



The lower bound is found by drawing a finite unit distance subgraph of $U$ which has chromatic number 4, while the upper bound is found by coloring the plane with 7 colors after dividing it into hexagons of a fixed, small diameter.






Recently, I came across the study of the chromatic number of a supergraph of $U$ called the odd-distance graph. Moreover, I think that this problem is equivalent to finding a measure of some sort, but that is all I remember.



We once tried to use a Hamel basis of the plane to come up with a proof which at least improves the above bounds, but it does not seem to work (well, we were able to prove that $\mathbb Z$ is an integral domain... funny...).

fit sine wave to data

What I have done before is expand the equation to $y(t) = S \sin(\Omega t) + C \cos(\Omega t)$ and then calculate the following integrals:



$$ \begin{align}
\int_0^{\frac{2\pi}{\Omega}} \sin(\Omega t)\, y(t)\,{\rm d}t & = \frac{\pi S}{\Omega} \\
\int_0^{\frac{2\pi}{\Omega}} \cos(\Omega t)\, y(t)\,{\rm d}t & = \frac{\pi C}{\Omega}
\end{align} $$



So if you need the average over $n$ cycles you have



$$ \begin{align}
S & = \frac{\Omega}{n \pi} \int_0^{\frac{2\pi n}{\Omega}} \sin(\Omega t)\, y(t)\,{\rm d}t
\\
C & = \frac{\Omega}{n \pi} \int_0^{\frac{2\pi n}{\Omega}} \cos(\Omega t)\, y(t)\,{\rm d}t
\end{align}$$



I have deployed a simple trapezoidal numerical integrator with very good results. Of course this depends on how many data points you have for each cycle: the more points, the more of the noise you capture, but the less likely you are to get results skewed by aliasing.



In the end your $A=\sqrt{S^2+C^2}$ and $\Phi = \arctan\left( \frac{C}{S} \right)$.
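

For comparison with the spreadsheet code below, here is a minimal NumPy sketch of the same procedure (my own addition); the synthetic signal, noise level and sample count are made up for the example, and the trapezoidal rule is written out explicitly to mirror the text.

import numpy as np

def trapezoid(f, t):
    # simple trapezoidal rule, as described in the text
    return np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0

# Synthetic data: y(t) = A*sin(w*t + phi) + noise, sampled over n full cycles
w, A_true, phi_true, n = 2 * np.pi * 50.0, 1.7, 0.6, 10
t = np.linspace(0.0, n * 2 * np.pi / w, 2000)
rng = np.random.default_rng(0)
y = A_true * np.sin(w * t + phi_true) + 0.05 * rng.standard_normal(t.size)

# S and C from the integrals above, averaged over n cycles
S = w / (n * np.pi) * trapezoid(np.sin(w * t) * y, t)
C = w / (n * np.pi) * trapezoid(np.cos(w * t) * y, t)

A_est   = np.hypot(S, C)          # A = sqrt(S^2 + C^2)
phi_est = np.arctan2(C, S)        # arctan(C/S), quadrant-safe
print(A_est, phi_est)             # should come out close to 1.7 and 0.6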




For the fun of it, here is some VBA code I used in Excel to do exactly this:



Const PI As Double = 3.14159265358979

'---------------------------------------------------------------------------------------
' Procedure : IntegrateSin
' Author : ja72
' Date : 7/22/2014
' Purpose : Perform a integral of SIN(x)*f(x) between x_1 and x_2
' To be used as a formula like: "=1/180*IntegrateSin(X$55:X$83,0,360)"
'---------------------------------------------------------------------------------------
Public Function IntegrateSin(ByRef r_f As Range, x_1 As Double, x_2 As Double) As Double
    Dim i As Integer, N As Integer, sum As Double, h As Double, f() As Variant, x As Double
    N = r_f.Rows.Count - 1
    h = (x_2 - x_1) / N
    ' Convert range of cells into array f
    f = r_f.Value: sum = 0#
    ' Transform f values to sin(x)*f
    For i = 1 To N + 1
        x = x_1 + h * (i - 1)
        f(i, 1) = Sin(PI / 180# * x) * f(i, 1)
    Next i
    ' Trapezoidal integration
    sum = sum + h * f(1, 1) / 2
    For i = 2 To N
        sum = sum + h * f(i, 1)
    Next i
    sum = sum + h * f(N + 1, 1) / 2
    IntegrateSin = sum
End Function

'---------------------------------------------------------------------------------------
' Procedure : IntegrateCos
' Author : ja72
' Date : 7/22/2014
' Purpose : Perform a integral of COS(x)*f(x) between x_1 and x_2
' To be used as a formula like: "=1/180*IntegrateCos(X$55:X$83,0,360)"
'---------------------------------------------------------------------------------------
Public Function IntegrateCos(ByRef r_f As Range, x_1 As Double, x_2 As Double) As Double
    Dim i As Integer, N As Integer, sum As Double, h As Double, f() As Variant, x As Double
    N = r_f.Rows.Count - 1
    h = (x_2 - x_1) / N
    ' Convert range of cells into array f
    f = r_f.Value: sum = 0#
    ' Transform f values to cos(x)*f
    For i = 1 To N + 1
        x = x_1 + h * (i - 1)
        f(i, 1) = Cos(PI / 180# * x) * f(i, 1)
    Next i
    ' Trapezoidal integration
    sum = sum + h * f(1, 1) / 2
    For i = 2 To N
        sum = sum + h * f(i, 1)
    Next i
    sum = sum + h * f(N + 1, 1) / 2
    IntegrateCos = sum
End Function

Thursday, 21 February 2013

co.combinatorics - characterization of chordal graphs

Wow. Scary how I failed to do this until I asked the question on MO, and then immediately got the idea.



Here's a proof, assuming the weaker version of the theorem is true:



Call such an ordering "good." Consider a minimal counterexample $M$ of size $n$ to our stronger claim: it must be chordal, has a good ordering, but does not have a good ordering starting at every vertex. However, every chordal graph of size less than $n$ has a good ordering starting at every vertex.



Since $M$ has some good ordering $T$, it has some vertex $v$ (the last one in $T$) such that all its neighbors form a clique. Remove this vertex to get $M'$. $M'$ is chordal (since chordality respects vertex deletion), so we can make a good ordering starting with any element in $M'$. Attach $v$ to the end. This ordering is obviously still good. Thus, we can make a good ordering starting with any vertex that is not $v$. So it suffices to make a good ordering starting with $v$.



To do this, consider the last vertex $w$ in $T \setminus v$ with the property that $w$ is not connected to the vertex $w'$ immediately following it in $T$. If $w$ doesn't exist, this just means $M$ is a clique, so any ordering is good and we have a good ordering starting with $v$. If $w$ exists, then notice that $w$ cannot be connected with any of the vertices after it in $T$; if it is, then pick the earliest such one $u \neq w'$, which is both connected to the vertex directly preceding it (not connected to $w$ by choice of $u$) and $w$, a contradiction on the ordering being good. Thus, $w$ is only connected with vertices preceding it. This means all of $w$'s neighbors form a clique. Hence, we can delete $w$, make a good ordering starting with $v$ of the rest, and append $w$ to the end. A winner is us.
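

If it helps, here is a small brute-force Python sketch (my addition, not part of the original argument) of the stronger claim, taking "good" to mean that each vertex's neighbours among the earlier vertices form a clique (i.e., the reverse of a perfect elimination ordering); the example graph is made up.

from itertools import permutations, combinations

def is_good(order, adj):
    # "good": every vertex's neighbours among the earlier vertices form a clique
    seen = set()
    for v in order:
        earlier = adj[v] & seen
        if any(b not in adj[a] for a, b in combinations(earlier, 2)):
            return False
        seen.add(v)
    return True

def good_ordering_from(start, adj):
    # brute force: try every ordering that begins with `start`
    rest = [v for v in adj if v != start]
    for perm in permutations(rest):
        if is_good((start,) + perm, adj):
            return (start,) + perm
    return None

# A small chordal graph: two triangles glued along an edge, plus a pendant vertex
edges = [(0, 1), (1, 2), (0, 2), (1, 3), (2, 3), (3, 4)]
adj = {v: set() for v in range(5)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

for v in sorted(adj):
    print(v, good_ordering_from(v, adj))   # a good ordering exists starting at every vertex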



-Yan

Wednesday, 20 February 2013

geometry - Uniqueness of a polygon

There are many structures that mathematicians study because of their intrinsic interest, simplicity and being quick starting. Examples that come to mind are plane polygons and graphs.



Plane polygons have a variety of properties that one can look at: area, perimeter, minimum number of vertex guards, number of reflex angles, number of right angles, etc. An individual graph can have a variety of "invariants" that can be studied: coloring number, clique number, being eulerian, or hamiltonian, etc. For two graphs there is no list of invariants that guarantees that the two graphs are isomorphic. The complexity of checking when two graphs are isomorphic is still a dynamic area to investigate. I do not know any list of properties that guarantee that two polygons are congruent going beyond specifying the lengths of the sides and the measure of the angles. Finding new combinatorial/geometric properties of polygons seems to continue to be very worthwhile.

ag.algebraic geometry - Rankin-Selberg convolutions of motivic L-series

If $L(s,M)$ is an irreducible degree 4 motivic L-function, and $L(s,\mathrm{sym}^2(M))$ has a pole, then either $M=f_1 \otimes f_2$ for a pair of distinct classical modular forms $f_1, f_2$, or $M=\mathrm{Asai}(f)$ for $f$ cuspidal on $GL_2(K)$, with $K/\mathbb{Q}$ quadratic. You can rule out the Asai case if $L(s,\mathrm{sym}^2(M)\otimes \chi)$ is entire for any nontrivial quadratic character (in particular, entire for $\chi$ the character of $K$). If by "practical" you mean "local", then I think the answer is no.

Monday, 18 February 2013

analytic number theory - Continuation up to zero of a Dirichlet series with bounded coefficients

The following paper seems to (among other things) give a detailed construction roughly along the lines of my comment above:




Bhowmik, Gautami, Schlage-Puchta, Jan-Christoph
Natural boundaries of Dirichlet series. (English summary)
Funct. Approx. Comment. Math. 37 (2007), part 1, 17--29.



In this paper, the authors prove some conditions for the existence of natural boundaries of Dirichlet series, and give applications to the determination of asymptotic results.



Let $n_\nu$ be rational integers, assume the series $\sum\frac{n_\nu}{2^{\epsilon\nu}}$ converges absolutely for every $\epsilon>0$, and let $\mathcal{P}$ be the set of prime numbers $p$ such that $n_p>0$. Assume that the Riemann $\zeta$-function has infinitely many zeros on the line $\frac{1}{2}+it$, and suppose that $f$ is a function of the form $$ f(s)=\prod_{\nu\geq1}\zeta\left(\mu\left(s-\frac{1}{2}\right)+\frac{1}{2}\right)^{n_\nu}. $$ Then $f$ is holomorphic in the half-plane $\Re s>1$ and has a meromorphic continuation to the half-plane $\Re s>\frac{1}{2}$. If, for all $\epsilon>0$, $\mathcal{P}((1+\epsilon)x)-\mathcal{P}(x)\gg x^{\frac{\sqrt{5}-1}{2}}\log^2x$, then the line $\Im s =\frac{1}{2}$ is the natural boundary of $f$; more precisely, every point of this line is an accumulation point of zeros of $f$.



As an example on the existence of a natural boundary, $\Omega$-results for Dirichlet series associated to counting functions are obtained. It is proved that if $D(s)=\sum\frac{a(n)}{n^s}$ has a natural boundary at $\Re s=\sigma$, then there does not exist an explicit formula of the form $A(x) := \sum_{n\leq x}a_n=\sum_{\rho}c_\rho x^\rho+O(x^\sigma)$, where $\rho$ is a zero of the Riemann zeta-function, and hence it is possible to obtain a term $\Omega(x^{\sigma-\epsilon})$ in the asymptotic expression for $A(x)$.



Reviewed by Roma Kačinskaitė




In the above review, where $\Im(s) = \frac{1}{2}$ appears, I'm sure $\Re(s) = \frac{1}{2}$ is intended. Also the "assume" is a bit strange, since it is an old, famous theorem of G. H. Hardy that $\zeta(s)$ has infinitely many zeros on the critical line.

untagged - How do astronomers choose their research topic?

Let me answer what I think you're asking (parts of your question aren't that clear to me).



Background: I am currently an astrophysics graduate student.



First, from you question it seems to me that you are proceeding under an assumption that you are accepted into a research group when you are accepted into a PhD program. Everywhere I looked as I applied for graduate school, though, this certainly was not the case. Not only was I not accepted into a specific research group at the schools I was accepted to but I was also encouraged to try out different research projects before settling on a specific topic to pursue for my dissertation.



That being said, I do think there are some programs which may choose to accept or not accept you based on what you are thinking you would like to do for research (if you know). This may be because of resources (for instances, if all the professors who are experts in what you want to do research in already have as many graduate students as they want/have money for, that may impact your acceptance and will impact your chances of doing the research you want should you be accepted).




Can you pick and choose?




A specific answer to your question is going to vary significantly from school to school. For example, some programs are very accommodating in letting students "pick and choose" what they want to do research in, partly because there are currently resources available to allow that. As explained above, because of resources or other considerations, some programs may not have the luxury of opening all doors for their graduate students.




And could you choose whether to do the physical theory or the observations or coding simulations?




Again, this is very school-specific. Some schools have immense computing resources and getting computer time to run simulations/heavy calculations is easy. Other schools may have no computing resources available (other than your desktop workstation) and it will be more difficult to get the computing time you'll need for simulations since you will have to look outside your university for resources. Some schools have access to a lot of time on some very big telescopes; some have access to telescopes that are specialized for specific topics in astrophysics; and other schools may have no access to telescopes at all, meaning you will have to find telescopes with allocation time you can apply for without having to belong to an institution affiliated with that telescope. It varies from school to school.



So, your ability to pursue specific research topics in astrophysics can be enhanced or hampered by the computing and telescope resources available at your department. Take that into consideration as you apply to schools.



SUMMARY



The answers to your questions vary greatly from department to department and are largely dependent on the resources available to a specific department. I have found generally that there is great flexibility in astrophysics programs to choose a PhD dissertation topic but that certain research topics have more support and resources at some institutions than at others. Also, at many institutions professors are limited in how many students they can advise by the amount of funding they have for PhD students, which can significantly affect the research options available to you. There are, however, some institutions that have the resources to cover for an under-funded professor if a student does want to work for them, but again it just boils down to it varies from school to school.

co.combinatorics - Maximum size of antichain if no m subsets have a common intersection of size n

I have a lemma about antichains that I think should be already known, but I can't find it anywhere. I am looking for a reference to this result that I can use in my paper, so that I don't have to include the proof.



Let $\mathcal{F}$ be an antichain on a finite universe $U$, such that there are no $m$ distinct subsets $S_1, S_2, \ldots, S_m \in \mathcal{F}$ with $|S_1 \cap S_2 \cap \ldots \cap S_m| \geq n$. Then $|\mathcal{F}| \leq 2m|U|^n$.



(The bound can actually be made a little bit sharper, but this is sufficient for my purposes.)



Here is a proof of why this claim is true. We count the number of sets in $mathcal{F}$ in two steps.



1) Since $\mathcal{F}$ is an antichain, all sets in it are distinct. Hence there are at most $|U|^k$ sets in $\mathcal{F}$ of size $k$. Hence the number of sets in $\mathcal{F}$ of size at most $n-1$ is at most $\sum_{k=0}^{n-1} |U|^k = (|U|^n - 1)/(|U| - 1) \leq |U|^n$.



2) Now we count the number of sets with at least $n$ elements. For each set in $\mathcal{F}$ with at least $n$ elements, select $n$ of its elements arbitrarily and group the sets according to the chosen $n$ elements. If some group contained at least $m$ subsets of $\mathcal{F}$, then the $n$ elements that define the group would be in the common intersection of these $m$ subsets, and this would violate our assumption. Hence every group has less than $m$ subsets in it. Since there are at most $|U|^n$ different groups, and each group has less than $m$ subsets in it, there are less than $m|U|^n$ subsets in $\mathcal{F}$ of size at least $n$.



Adding the numbers from (1) and (2) yields that the total number of subsets in $\mathcal{F}$ is bounded by $2m|U|^n$.



Can anyone tell me if this is already known and under which name? A reference would be greatly appreciated. Thanks!

Saturday, 16 February 2013

pr.probability - What is convolution intuitively?

Here is my take on this. $\newcommand{\bZ}{\mathbb{Z}}$ $\newcommand{\bR}{\mathbb{R}}$ Discretize the real axis and think of it as the collection of points $\Lambda_\hbar:=\hbar \bZ$, where $\hbar>0$ is a small number. Then a function $f:\Lambda_\hbar\to \bR$ is determined by its generating function, i.e., the formal power series $\newcommand{\ii}{\boldsymbol{i}}$



$$G^\hbar_f(t)=\sum_{n\in\bZ}f(n\hbar)t^n. $$



Then



$$G^\hbar_{f_0\ast f_1}(t)= G^\hbar_{f_0}(t)\cdot G^\hbar_{f_1}(t).\tag{1} $$



Observe that if we set $t=e^{-\ii\xi \hbar}$, then



$$G^\hbar_f(t)=\sum_{x\in\Lambda_\hbar} f(x) e^{-\ii \xi x}. $$



Moreover



$$ \hbar G^\hbar_f(e^{-\ii\xi \hbar})=\sum_{n\in \bZ} \hbar f(n\hbar) e^{-\ii\xi(n\hbar)}, \tag{2}$$



and the expression in the right hand sum is a "Riemann sum" approximating



$$\int_{\bR} f(x)\, e^{-\ii\xi x}\, dx. $$



Above we recognize the Fourier transform of $f$. If we let $\hbar\to 0$ in (2) and use (1), we obtain the well-known fact that the Fourier transform maps convolution to the usual pointwise product of functions. (The fact that this rather careless passing to the limit can be made rigorous is what the Poisson formula is all about.)
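

A quick numerical illustration of (1) for finitely supported $f_0, f_1$ (my own addition): multiplying the generating polynomials is exactly the discrete convolution of the coefficient sequences, and on the Fourier side the zero-padded DFT turns that convolution into a pointwise product. The sample coefficients are arbitrary.

import numpy as np

f0 = np.array([1.0, 2.0, 0.0, -1.0])   # coefficients of G_{f0}(t)
f1 = np.array([3.0, 0.5, 4.0])         # coefficients of G_{f1}(t)

# (1): product of generating polynomials = convolution of coefficient sequences
conv = np.convolve(f0, f1)
poly = np.polymul(f0, f1)              # same numbers, computed as a polynomial product
print(np.allclose(conv, poly))         # True

# Fourier side: DFT of the full convolution = product of the zero-padded DFTs
N = len(conv)
print(np.allclose(np.fft.fft(conv), np.fft.fft(f0, N) * np.fft.fft(f1, N)))  # True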



The above argument shows that we can regard $\hbar G_f^\hbar(1)$ as an approximation for $\int_{\bR} f(x)\, dx$.



Denote by $\delta(x)$ the delta function concentrated at $0$. The delta function concentrated at $x_0$ is then $\delta(x-x_0)$. What could be the generating function of $\delta(x)$, $G_\delta^\hbar$? First, we know that $\delta(x)=0$ for all $x\neq 0$, so that



$$G_\delta^\hbar(t) =ct^0=c. $$



The constant $c$ can be determined from the equality



$$ 1= \int_{\bR} \delta(x)\, dx=\hbar G_\delta^\hbar(1)=\hbar c.$$



Hence $\hbar G_\delta^\hbar(1)=1$. Similarly



$$ G^\hbar_{\delta(\cdot-n\hbar)} =\frac{1}{\hbar}\, t^n. $$



Putting together all of the above, we obtain an equivalent description of the generating function of a function $f:\Lambda_\hbar\to\bR$. More precisely,



$$ G_f(t)=\hbar\sum_{\lambda\in\Lambda_\hbar}f(\lambda)\, G_{\delta(\cdot-\lambda)}(t). $$



The last equality suggests an interpretation for the generating function as an algebraic encoding of the fact that $f:\Lambda_\hbar\to\bR$ is a superposition of $\delta$ functions concentrated along the points of the lattice $\Lambda_\hbar$.

Binary star not exceeding Roche Lobe?

Of course this is possible. Whether it will fill its Roche lobe depends on the masses and radii of the stars and the distance between them. If the distance is big enough then the second (less massive) star will not fill its Roche lobe and will continue evolving as a single star would. The (large) majority of binaries fall in this category; only close binaries will evolve into cataclysmic variables.



For instance the Alpha Centauri/Proxima Centauri system will not evolve into a system where mass transfer takes place via the Roche lobes as the distance between Alpha Cen A/B and Proxima Centauri is much too big.



Even when the stars are very close, a supernova might not occur if the masses of the stars are low. The supernova occurs when the mass of the existing white dwarf increases (because of the accretion) to a mass surpassing the Chandrasekhar limit (1.4 solar masses). If the transferred mass is too low then no supernova will happen.

Friday, 15 February 2013

What is a noncommutative fiber bundle?

One can consider torsors (principal bundles) within any sufficiently nice category, with respect to descent for a given Grothendieck topology and a given fibered category over the ground site. See for example



  • Tomasz Brzeziński, On synthetic interpretation of quantum principal bundles, AJSE D - Mathematics 35(1D): 13-27, 2010 arxiv:0912.0213.

where in the main motivational part the codomain fibration and the regular epimorphism topology are implicitly used. For the fibered category of modules (viewed as quasicoherent sheaves) over noncommutative rings, the faithfully flat Hopf-Galois extensions are the answer, provided we accept Hopf coactions as dual representations of group actions. The tensor product is not a monoidal product in the category of associative algebras, so this is a bit of a problem. Next, one needs to consider nonaffine objects, if one is in an algebraic framework, which allows for a more general concept of noncommutative principal bundles over covers by noncommutative localizations, which are analogues of covers in a Grothendieck topology. This kind of noncommutative principal bundle was sketched in my articles



  • Z. Škoda, Localizations for construction of quantum coset spaces, math.QA/0301090, Banach Center Publ. 61, pp. 265--298, Warszawa 2003;


  • Z. Škoda, Coherent states for Hopf algebras, Letters in Mathematical Physics 81, N.1, pp. 1-17, July 2007. (earlier arXiv version: math.QA/0303357),


based on general picture of actions in noncommutative algebraic geometry as explained in the newer survey



  • Z. Škoda, Some equivariant constructions in noncommutative algebraic geometry, Georgian Mathematical Journal 16 (2009), No. 1, 183--202, arXiv:0811.4770.

See also the nlab:noncommutative principal bundle. I will hopefully release 2 more articles in this direction within next month or two.



Yes, for the spaces of sections of associated bundles with structure Hopf algebra one uses the cotensor product construction in the affine case; this is well known in the literature; the recipe can also be globalized by gluing along localizations. However this does not in general give the total spaces of associated bundles (in the category of noncommutative spaces) in a satisfactory way, but only the spaces of sections.

lo.logic - Isomorphism types or structure theory for nonstandard analysis

Under a not unreasonable assumption about cardinal arithmetic, namely $2^{<c}=c$ (which follows from the continuum hypothesis, or Martin's Axiom, or the cardinal characteristic equation t=c), the number of non-isomorphic possibilities for *R of cardinality c is exactly 2^c. To see this, the first step is to deduce, from $2^{<c} = c$, that there is a family X of 2^c functions from R to R such that any two of them agree at strictly fewer than c places. (Proof: Consider the complete binary tree of height (the initial ordinal of cardinality) c. By assumption, it has only c nodes, so label the nodes by real numbers in a one-to-one fashion. Then each of the 2^c paths through the tree determines a function f:c to R, and any two of these functions agree only at those ordinals $alphain c$ below the level where the associated paths branch apart. Compose with your favorite bijection Rto c and you get the claimed maps g:R to R.) Now consider any non-standard model *R of R (where, as in the question, R is viewed as a structure with all possible functions and predicates) of cardinality c, and consider any element z in *R. If we apply to z all the functions *g for g in X, we get what appear to be 2^c elements of *R. But *R was assumed to have cardinality only c, so lots of these elements must coincide. That is, we have some (in fact many) g and g' in X such that *g(z) = *g'(z). We arranged X so that, in R, g and g' agree only on a set A of size $<c$, and now we have (by elementarity) that z is in *A. It follows that the 1-type realized by z, i.e., the set of all subsets B of R such that z is in *B, is completely determined by the following information: A and the collection of subsets B of A such that z is in *B. The number of possibilities for A is $c^{<c} = 2^{<c} = c$ by our cardinal arithmetic assumption, and for each A there are only c possibilities for B and therefore only 2^c possibilities for the type of z. The same goes for the n-types realized by n-tuples of elements of *R; there are only 2^c n-types for any finite n. (Proof for n-types: Either repeat the preceding argument for n-tuples, or use that the structures have pairing functions so you can reduce n-types to 1-types.) Finally, since any *R of size c is isomorphic to one with universe c, its isomorphism type is determined if we know, for each finite tuple (of which there are c), the type that it realizes (of which there are 2^c), so the number of non-isomorphic models is at most (2^c)^c = 2^c.



To get from "at most" to "exactly" it suffices to observe that (1) every non-principal ultrafilter U on the set N of natural numbers produces a *R of the desired sort as an ultrapower, (2) that two such ultrapowers are isomorphic if and only if the ultrafilters producing them are isomorphic (via a permutation of N), and (3) that there are 2^c non-isomorphic ultrafilters on N.



If we drop the assumption that $2^{<c}=c$, then I don't have a complete answer, but here's some partial information. Let kappa be the first cardinal with 2^kappa > c; so we're now considering the situation where kappa < c. For each element z of any *R as above, let m(z) be the smallest cardinal of any set A of reals with z in *A. The argument above generalizes to show that m(z) is never kappa and that if m(z) is always < kappa then we get the same number 2^c of possibilities for *R as above. The difficulty is that m(z) might now be strictly larger than kappa. In this case, the 1-type realized by z would amount to an ultrafilter U on m(z) > kappa such that its image, under any map m(z) to kappa, concentrates on a set of size < kappa. Furthermore, U could not be regular (i.e., (omega,m(z))-regular in the sense defined by Keisler long ago). It is (I believe) known that either of these properties of U implies the existence of inner models with large cardinals (but I don't remember how large). If all this is right, then it would not be possible to prove the consistency, relative to only ZFC, of the existence of more than 2^c non-isomorphic *R's.



Finally, Joel asked about a structure theory for such *R's. Quite generally, without constraining the cardinality of *R to be only c, one can describe such models as direct limits of ultrapowers of R with respect to ultrafilters on R. The embeddings involved in such a direct system are the elementary embeddings given by Rudin-Keisler order relations between the ultrafilters. (For the large cardinal folks here: This is just like what happens in the "ultrapowers" with respect to extenders, except that here we don't have any well-foundedness.) And this last paragraph has nothing particularly to do with R; the analog holds for elementary extensions of any structure of the form (S, all predicates and functions on S) for any set S.

Thursday, 14 February 2013

homological algebra - Matrix factorization categories beyond the isolated singularity case

In his really nice thesis, Tobias Dyckerhoff proved the following theorems about matrix factorizations (of possibly infinite rank) over a regular local k-algebra R with a function w and residue field k, such that the Tyurina algebra $T = R/(w,dw)$ is finite dimensional. This last condition says that w has an isolated singularity. For further reference, let S denote the ring R/(w).
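
(To fix what "matrix factorization" means here - this is the standard definition, not anything specific to the thesis - a matrix factorization of $w$ is a pair of free $R$-modules $P^0, P^1$ together with maps whose composites in both orders are multiplication by $w$:

$$P^1 \xrightarrow{\ A\ } P^0 \xrightarrow{\ B\ } P^1, \qquad AB = w\cdot\mathrm{id}_{P^0},\quad BA = w\cdot\mathrm{id}_{P^1}.$$

For instance, over $R=k[[x,y]]$ with $w=xy^2$, the $1\times 1$ pairs $(x,\ y^2)$ and $(y,\ xy)$ are matrix factorizations; these are only the simplest examples and are not the stabilizations $k^{stab}$ etc. that appear in the theorems and examples below.)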



1) The homotopy category of matrix factorizations has a compact generator as a triangulated category, which he denotes as $k^{stab}$.



2) As a consequence of 1), he derives that there is a natural complex which represents the identity functor, thought of as an element of $MF(R\otimes R, 1\otimes w - w\otimes 1)$, which he denotes as the stabilization of the diagonal $R^{stab}$.



3) MF(R,w) is a Calabi Yau dg-category.



Now my question is how much of the above remains true when the singularity is non-isolated. In some writings, Kontsevich, while not explicitly saying so, writes as if the homotopy category always has a compact generator and the category is thereby "dg-affine", e.g. equivalent to D(A), the derived category of modules over a dg-algebra. Is this indeed known to be true or false? If not, is there a way to prove 2) without making reference to 1)? I'm asking because I haven't found anything about this in the literature, but a lot of things in this field are either not written down or are written in physics literature that I'm not familiar with.



I've checked a few examples with non-isolated singularities and it appears that, for example, in the category of factorizations $(k[[x,y]], xy^2)$, while $k^{stab}$ doesn't generate as Dyckerhoff proves, one has $(k\oplus k[[x]])^{stab}$, which I think does generate. The way I want to argue this is via Dyckerhoff's Theorem 3.6: it is enough to show that $Tor_S(k\oplus k[[x]], M)=0$ implies that $Tor_S(N,M)=0$, where N is a finitely generated T-module and M is any S-module. Then one does an analysis of finitely generated modules over T (I didn't think about the characteristic 2 case) and does some devissage with short exact sequences. Please let me know if this sounds off. I also think that with a bit more calculation one can prove similarly that in $(k[[x,y,z]], xyz)$ the module $(k[[x]]\oplus k[[y]]\oplus k[[z]]\oplus k)^{stab}$ is a compact generator.



Added: I think the right generalization of the above two examples is the following: in http://websupport1.citytech.cuny.edu/faculty/hschoutens/pdf/finiteprojdim.pdf, the author introduces the notion of a "net". The above method should give a compact generator whenever the net of finitely generated modules over T is generated as a net by finitely many modules. This happens, for example, when T has finitely many prime ideals. The modules A/p, where p is a prime, generate the net of finitely generated projective modules over T, which is enough to prove the vanishing above. In particular, this should take care of the case when T has dimension 1. A question is: what are some conditions on (R,w) which lead to the net of finitely generated modules over T being generated by finitely many objects?



Assuming that this is right, I think that deriving 2) and 3) for these examples becomes a formality in view of Dyckerhoff's Section 5: one just replaces his compact generator with the new one.

riemannian geometry - Linear/Non-linear sigma model

I don't know anything about the QFT side, so I'll refrain from saying things about it.



For the mathematics, one of the reasons that there aren't that many expository/introductory references may be that the development of the (non-linear) theory is rather incomplete. (The linear theory is sort-of trivial: it boils down to decoupled linear wave equations.) The simplest versions of the non-linear sigma model are the harmonic map/wave map systems (the former is Riemannian/elliptic, the latter Lorentzian/hyperbolic).



Perhaps I should say a few words here to establish notation. Here a sigma model generally means a Lagrangian theory for maps $\phi: M\to N$, where $M$, endowed with a pseudo-Riemannian metric $g$, is called the source manifold, and $N$ the target. The Lagrangian density is given by $\mathcal{L} = L\, dvol_g$, where in index notation $L = g^{ij}k_{AB}\partial_i\phi^A\partial_j\phi^B$, and $k_{AB}$ is some symmetric tensor depending, possibly, on the map $\phi$ and its first jet.



Then the linear sigma model can be interpreted as when $N$ is some finite dimensional vector space and $k$ an inner product on $N$.



For the harmonic/wave map systems, $N$ is endowed with a Riemannian metric $h$, and $k_{AB}$ is set to be equal to the metric $h$. So we can also view $L$ as the $g$-trace of the pull-back metric $\phi^*h$.



A lot of words have been written about harmonic maps. For an introduction, Jost's book Riemannian Geometry and Geometric Analysis has a good section on it. The notes of Helein, Harmonic Maps, Conservation Laws, and Moving Frames, are also quite nice. Schoen and Yau's Lectures on Harmonic Maps, as well as Eells and Lemaire's book Selected Topics in Harmonic Maps, are both very good.



One instance where the Riemannian harmonic maps have come into play is the study of stationary axisymmetric solutions to Einstein's equations in general relativity. I refer you to the works of Gilbert Weinstein or to Luc Nguyen's PhD Thesis at Rutgers University.



For the Lorentzian version, the question is much more open. In the general case, local well-posedness follows from general theory (the system of PDEs forms a semilinear hyperbolic system of equations). As far as I know, all further work (blow-up or global existence for various target manifolds) has been done only with the source manifold being Minkowski space. A reasonably complete set of references can be found at the Dispersive Wiki. Among the notable recent results are the blow-up results of Rodnianski-Sterbenz and Raphael-Rodnianski, and the global well-posedness results for negatively curved targets due to Tao and Krieger-Schlag (you can find all these on the arXiv).



Now, the harmonic/wave map systems can be described as the simplest of a family of nonlinear sigma models for maps between pseudo-Riemannian manifolds. Assume now $(M,g)$ and $(N,h)$ are the source and target manifolds, and $\phi: M\to N$ some map. We shall write $D^\phi$ for the (1,1)-tensor field given by $g^{-1}\circ \phi^*h$. $D^\phi$ induces at every point a linear map from $T_pM$ to itself. Note that the harmonic/wave-map Lagrangian is characterized by $L = \operatorname{tr} D^\phi$, or the first invariant $\lambda_1$ of the matrix $D^\phi$. Naturally one asks whether Lagrangian field theories with the Lagrangian being (linear combinations of) other invariants $\lambda_k$ are interesting.
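
(To pin down one convention for the invariants $\lambda_k$ that appear in the two special cases below: they can be taken to be the coefficients of the characteristic polynomial of $D^\phi$, i.e. the elementary symmetric functions of its eigenvalues. This is the usual choice, though it is worth checking against whichever reference you follow.)

$$\det\!\left(I + t\,D^\phi\right) = \sum_{k=0}^{m} \lambda_k\!\left(D^\phi\right) t^k, \qquad m = \dim M, \qquad \lambda_1 = \operatorname{tr} D^\phi, \quad \lambda_m = \det D^\phi.$$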



There are two special cases which I know are actively studied. The case where $L = \lambda_1 + \lambda_2$ is known as the Skyrme model (there are, again, a Riemannian and a Lorentzian version depending on the signature of the source manifold). You can read a lot more about it in Manton and Sutcliffe's book Topological Solitons (for the traditional case where the target manifold is $SU(2)$ with the bi-invariant metric and the source manifold is either Minkowski space or $\mathbb{R}^3$). This model originally arose in nucleon physics. One interesting fact is that the harmonic map system from $\mathbb{R}^3$ to $\mathbb{S}^3$ does not admit finite energy solutions; but it looks like the Riemannian Skyrme model might (it is still an open problem). You may want to look up the works of Lev Kapitanski if you are interested in theoretical work in this direction. For the Lorentzian version not much is known (it is one of the things I am working on; a pre-print from Jared Speck and me should be available after the job application season).



The other special case which has been studied is the case where $L = (\det D^\phi)^p$ ... roughly speaking. (In the case where $M$ and $N$ are both Riemannian and have the same dimensions, this is correct; in the case where $M$ is Lorentzian and has one more dimension than $N$, the determinant should be thought of as being restricted to space-like slices [else it vanishes identically].) This is the case of non-linear elasticity and fluid dynamics (though this is not how most books in elasticity and fluids formulate their theory, the various formulations are roughly equivalent). I'm not sure who the active participants are for this model, but I vaguely remember Lars Andersson having some interest in it. (For a bit more about the mathematical set-up of this model, a good reference is, if I recall correctly, Demetrios Christodoulou's 1998 AIHP paper "On the geometry and dynamics of crystalline continua" as well as his book Action Principle and Partial Differential Equations.)

Wednesday, 13 February 2013

co.combinatorics - Battleship Permutations

Consider the following simpler problem: how many ways are there to place a length-m ship and a length-n ship on a r-by-s grid? I'll assume m and n are distinct. Also, I'll say the grid has r rows and s columns.



The number of ways to place the length-m ship is $(r-m+1)s + (s-m+1)r$. There are $(r-m+1)s$ ways to place the ship vertically (s columns, each having r-m+1 possible placements) and $(s-m+1)r$ ways to place the ship horizontally.



Similarly, the number of ways to place the length-n ship is $(r-n+1)s + (s-n+1)r$.



The total number of ways to place the two ships should be the product of these, except we have to consider the possibility that the ships could intersect. If the two ships intersect, either:
- one is horizontal and one is vertical, and the intersection is a single square, or
- both have the same orientation, and the two ships lie in the same row or column.
In either case I suspect that the number of configurations with that intersection structure is a polynomial in m, n, r, s.



I conjecture, therefore, that the answer to this problem is a polynomial in m, n, r, s. Let m and n be constants and let r, s vary; then the leading term is $4 r^2 s^2$. Essentially there are $2rs$ ways to place each ship.
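
If one wants to check the conjecture (or just get exact counts) for small boards, a brute-force enumeration is easy. Here is a sketch in Python; the function names are mine, not part of the question:

    from itertools import product

    def placements(k, r, s):
        """All placements of a length-k ship on an r-by-s grid, as sets of cells.
        For k <= min(r, s) there are (r-k+1)*s + (s-k+1)*r of them, matching the formula above."""
        out = []
        for i, j in product(range(r), range(s)):
            if j + k <= s:                    # horizontal placement starting at (i, j)
                out.append(frozenset((i, j + d) for d in range(k)))
            if i + k <= r:                    # vertical placement starting at (i, j)
                out.append(frozenset((i + d, j) for d in range(k)))
        return out

    def two_ship_count(m, n, r, s):
        """Number of ways to place a length-m and a length-n ship (m != n) without overlap."""
        return sum(1 for a in placements(m, r, s)
                     for b in placements(n, r, s)
                     if not (a & b))

    print(two_ship_count(2, 3, 10, 10))       # exact count for one board size

Comparing exact counts for several values of r and s against a candidate polynomial would confirm or refute the conjecture for those ship lengths.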



Similarly, I suspect that with the actual Battleship fleet on a grid of arbitrary size (ships of length 2, 3, 3, 4, 5 on an n-by-n grid) the number of ways to place the ships is a polynomial with leading term $32n^{10}$, and the actual number of possible placements is much less than this. In particular, the number of ways to place a length-k ship on a 10-by-10 grid is $20(11-k)$; thus the number of placements in actual Battleship is somewhat less than $20^5 \times 9 \times 8 \times 8 \times 7 \times 6 = 77414400000$. If I actually wanted a good approximation of this number, I would place ships at random on a Battleship grid and see with what frequency they don't intersect.
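
That Monte Carlo estimate might be set up as follows in Python (a sketch, assuming each ship is placed uniformly at random among its own $20(11-k)$ legal positions, independently of the others; on a square board the two orientations have equally many positions, so a fair coin flip for the orientation gives a uniform placement):

    import random

    N = 10                      # board size
    SHIPS = [5, 4, 3, 3, 2]     # the standard Battleship fleet

    def random_placement(k):
        """A uniformly random placement of a length-k ship, as a set of cells."""
        if random.random() < 0.5:                              # horizontal
            r, c = random.randrange(N), random.randrange(N - k + 1)
            return {(r, c + i) for i in range(k)}
        else:                                                  # vertical
            r, c = random.randrange(N - k + 1), random.randrange(N)
            return {(r + i, c) for i in range(k)}

    def fleet_fits():
        """Place the fleet independently at random; True if no two ships overlap."""
        occupied = set()
        for k in SHIPS:
            cells = random_placement(k)
            if cells & occupied:
                return False
            occupied |= cells
        return True

    trials = 10**5
    hits = sum(fleet_fits() for _ in range(trials))
    independent_total = 1
    for k in SHIPS:
        independent_total *= 20 * (11 - k)                     # product is 77414400000
    print("fraction non-intersecting:", hits / trials)
    print("estimated fleet placements:", hits / trials * independent_total)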

How small can a planet be and still have Earth-like gravity?

The surface gravity of a planet is very close to
$$g=\frac{4\pi G}{3}\rho r.$$
With $g$ to be kept constant, and $\frac{4\pi G}{3}$ a constant, we need
$\rho_P r_P=\rho_E r_E$, or
$$r_P=\frac{\rho_E}{\rho_P}r_E,$$
with $\rho_E=5.515 \mbox{ g}/\mbox{cm}^3$ the mean density of Earth, $r_E=6371.0 \mbox{ km}$ the mean radius of Earth, $\rho_P=22.59\mbox{ g}/\mbox{cm}^3$ the density of the densest natural element, osmium, and $r_P$ the radius of the fictive osmium planet.



Hence $$r_P=\frac{5.515}{22.59}r_E=0.2441~r_E=1555\mbox{ km}.$$



Some compression of the core of an osmium planet due to pressure is neglected.
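
A quick numerical check (a minimal sketch in Python, assuming a uniform-density sphere and the constants quoted above):

    from math import pi

    G      = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    rho_E  = 5515.0         # mean density of Earth, kg/m^3
    r_E    = 6371.0e3       # mean radius of Earth, m
    rho_Os = 22590.0        # density of osmium, kg/m^3

    def surface_gravity(rho, r):
        """g = (4*pi*G/3) * rho * r for a uniform-density sphere."""
        return 4.0 * pi * G / 3.0 * rho * r

    r_P = (rho_E / rho_Os) * r_E
    print(r_P / 1e3)                        # about 1555 km
    print(surface_gravity(rho_E, r_E))      # about 9.8 m/s^2
    print(surface_gravity(rho_Os, r_P))     # the same, by construction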

Tuesday, 12 February 2013

ct.category theory - Computing the image of the unit map of the join/overcategory adjunction for simplicial sets

Definitions:



Recall the definition of the join of two simplicial sets. We may regard the functor $-\star Y$ as a functor $i_{Y,-\star Y}:Set_\Delta\to (Y\downarrow Set_\Delta)$ by replacing the resulting simplicial set $X\star Y$ with the canonical map $i_{Y,X\star Y}: Y\cong \emptyset\star Y \to X\star Y$. It is not hard to show that $i_{Y,-\star Y}$ commutes with colimits in $(Y\downarrow Set_\Delta)$, and therefore that it admits a right adjoint. We call the adjoint functor the overcategory or over-simplicial-set functor. We can give an explicit description of this simplicial set: Given an arrow $f:Y\to S$, define $(S\downarrow f)_n:=Hom_{(Y\downarrow Set_\Delta)}(i_{Y,\Delta^n\star Y},f)$.



As with all adjunctions, we have a unit and counit natural transformation, $\eta_X:X\to (X\star Y\downarrow i_{Y,X\star Y})$ and $\epsilon_f: i_{Y,(X\downarrow f)\star Y}\to f$ respectively.



Then the question: Intuitively, the unit map should map $X$ to the simplicial subset of $(X\star Y \downarrow i_{Y,X\star Y})$ spanned by the "original" vertices of $X$, and I have seen this applied before (specifically in the case where $X$ is a simplex). However, I can't seem to figure out why this should be true formally. Is it true, and if so, how can we prove it?



Edit: As Tim mentions, the augmentation is given by the empty presheaf, that is, $\Delta^{-1}:=\emptyset$, which gives $X(-1):=Hom(\Delta^{-1}, X)=Hom(\emptyset,X)=\{\ast\}$.

orbit - How can we avoid needing a leap year/second?

Not only can we avoid leap seconds - in fact, that's how it used to work. And there is a common newer system which avoids leap seconds as well.



Before 1960, seconds were defined as 1/86400 of a mean solar day. Then when variations in the earth's rotation caused it to get out of sync, a new mean solar day could be computed and divided by 86400 - changing the length of the second in absolute terms, stretching or shrinking it very slightly.



That was a mess, as you can imagine. So the second was defined in terms of a specific number of atomic oscillations which could be made extremely precise. Instead of shrinking and stretching the second to keep an exact number of them in a day, we keep the second fixed and add or subtract one from the (integer) count when we need to adjust.



Those are pretty much the ways to keep earth rotation timing in sync with our clock time - you need some give somewhere: either you change the length of the second and keep the count fixed, or you keep the length fixed and change the count. For somebody just writing a simple program to, say, compute the civil seconds between two UTC timestamps, the old way was easier (a fixed count of seconds between two times is trivial). But if you are doing scientific or engineering calculations or experiments to great precision, it's WAY better to have a very firmly fixed length of a second; changing it from time to time would be much worse than the inconvenience of taking leap seconds into account.



By the way, another approach is to just ignore leap seconds and keep your clocks running continuously. That's how GPS time works - it started in sync with UTC, but has not been adjusted for the leap seconds since then, so they are out of sync by a quarter minute or so (I haven't checked in a while). That's nice for GPS orbital calculations that cross leap second adjustment boundaries. In the GPS data packet there is information about the current delta between UTC and GPS time so you can calculate civil time from GPS time, as well as a few months' advance warning when a new leap second is going to be added or omitted.



Another answer suggested queuing up leap seconds and making a multi-second leap every decade. That doesn't really simplify your software much though - now you have to allow minutes with, say, 67 seconds every decade. It's easier to just deal with leap seconds using a table and meanwhile never be off by even 1 second. (The standard allows for them to be added or omitted, by the way - you could have a 59-second minute or a 61-second minute when you need an adjustment. It's generally the latter, though.)
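
For what it's worth, the table-driven approach really is only a few lines. Here is a sketch in Python (the three dates shown are real leap seconds, but the list is deliberately partial; a real implementation would carry the full, regularly updated IERS table):

    from datetime import datetime, timezone

    # Instants just after a positive leap second was inserted (partial list).
    LEAP_BOUNDARIES = [
        datetime(2006, 1, 1, tzinfo=timezone.utc),   # leap second at 2005-12-31 23:59:60
        datetime(2009, 1, 1, tzinfo=timezone.utc),   # leap second at 2008-12-31 23:59:60
        datetime(2012, 7, 1, tzinfo=timezone.utc),   # leap second at 2012-06-30 23:59:60
    ]

    def elapsed_si_seconds(t0, t1):
        """SI seconds between two UTC instants t0 <= t1: the naive civil
        difference plus one second for each leap second inserted in between."""
        naive = (t1 - t0).total_seconds()
        inserted = sum(1 for b in LEAP_BOUNDARIES if t0 < b <= t1)
        return naive + inserted

    t0 = datetime(2008, 1, 1, tzinfo=timezone.utc)
    t1 = datetime(2013, 1, 1, tzinfo=timezone.utc)
    print(elapsed_si_seconds(t0, t1) - (t1 - t0).total_seconds())   # 2.0: two leap seconds in that span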



Oh, one other solution. The organization which really tracked all this was called the International Earth Rotation Service, later renamed to International Earth Rotation and Reference Systems Service (IERS). Imagine the chaos if they stopped being funded and the Earth stopped rotating. Anyway, I suppose you could just ask them to rotate it more consistently. :-)

at.algebraic topology - Which local homeos to numerical space are bijective?

I am reading T. Szamuely's book on Galois groups and fundamental groups.
As preparation to the algebraic case, he recalls the topological case.
So I am wondering if a surjective local homeomorphism $f$ from some connected space $X$ to $\mathbb{R}^n$ is necessarily a covering, in which case it would be bijective since $\mathbb{R}^n$ is simply connected.



What about the differentiable case?

Monday, 11 February 2013

co.combinatorics - quiver mutation

First of all, I would like to note that there is a very nice applet, due to Keller, which mutates quivers (and does much more):
http://people.math.jussieu.fr/~keller/quivermutation/



Also, much information on cluster algebras (the definition of which requires quiver mutation) can be found at the cluster algebra portal
http://www.math.lsa.umich.edu/~fomin/cluster.html



Some very nice introductions to, and surveys of, some of the theories which were developed thanks to cluster algebras and mutation are:



http://people.math.jussieu.fr/~keller/publ/KellerCatAcyclic.pdf



http://people.math.jussieu.fr/~keller/publ/KellerClusterAlgQuivRep.pdf



http://uk.arxiv.org/pdf/1012.4949.pdf



http://uk.arxiv.org/pdf/1012.6014.pdf



http://people.math.jussieu.fr/~keller/publ/KellerCYtriangCat.pdf



Here are a few examples of areas of research which are related to (or motivated by) Fomin-Zelevinsky's quiver mutation:



Cluster tilting theory (in representation theory of quivers and algebras);



Triangulations of punctured Riemann surfaces;



Higher Teichmuller spaces;



Poisson geometry;



In algebraic geometry: Stability conditions, Calabi-Yau algebras, Donaldson-Thomas invariants...



Let me give a few more details on cluster tilting:
The definition of a cluster algebra makes use of the notion of seed mutation. Quiver mutation is a part of this seed mutation. As an analogy, one can consider the flip of triangulations of an n-gon (to flip a triangulation, delete one of its arcs and replace it by the only other arc that gives a triangulation). Through this analogy, seeds correspond to triangulations, and seed mutations to flips.



Now, in the representation theory of finite-dimensional algebras, there is a notion of tilting modules. Such modules can sometimes be mutated at an indecomposable summand (as triangulations can be flipped at an arc), but not always: some summands cannot be mutated. Moreover, there is a quiver naturally associated with such a module (the Gabriel quiver of its endomorphism algebra). Through a mutation, the associated quivers are related by Fomin-Zelevinsky's quiver mutation in some cases, but not always.



The whole theory of cluster tilting, including cluster categories and their generalisations, module categories over preprojective algebras, more general Calabi-Yau triangulated categories... arose from the (successful) attempt to fix these two problems in the relation between tilting theory and cluster algebras.



As a concrete application of this theory, one can cite Keller's proof of Zamolodchikov's periodicity conjecture:



http://people.math.jussieu.fr/~keller/publ/KellerPeriodicity.pdf

teaching - Founding of homological algebra without quite involving derived categories

I am looking at the foundations of homological algebra, e.g. the introduction
of Ext and Tor, and am unsatisfied. The references I look at start with
"this is called a projective module, this is called a projective resolution,
now pick one and use it to define the right derived functors of your
left exact functor". I would like to see a presentation more along the
following lines:



  1. The functor Hom(A,*), applied to a short exact sequence of modules,
    doesn't produce another such. An oracle tells us that it does produce
    a long exact sequence; what could it be?


  2. We already know (from antiquity) that a short exact sequence of
    complexes induces a long exact sequence on cohomology.


  3. But in #1 we put in modules, not complexes. So let's fix that by hoping
    that Hom(A,*) can be extended in a natural way to the category of complexes
    (and really, to descend to the derived category).


  4. Such an extension might be required to have the following properties: ???


  5. Now I'd like it to be easy to see that the extension is unique if it
    exists. When is it easy to compute? At this point I'd like the definition
    of "projective module" to suggest itself.


  6. Finally, the usual boring checks that using projective resolutions
    to define it, the extension does indeed exist.


One way to answer this is to say "In part 4, define the derived category,
and its t-structure, then ask that the extension be exact in the
appropriate sense". I'm hoping to avoid going quite that far, or at least,
doing it in a way that doesn't involve introducing too many more definitions.

ct.category theory - General construction for internal hom in a presheaf category

The formula for the internal hom between presheaves $F\colon C^{op}\to Set$ and $G\colon C^{op}\to Set$ can be derived from the Yoneda lemma. Given $c\in C$, we know that we must have $G^F(c) \cong Hom(y(c), G^F) \cong Hom(y(c) \times F, G)$, so we can simply define $G^F(c) = Hom(y(c) \times F, G)$, which is evidently a presheaf on $C$. The isomorphism $Hom(H,G^F)\cong Hom(H\times F, G)$ for non-representable $H$ then follows from the fact that every presheaf $H$ is canonically a colimit of representables, and $Hom(-,G^F)$ and $Hom(-\times F,G)$ both preserve colimits (the former by definition of colimits, and the latter by that and since limits and colimits in presheaf categories are computed pointwise and products in $Set$ preserve colimits).



This is Proposition I.6.1 in "Sheaves in geometry and logic."

radio astronomy - Is there a cosmic, rather than technological, upper limit to what a telescope can resolve?

Space radio interferometers could have a baseline of millions of kilometers, but is there a point where a larger baseline doesn't improve the resolution anymore because the photons observed are distorted before they arrive? This question deals with technological limits of resolution. I'm instead asking about cosmic limitations due, for example, to interstellar and extragalactic gas which scatters light.



This paper about results from the RadioAstron space/Earth interferometer is well above my pay grade, but it seems to be about this problem. The executive summary says:




At longer baselines of up to 235,000 km, where no interferometric
detection of the scattering disk would be expected, significant
visibilities were observed with amplitudes scattered around a constant
value. These detections result in a discovery of a substructure in the
completely resolved scatter-broadened image of the pointlike source,
PSR B0329+54. They fully attribute to properties of the interstellar
medium.


Sunday, 10 February 2013

ag.algebraic geometry - Inverse limits of finite unramified covers, and the need for formally unramified covers

This question evolves mostly from my surprise at the answer to a previous question of mine (Is every flat unramified cover of quasi-projective curves profinite?). My surprise was that inverse limits of finite unramified (even etale) covers (by cover I mean surjective morphism) of a scheme X are not necessarily unramified. This, of course, is in complete contrast to the topological case. As I understand it (and correct me if I'm wrong), being formally unramified is closed under inverse limits.



But still - this seems very odd to me. Is there any characterization of the points over which an inverse limit of unramified covers is ramified? Why does this happen? In what way would it ramify? I'm seeking a heuristic that would make me understand the benefits of being formally unramified over being unramified (and therefore also in what way being unramified differs from the topological analogy, in this respect).

ac.commutative algebra - Hochschild and cyclic homology of smooth varieties

Many of the standard sources which discuss the Hochschild-Kostant-Rosenberg theorem and cyclic homology for smooth varieties, such as Loday and Weibel's paper "The Hodge Filtration and Cyclic Homology", ignore the positive characteristic case. Based upon these sources, I wasn't really sure if this was from a lack of knowledge or because the theorems are just not really that good in characteristic p. Here are a few rather simple (and hopefully correct!) observations about Hochschild homology and cyclic homology of smooth varieties over a field of characteristic p>0 which hopefully get the ball rolling. These are all trivial observations (as long as they are right), but they seem to suggest that there is some interesting math in the characteristic p>0 case, and I was wondering whether I had made a mistake or misunderstood something in the literature, and what the opinion of experts is on these questions.



1) If $X$ is a smooth scheme over a field k of characteristic $p>0$, then we can prove the Hochschild-Kostant-Rosenberg theorem as follows. The basic observation is that, to compute $HH_*(X)=Tor_{O_{X\times X}}(\Delta_{*}O_X,\Delta_{*}O_X)$, we note that by adjunction this is the same as $Tor_{O_X}(\Delta^*\Delta_*O_X,O_X)$, and the complex in the first argument is canonically isomorphic to the tangent complex $\bigoplus\Lambda^iT(X)[-i]$, as proven on page 247 of Huybrechts' book on the Fourier-Mukai transform.



2) If $A$ is a smooth commutative ring, again over a field k of characteristic $p>d$, where d is the Krull dimension of the ring, then all the arguments that are given in Loday's book regarding the relationship between de Rham cohomology and cyclic homology seem to work exactly the same when the characteristic is $p>d$. In particular, the spectral sequence converging to cyclic homology still degenerates on the second page. This should lead to the following scheme-theoretic theorem as well: the periodic cyclic homology is isomorphic to $\prod H^*_{dR}$ if the characteristic is $p>d$.



3) The above theorems seem to suggest that maybe the above degeneration holds for smooth algebras over a field, independent of the characteristic. Somewhat independently of that, one could wonder if the de Rham cohomology and periodic cyclic homology always agree for smooth varieties over a field. Does anyone know of any counterexamples to this? Again, in the affine case, this might for example follow from a sort of Cartier isomorphism, optimistically a quasi-isomorphism $(C^*(A,A)((u)),d+uB) \to (C^*(A,A)((u)),d)$. In the case of ordinary de Rham theory, for general schemes there are obstructions to realizing the Cartier isomorphism at the chain level like this - but I think these obstructions all vanish for affine schemes, hence this guess. Anyways, I have the impression that Kaledin proved something like this, but I haven't had a chance to study it yet, so I thought I'd just ask the MO community.

Saturday, 9 February 2013

geometry - Cosine sum problem

Assume $\sum_{i=1}^n a_i x_i=0$ with $x_i$ on the circle and $a_i>0$, $\sum a_i=1$.
Then, pairing this relation with $\sum_{j=1}^n x_j$,
$\sum_{i=1}^n\sum_{j=1}^n a_i\langle x_i,x_j\rangle=0$.



Since the $a_i$ are positive weights summing to 1, the left-hand side is a weighted average of the sums $\sum_{j=1}^n\langle x_i,x_j\rangle$, so it follows that for some $i$, $\sum_{j=1}^n\langle x_i,x_j\rangle\le 0$.



Since $\langle x_i,x_i\rangle=1$, this implies
$\sum_{j:\, j\neq i} \langle x_i,x_j\rangle\le -1$, which is what you want.

soft question - Favorite popular math book

I know this is a little late for Christmas, but nevertheless, I have a few (some of which have already been mentioned) books I've read that I've quite enjoyed. For the sake of brevity, I'll let you search the titles on Amazon for reviews and better descriptions.



Title: Everything & More: A Compact History of Infinity
Author: David Foster Wallace



Title: The Mathematical Experience
Author(s): Philip J Davis & Reuben Hersh



Title: One, Two, Three...Infinity
Author: George Gamow



Title: Pi in the Sky
Author: John D. Barrow



Title: Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics
Author: John Derbyshire



Title: Strength in Numbers
Author: Sherman Stein



Title: e: The Story of a Number
Author: Eli Maor



Title: A History of Pi
Author: Petr Beckmann



Title: Nature's Numbers
Author: Ian Stewart



Title: Mathematics: The Science of Patterns
Author: Keith Devlin



Title: Zero: The Biography of a Dangerous Idea
Author: Charles Seife



Title: How to Enjoy Calculus
Author: Eli S. Pine
(Not really a "popular" book, per se, but still pretty good)



Title: How to Think About Weird Things
Author(s): Theodore Schick & Lewis Vaughn
(Not really about mathematics, but not so far out of the way that you wouldn't enjoy it if you also enjoy mathematics)

Black Hole, Object or Portal?

There should be nothing special about crossing the "event horizon" - the line beyond which light cannot escape and so, in theory at least, one might "travel into a black hole" and not be aware of any physical changes at all.



However, I suspect what you mean is what would happen if one travelled to the centre of a black hole. Here we may speculate, but our current theories of physics break down at this "singularity" and so we cannot know - nor is it easy to think of how we might design an experiment, given our currently available technologies, to test any theory we have.

Friday, 8 February 2013

gr.group theory - distribution of non-solvable group orders

The encyclopedia of integer sequences gives the following criteria for a number being a non-solvable number:



A positive integer n is a non-solvable number if and only if it is a multiple of any of the following numbers:



a) $2^p (2^{2p}-1)$, p any prime.



b) $3^p (3^{2p}-1)/2$, p odd prime.



c) $p(p^2-1)/2$, p prime greater than 3 and congruent to 2 or 3 mod 5



d) $2^4 3^3 13$



e) $2^{2p}(2^{2p}+1)(2^p-1)$, p odd prime.



It's not hard to check that all these orders are divisible by 4, so there will never be two non-solvable numbers differing by less than 4.



In fact, they're all divisible by 12 except those generated by (e), which are all divisible by 20.



So, for example, all numbers of the form 29120n are non-solvable, since $29120 = 2^6 (2^6+1) (2^3-1)$. And all numbers of the form 25308n are non-solvable, since $25308 = 37(37^2-1)/2$. We have the prime factorizations $25308 = 2^2 \cdot 3^2 \cdot 19 \cdot 37$ and $29120 = 2^6 \cdot 5 \cdot 7 \cdot 13$.



So we just need to find multiples of 29120 and 25308 which differ by 4. From the Euclidean algorithm, $29120 \cdot 2483 = 72304960$ and $25308 \cdot 2857 = 72304956$.
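
Spelled out in Python (a sketch; extended_gcd is the standard extended Euclidean algorithm, and p is reduced modulo 25308/4 so the two multiples are as small as possible for this particular pair of base numbers):

    def extended_gcd(a, b):
        """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    a, b = 29120, 25308
    g, x, _ = extended_gcd(a, b)      # g == 4, and a*x == 4 (mod b)
    p = x % (b // g)                  # smallest positive p with a*p == 4 (mod b)
    q = (a * p - g) // b              # then a*p - b*q == 4
    print(g, p, q)                    # 4 2483 2857
    print(a * p, b * q)               # 72304960 72304956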



I haven't searched exhaustively, so it's possible that there's a smaller pair of non-solvable numbers that differ by 4; I chose 25308 and 29120 by just looking at the prime factorizations of the numbers generated by (a) through (e) until I found two that had gcd 4.

Thursday, 7 February 2013

extra terrestrial - How should one rationally deal with the issue of space travelling alien civilizations?

What kind of reasoning is appropriate to understand the as of today unanswered question of whether there are (other) interstellar space travelling civilizations in the Milky Way?



We have already sent probes towards the border of the Solar system. And even landed human beings on another celestial body and brought them home alive and well. If we extrapolate the 50 years of space travel, the 100 years of electronics (radio), the 400 years of physical science, to just a fraction of the biological age of humankind into the future (like a few thousand years), interstellar travel is not out of the question for us, or at least for our artefacts. So I imagine two possible alternatives:



1) The Milky Way is cluttered by lots of space travelling civilizations like us and our future. Once one of them/us gets going, they'll soon be everywhere. The Sun orbits the Milky Way every 250 million years, about 2% of the age of the galaxy. Going to the nearest stars is enough to soon be everywhere. But if they are everywhere since almost always, they should be here, we should be their seed.



2) We are the only space travelling civilization in the entire galaxy, ever. But then what makes us unique? We consist of the most common elements and volatiles of the universe and our planet and star and galactic location all seem to be very typical. There's no known trace of any uniqueness here. Whatever could it be?



Are there more alternatives?



While we cannot say today which alternative is true, we should be able to at least specify the possible alternatives. But to me they all seem to be absurd! What would be a rational logical scientific approach to this apparent paradox?

Wednesday, 6 February 2013

amateur observing - Affordable night sky photography

As an amateur with limited budget, I'd be interested in taking photos of the night sky, trying to capture more detail than human eye armed with a lens of comparable parameters to what I have in my camera normally could see. I doubt I'd ever get down to details as fine as Jupiter's moons, but I'd hope to see detail of some nebulae I have a hard time seeing through my inexpensive telescope, stars too dim to notice in less-than-perfect conditions etc. I'm interested in taking full-sky images just as well as zooms on specific objects too.



Currently, I have a lower-end SLR camera with two lenses - one with good sharpness though a lower maximum aperture, at 50-120mm focal length, and a wide-angle, high-brightness one (about 12-50mm). Firmware hacks allow me to take exposures of arbitrary length, and I have the remote to start and stop them without touching the camera, and generally software-wise the camera is quite powerful. One of the lenses (the longer focal length) is of "standard professional" quality level too.



Is this sufficient to get started? If so, what kind of settings should I use? If not, what other kind of entry-level equipment would I need to obtain/build on budget to get started with night sky photography?