Thursday, 31 October 2013

uranus - What are the current accepted theories of the formation of the Uranian moon Miranda?

Miranda, a moon of Uranus, is unique in that it has a very fractured surface.






Source: University of Oregon



The surface is said to be jagged and fractured, with comparatively large disjointed cliffs and great faultlines.



What are the current accepted theories for the formation of such a strange world?



Please include authoritative links in any reply.

Wednesday, 30 October 2013

nt.number theory - Recovering $\phi(n)$ from a multiple?

I've been attending a series of lectures on Cryptography from an engineering perspective, which means that most of the assertions made are supplied without proof... here's one that the lecturer could recall neither the reason for nor the original source of.



Given an unfactored $n=pq$, computing $\phi(n)$ is as hard as finding $p,q$; this is the key idea of various "RSA-like" cryptosystems. One presented had a step in which for a secret $k$ and a random $t$, $k-t\phi(n)$ is transmitted. The claim was then that this process should only be applied once, as if an attacker sees $k-t\phi(n)$ and $k-u\phi(n)$ then they can recover $(t-u)\phi(n)$, and it's alleged that this makes it easier to compute $\phi(n)$.



So my question is, why is it easier to compute $\phi(n)$ given a random multiple of it, assuming we're at "cryptographic size"? (that is, $p,q$ sufficiently large that it's not feasible to try and factor $n$ and $\phi(n)$)
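For intuition (this is not from the lecture): a multiple $M$ of $\phi(n)$ satisfies $a^M \equiv 1 \pmod n$ for every $a$ coprime to $n$, so the classic argument that recovers $p,q$ from the RSA private exponent applies verbatim. A toy sketch under that assumption (the function name and parameters are mine):

```python
import math
import random

def factor_from_phi_multiple(n, M, tries=64):
    """Try to factor n = p*q given any multiple M of phi(n).

    Since a^M = 1 (mod n) for gcd(a, n) = 1, write M = 2^s * t with t
    odd and look for a nontrivial square root of 1 while squaring a^t
    up towards a^M; a gcd then exposes a factor.  (Same argument that
    shows knowing the RSA private exponent d lets you factor n.)
    """
    t, s = M, 0
    while t % 2 == 0:
        t, s = t // 2, s + 1
    for _ in range(tries):
        a = random.randrange(2, n - 1)
        g = math.gcd(a, n)
        if g != 1:
            return g                       # stumbled on a factor directly
        x = pow(a, t, n)
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x != 1 and x != n - 1:
                return math.gcd(x - 1, n)  # nontrivial sqrt of 1 mod n
            x = y
    return None

p, q = 10007, 10009                        # toy primes, far from crypto size
n, phi = p * q, (p - 1) * (q - 1)
print(factor_from_phi_multiple(n, 7 * phi) in (p, q))  # True
```

Each random base succeeds with probability at least 1/2, so a handful of tries suffices with overwhelming probability.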

Tuesday, 29 October 2013

Why does Earth enjoy a significantly longer time as a habitable planet as compared to Mars or Venus?

By survived, do you mean remained habitable?



If so, then there are several reasons:



For Venus, the problem was that it was just barely on the edge of the habitable zone when the Sun was younger and less bright. As the Sun aged and brightened towards the values we see today, any oceans began to evaporate. The resultant water vapour is a very powerful greenhouse gas, and so would have trapped yet more heat, causing a runaway greenhouse effect, until Venus became the baked, hellish world we see today.



Mars, on the other hand, was believed to be too small. It (probably, assuming modern theories are correct) started off as another habitable world, but since it was smaller, it would've had less residual heat left over from its birth, and would've cooled quicker. This would've caused the core to solidify, removing the magnetic field. This meant that the atmosphere was vulnerable to the solar wind, which would've gradually blown it - and any warming greenhouse gases - away, and with the lower pressure, the water would evaporate away as well, rendering Mars uninhabitable.

How to interpret this old degree notation?

The notation $(x;\,y:z)$ seems to be $(30\times 60)x+60y+z$ in minutes of arc. The calculation seems to take just the integer part rather than rounding. Thus, for example, step 8 is



$$\begin{eqnarray}((64552 / 292207) \times 360)^\circ - 3' &=& 79^\circ\,\lfloor 31.7\rfloor' - 3' \\ &=& (2\times 30 + 19)^\circ\,28' \\ &=& (2;\,19:28)\text{,}\end{eqnarray}
$$



and step 10 is



$$134\,\sin(0:32) = \lfloor 1.247\rfloor = 1 = (0:01)\text{.}$$



Some passages in the paper that point towards this interpretation:




(2) How does one interpret what the numerical values of the ræk are taken to represent? It is universally agreed that the counting of the rasi (ราศี, signs of the zodiac) begins with Aries (Mesa) = 0; but the "r" at 0.



...



$^{12}$ Where desirable, values in arcmins are here converted to signs, degrees, and arcmins in order to make them compatible with following operations. Thus at stage C12, the value 258 arcmins becomes 0; 4, 18 to make it compatible with 2; 19, 28.




Since the zodiac partitions the celestial longitude into twelve $30^circ$ divisions, this makes sense, and seems to work out fairly straightforwardly though a bit cryptically, because the calculation mixes degrees and minutes: e.g., the minuend (first part) of Step 8 is in degrees, which is only implicitly converted to minutes of arc to be compatible with the subtrahend ($3'$).
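Under this reading, the $(x;\,y:z)$ notation is just a mixed-radix encoding of arcminutes, which is easy to check mechanically (a sketch; the function names are mine):

```python
def to_rasi(arcmin):
    """Split arcminutes into (signs; degrees : minutes):
    one sign = 30 degrees = 1800 arcminutes."""
    signs, rem = divmod(arcmin, 1800)
    degrees, minutes = divmod(rem, 60)
    return signs, degrees, minutes

def from_rasi(signs, degrees, minutes):
    """Inverse conversion back to arcminutes."""
    return (30 * signs + degrees) * 60 + minutes

print(to_rasi(258))                   # (0, 4, 18), as in footnote 12
print(to_rasi(from_rasi(2, 19, 28)))  # (2, 19, 28), the step-8 value
```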

Monday, 28 October 2013

ag.algebraic geometry - Finite group scheme acting on a scheme such that there is an orbit NOT contained in an open affine.

The standard example is as follows: Take a $3$-fold $X$; for example let $X=\mathbb{P}^3$. Let $\sigma$ be an automorphism of $X$ of order $n$; for example, $(x_0:x_1:x_2:x_3) \mapsto (x_1:x_2:x_3:x_0)$. Let $C_1$, $C_2$, ..., $C_n$ be an $n$-gon of genus $0$ curves, with $C_i$ meeting $C_{i-1}$ and $C_{i+1}$ transversely and disjoint from the other $C_j$'s, with $\sigma(C_i)=C_{i+1}$. For example, $C_1 = \{(*:*:0:0)\}$, $C_2 = \{(0:*:*:0)\}$, $C_3 = \{(0:0:*:*)\}$ and $C_4 = \{(*:0:0:*)\}$. Let $p_i=C_{i-1} \cap C_{i}$.



Using this input data, we make our example. Take $X \setminus \{ p_1, p_2, \ldots, p_n \}$ and blow up the $C_i$. Call the result $X'$. Also, take a neighborhood $U_i$ of $p_i$, small enough to not contain any other $p_j$. Blow up $C_i \cap U_i$, then blow up the proper transform of $C_{i-1} \cap U_i$. Call the result $U'_i$. Glue together $X'$ and the $U'_i$'s to make a space $Y$. Clearly, $\sigma$ lifts to an action on $Y$.



There is a map $f: Y \to X$. The preimage $f^{-1}(p_i)$ consists of two genus zero curves, $A_i$ and $B_i$, meeting at a node $q_i$. The $q_i$ form an orbit for $\sigma$. We claim that there is no affine open $W$ containing the $q_i$.



Suppose otherwise. The complement of $W$ must be a hypersurface, call it $K$. Since $K$ does not contain $q_i$, it must meet $A_i$ and $B_i$ in finitely many points. Since $W$ is affine, $W$ cannot contain the whole of $A_i$ or the whole of $B_i$, so $K$ meets $A_i$ and $B_i$. This means that the intersection numbers $K \cdot A_i$ and $K \cdot B_i$ are all positive. We will show that there is no hypersurface $K$ with this property.



Proof: Let $x$ be a point on $C_i$, not equal to $p_i$ or $p_{i+1}$. Then $f^{-1}(x)$ is a curve in $Y$. As we slide $x$ towards $p_i$, that curve splits into $A_i \cup B_i$. As we slide $x$ towards $p_{i+1}$, that curve becomes $A_{i+1}$. (Assuming that you chose the same convention I did for which one to call $A$ and which to call $B$.) Thus, $[A_i] + [B_i]$ is homologous to $[A_{i+1}]$. Summing over $i$, $\sum [A_{i+1}] = \sum ([A_i] + [B_i])$, hence
$\sum [B_i] = 0$, and so $\sum B_i \cdot K = 0$. Thus, it is impossible that all the $B_i \cdot K$ are positive. QED



I think this example is discussed in one of the appendices to Hartshorne.

observation - Data for red-shifting

Red shift is usually measured for galaxies rather than individual stars. Unless a star has just gone supernova, it's usually not bright enough to be seen, even with the world's most powerful telescopes, at the distances where cosmological redshift comes into play. Hubble's law operates over large distances, the expansion constant being 67.8 km/sec per megaparsec (3.26 million light years).
The Andromeda galaxy (M31) is 2.54 million light years away, and although some individual stars are visible at that distance, the galaxy has a net blueshift of 0.001001 (303 km/sec) due to its peculiar (i.e. non-Hubble) velocity.
Further out is where you start to see redshifts in excess of peculiar motion, and there you need to start using standard candles of some sort.
Cepheid variables (when visible), supernovae, and globular cluster brightness functions are all used, as well as some spectral methods: Cosmic distance ladder



Supernova measurements reach the furthest out, to the highest redshifts, but they are still a bit of a can of worms as far as interpretation goes.
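For scale, a quick Hubble's-law estimate with the constant quoted above shows why Andromeda's peculiar velocity dominates its Hubble flow (the numbers are illustrative only):

```python
H0 = 67.8  # km/s per megaparsec, the value quoted above

def recession_velocity(d_mpc):
    """Hubble's law v = H0 * d (meaningful only well beyond peculiar motions)."""
    return H0 * d_mpc

# Andromeda at ~0.78 Mpc: Hubble flow of only ~53 km/s, swamped by its
# ~300 km/s peculiar (infall) velocity -- hence the net blueshift.
print(round(recession_velocity(0.78)))  # 53
```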

Sunday, 27 October 2013

ag.algebraic geometry - Wild Ramification

Let me give a low-brow answer to the question, and begin with my earlier answer (which got a couple of downvotes, so you have to take it with a pinch of salt). There I discussed quadratic extensions.



I'm assuming that your base field $K$ is a finite extension of $\mathbb{Q}_p$ or of $\mathbb{F}_p((\pi))$, where $p$ is a prime number and $\pi$ is transcendental. (Very little will change if you allow $K$ to be a field complete for a discrete valuation with perfect residue field.) Let $k$ be the residue field of $K$.



Finite extensions $L|K$ can be unramified, (at worst) tamely ramified or wildly ramified. The three cases correspond to $e=1$, $\operatorname{gcd}(e,p)=1$, $p|e$, where $e$ is the ramification index of $L|K$. There are uniquely determined subfields $K\subset L_0\subset L'\subset L$ such that $L_0|K$ is unramified, $L'|L_0$ is totally but tamely ramified, and $L|L'$ is totally ramified of degree $p^s$ for some $s\in\mathbb{N}$, so it is wildly ramified if $s>0$. (For me $0\in\mathbb{N}$; I want it to be an additive monoid.)
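For concreteness, a minimal example over $K=\mathbb{Q}_3$ (so $p=3$; this illustration is mine, not part of the original answer):

$$e\bigl(\mathbb{Q}_3(\sqrt{5})\,|\,\mathbb{Q}_3\bigr)=1 \ \text{(unramified)},\quad e\bigl(\mathbb{Q}_3(\sqrt{3})\,|\,\mathbb{Q}_3\bigr)=2 \ \text{(tame, }\gcd(2,3)=1),\quad e\bigl(\mathbb{Q}_3(\sqrt[3]{3})\,|\,\mathbb{Q}_3\bigr)=3 \ \text{(wild, }3\mid e).$$

Here $5$ is a non-square in the residue field $\mathbb{F}_3$, so the first extension is unramified, while the other two are Eisenstein and hence totally ramified.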



Unramified extensions can be completely understood in terms of extensions of the residue field $k$. When $k$ is finite as here, there is only one extension in each degree $n$, and it is obtained by adjoining a primitive $(q^n-1)$-th root of $1$, where $q=\operatorname{Card}(k)$. It follows that the maximal unramified extension of $K$ is obtained by adjoining primitive roots of $1$ of order prime to $p$.



Tamely ramified extensions are only slightly more complicated. It is not hard to show that if $L|K$ is totally but tamely ramified of degree $n$, then $L=K(\root n\of\varpi)$ for some uniformiser $\varpi$ of $K$, and not hard to determine when two uniformisers $\varpi$ and $\varpi'$ give the same extension. See for example Lecture 18 in my online notes arXiv:0903.2615. As shown there, the maximal tamely ramified extension $T|K$ is obtained by adjoining $\root n\of1$ and $\root n\of\varpi$ for all $n>0$ prime to $p$, where $\varpi$ is a fixed uniformiser of $K$. This allows you to write a simple presentation for the profinite group $\operatorname{Gal}(T|K)$ in which the generators have some arithmetic significance.



(If your base field had been $\mathbb{C}((t))$, all of whose finite extensions are totally but tamely ramified, you would have been able to conclude at this point that an algebraic closure is obtained by adjoining an $n$-th root of $t$ for every $n>0$.)



What are all totally ramified extensions $L|K$ of degree $p^s$? Very little is known about the question. It is easy to see that there are only finitely many $L$ in the mixed characteristic case, and infinitely many $L$ in the equicharacteristic case, but there are exactly $p^s$ extensions when counted properly.



As Pete says, the abelian ones are given by local class field theory (in terms of index-$p^s$ subgroups of $K^\times$). (But even for $s=1$ there are extensions which are not galoisian, and I don't know them all.) The exponent-$p$ ones can be understood not only in terms of Class Field Theory, but also Kummer Theory or Artin-Schreier Theory.



The question is, loosely put, what is known about wild ramification?



The answer is, loosely put, not much. (Joking, eh ?)



Addendum (26/2/2010). Yesterday I came across a recent theorem of Abrashkin which says, roughly speaking, that if you know all the wildly ramified extensions (along with their filtration), then you know the local field.



More precisely, let $K$ be a local field of residual characteristic $p$, and $P|K$ the maximal pro-$p$-extension of $K$ --- the compositum of all $p$-extensions of $K$. The profinite group $G=\operatorname{Gal}(P|K)$ comes with the ramification filtration (in the upper numbering).



If $K',P',G'$ is another such triple, and if $\varphi:K\to K'$ is an isomorphism of local fields, then it induces an isomorphism of filtered pro-$p$-groups $G\to G'$.



Abrashkin's theorem says that, conversely, every isomorphism of filtered pro-$p$-groups $G\to G'$ comes from an isomorphism $K\to K'$ of local fields. In other words, the local field $K$ is completely determined by the filtered pro-$p$-group $G$. See Theorem A in his recent paper



This is a refinement of an earlier theorem of Mochizuki, who worked with $\operatorname{Gal}(\tilde K|K)$, where $\tilde K$ is a separable closure of $K$.

algorithms - Finding a subgraph with slightly large size in planar graphs

It seems very plausible to me that the $k$-path problem is in $2^{O(\sqrt{k})}\operatorname{poly}(n)$ time on planar graphs. Other parameterized subgraph problems (e.g. $k$-vertex cover) are known to exhibit such algorithms, so why not? But I don't know of any further progress in this direction.



For general directed graphs, solving the $\omega(\log^2 n)$-path problem in polynomial time is known to be "ETH-hard", meaning that such an algorithm would imply that 3SAT is in subexponential time. This was proved by Björklund, Husfeldt, and Khanna, and the paper can be found here: http://repository.upenn.edu/cis_papers/205/




Andreas Björklund, Thore Husfeldt, Sanjeev Khanna: Approximating Longest Directed Paths and Cycles. ICALP 2004: 222-233




In the case of general undirected graphs, this is open (as far as I know). A recent algorithm of Gabow and Nie has the property that if there is an $\ell$-cycle in a given undirected graph, then the algorithm can find a cycle of length $\exp(\Omega(\sqrt{\log \ell}))$ in polynomial time. So for general Hamiltonian graphs, you can find $\log^2 n$ length paths efficiently.




Harold N. Gabow, Shuxin Nie: Finding Long Paths, Cycles and Circuits. ISAAC 2008:752-763




I don't know what bearing this has on the planar case, but it certainly seems relevant.
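For background on the $k$-path problem itself: the classic color-coding method of Alon, Yuster and Zwick decides it in $2^{O(k)}\operatorname{poly}(n)$ time on general graphs, which is the baseline that a planar-specific $2^{O(\sqrt{k})}$ bound would improve. A toy sketch of that technique (my code, storing reachable color sets rather than the usual bitmask DP):

```python
import random

def has_k_path(adj, k, rounds=300):
    """Color-coding test for a simple path on k vertices: randomly
    k-color the vertices, then grow 'colorful' paths by dynamic
    programming over the sets of colors used.  If a k-vertex path
    exists, each round finds it with probability >= k!/k^k, so enough
    rounds make the one-sided error negligible."""
    n = len(adj)
    for _ in range(rounds):
        color = [random.randrange(k) for _ in range(n)]
        # reach[v]: color sets of colorful paths currently ending at v
        reach = [{frozenset([color[v]])} for v in range(n)]
        for _ in range(k - 1):
            new = [set() for _ in range(n)]
            for u in range(n):
                for s in reach[u]:
                    for v in adj[u]:
                        if color[v] not in s:
                            new[v].add(s | {color[v]})
            reach = new
        if any(len(s) == k for v in range(n) for s in reach[v]):
            return True
    return False

path5 = [[1], [0, 2], [1, 3], [2, 4], [3]]  # a path on 5 vertices
print(has_k_path(path5, 4))  # True
print(has_k_path(path5, 6))  # False: only 5 vertices available
```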

linear algebra - Linearity of the inner product using the parallelogram law

To me continuity is more geometric and intuitive than the rest of the argument (which is purely algebraic manipulation). So I take the liberty to misread your question as follows:



  • Is it possible to derive linearity of the inner product from the parallelogram law using only algebraic manipulations?

By "only algebraic" I mean that you are not allowed to use inequalities. (It is triangle inequality that allows one to use continuity. In fact, one can derive continuity using only the inequality $|u|^2ge 0$ and the parallelogram law.) Also, an algebraic argument must work over any field on characteristic 0.



The answer is that it is not possible. More precisely, the following theorem holds.



Theorem. There exists a field $F\subset\mathbb R$ and a function $\langle\cdot,\cdot\rangle: F^2\times F^2\to F$ which is symmetric, additive in each argument (i.e. $\langle u,v+w\rangle=\langle u,v\rangle+\langle u,w\rangle$), satisfies the identity $\langle tu,tv\rangle = t^2\langle u,v\rangle$ for every $t\in F$, but is not bilinear.



Note that the above assumptions imply that the "quadratic form" $Q$ defined by $Q(v)=\langle v,v\rangle$ satisfies $Q(tv)=t^2Q(v)$ and the parallelogram identity, and the "product" $\langle\cdot,\cdot\rangle$ is determined by $Q$ in the usual way. [EDIT: an example exists for $F=\mathbb R$ as well, see Update.]



Proof of the theorem. Let $F=\mathbb Q(\pi)$. An element $x\in F$ is uniquely represented as $f_x(\pi)$ where $f_x$ is a rational function over $\mathbb Q$. Define a map $D:F\to F$ by $D(x) = (f_x)'(\pi)$. This map satisfies $D(x+y)=D(x)+D(y)$ and $D(xy)=xD(y)+yD(x)$ (the Leibniz rule).



Define $P:F\times F\to F$ by $P(x,y) = xD(y)-yD(x)$. From the above identities it is easy to see that $P$ is additive in each argument and satisfies $P(tx,ty)=t^2 P(x,y)$ for all $x,y,t\in F$. Finally, define a "scalar product" on $F^2$ by
$$
\langle (x_1,y_1), (x_2,y_2) \rangle = P(x_1,y_2) + P(x_2,y_1) .
$$
It satisfies all the desired properties but is not bilinear: if $u=(1,0)$ and $v=(0,1)$, then $\langle u,v\rangle=0$ but $\langle u,\pi v\rangle=1$.



Update. One can check that if $\langle\cdot,\cdot\rangle$ is a "mock scalar product" as in the theorem, then for any two vectors $u,v$, the map $t\mapsto \langle u,tv\rangle - t\langle u,v\rangle$ must be a differentiation of the base field. (A differentiation is a map $D:F\to F$ satisfying the above rules for sums and products.) Thus mock scalar products on $\mathbb R^2$ are actually classified by differentiations of $\mathbb R$.



And non-trivial differentiations of $\mathbb R$ do exist. In fact, a differentiation can be extended from a subfield to any ambient field (of characteristic 0). Indeed, by Zorn's Lemma it suffices to extend a differentiation $D$ from a field $F$ to a one-step extension $F(\alpha)$ of $F$. If $\alpha$ is transcendental over $F$, one can define $D(\alpha)$ arbitrarily and extend $D$ to $F(\alpha)$ by the rules of differentiation. And if $\alpha$ is algebraic, differentiating the identity $p(\alpha)=0$, where $p$ is a minimal polynomial for $\alpha$, yields a uniquely defined value $D(\alpha)\in F(\alpha)$, and then $D$ extends to $F(\alpha)$. The extensions are consistent because all identities involved can be realized in the field of differentiable functions on $\mathbb R$, where differentiation rules are consistent.



Thus there exists a mock scalar product on $\mathbb R^2$ such that $\langle e_1,e_2\rangle=0$ but $\langle e_1,\pi e_2\rangle=1$. And I am sure I reinvented the wheel here - all this should be well-known to algebraists.
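The construction in the proof can be checked mechanically. The sketch below (my code, not part of the answer) restricts the entries to polynomials in $\pi$ represented as coefficient tuples, which is already enough to reproduce the failure of bilinearity:

```python
def norm(f):
    """Strip trailing zero coefficients from a coefficient tuple."""
    f = list(f)
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return tuple(f)

def add(f, g):
    """Add two polynomials in pi, stored as coefficient tuples."""
    n = max(len(f), len(g))
    f, g = f + (0,) * (n - len(f)), g + (0,) * (n - len(g))
    return norm(tuple(a + b for a, b in zip(f, g)))

def mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return norm(tuple(out))

def D(f):
    """Formal derivative with respect to pi: D(pi) = 1, D(constants) = 0."""
    return norm(tuple(i * f[i] for i in range(1, len(f))) or (0,))

def P(x, y):
    """P(x, y) = x D(y) - y D(x): additive in each slot, P(tx,ty) = t^2 P(x,y)."""
    return add(mul(x, D(y)), mul((-1,), mul(y, D(x))))

def ip(u, v):
    """The 'mock scalar product' on F^2 from the theorem."""
    (x1, y1), (x2, y2) = u, v
    return add(P(x1, y2), P(x2, y1))

one, zero, pi = (1,), (0,), (0, 1)
u, v = (one, zero), (zero, one)
print(ip(u, v))           # (0,)  i.e. <u, v> = 0
print(ip(u, (zero, pi)))  # (1,)  i.e. <u, pi*v> = 1, so not bilinear
```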

Saturday, 26 October 2013

Is there any planet/star bigger than VY Canis Majoris?

Stars
Stars
Estimates on the size of stars are just that, estimates, and estimates based on rather fuzzy observations. VY Canis Majoris has been bumped down in size. The current thinking is that there are seven known stars larger than VY Canis Majoris, the largest of which is UY Scuti.



Current models indicate that the first generation of stars were much, much larger than anything we see now. It will be quite some time before we can resolve a first generation star. So far, they are just theoretical objects.



Planets
Jupiter-mass planets are about as large as a planet can get. There are some exoplanets that are larger than Jupiter, but that's because they orbit much closer to their parent star than does Jupiter. This makes them puff up a bit. The reason Jupiter-mass planets are deemed to be the largest possible is that planets of this mass are presently assumed to have a core of degenerate hydrogen. A funny thing happens to degenerate masses when mass is added to them: They shrink in diameter. (The shrinkage becomes catastrophic as the mass approaches the Chandrasekhar limit.)



This means that, assuming all other things are equal (temperature, composition), a planet more massive than Jupiter will be smaller in diameter than Jupiter is. Even if all other things aren't equal, a Jupiter-diameter planet is (give or take) about as large as a planet can get.
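The shrinking can be quantified: for cold, non-relativistic degenerate matter, hydrostatic equilibrium gives roughly $R \propto M^{-1/3}$. A back-of-envelope illustration (the scaling law is standard; the function is mine):

```python
def degenerate_radius_ratio(mass_ratio):
    """For non-relativistic degenerate matter, R scales as M^(-1/3):
    adding mass compresses the object rather than growing it."""
    return mass_ratio ** (-1.0 / 3.0)

# Double the mass of a degenerate core and its radius *shrinks* by ~21%.
print(round(degenerate_radius_ratio(2.0), 3))  # 0.794
```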





Stars
In terms of mass, VY Canis Majoris doesn't even make the top ten, not even close! The most massive known star is R136a1. Again, these are estimates, but mass is a bit easier to pin down than is radius (or diameter).



As is the case with physical extent, the first generation stars are presently modeled as being much, much more massive than anything we see now.



Planets
There's not much difference between the largest planet and the smallest brown dwarf; it's a spectrum with no distinguishing characteristic that lets one say "this is a planet" and "that is a brown dwarf". Ignoring the distinction between super-Jupiters and brown dwarfs, the largest such objects are about 80 Jupiter masses; V1581 Cygni C is 79 Jupiter masses. (Any more massive than that and they start burning hydrogen, making them small red dwarfs.)



The current factor that is used to distinguish between brown dwarfs and super Jupiters is mass. Anything larger than 13 Jupiter masses is a brown dwarf, anything smaller, a planet. That boundary is very arbitrary.

gn.general topology - Do the empty set AND the entire set really need to be open?

If the empty set and the whole space are not open, then many statements you would like to make about open sets need qualifying remarks. It really can happen that two open sets are disjoint (two open balls that are far apart) or their union is the whole space (an appropriate pair of open half-planes that overlap). If the empty set were not open then
we would have to say that any finite intersection of open sets is open or is empty. You'd have to tack on "or empty" in a lot of statements (e.g., the complement of a closed set is open or is empty... I assume you would like to call the whole space closed?). It is easier to allow the empty set as an open set to avoid a profusion of "or empty" qualifiers in theorems.



If, as has been suggested in a comment, the issue being raised is whether or not that first axiom about topologies is simply redundant, it isn't. Without that axiom we could consider any single subset of a space as a topology on the space: that one set is closed under arbitrary unions and finite intersections of itself. In that setting the concept of an open cover loses its meaning, so it really seems like a dead end.



Edit: Without the whole space being allowed as open (which can happen for "topologies" without that first axiom), there need not be open coverings, and then the usefulness of point-set topology is seriously damaged.
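The redundancy question can even be poked at mechanically on finite sets, where dropping the first axiom admits degenerate "topologies" with no cover of the space. A small sketch (function name mine; pairwise closure suffices for a finite family):

```python
def is_topology(X, T):
    """Check the topology axioms on a finite set X: the empty set and X
    must be members, and T must be closed under unions and intersections
    (pairwise closure suffices when T itself is finite)."""
    T = {frozenset(s) for s in T}
    if frozenset() not in T or frozenset(X) not in T:
        return False
    return all(a | b in T and a & b in T for a in T for b in T)

X = {1, 2}
print(is_topology(X, [set(), {1}, X]))  # True
print(is_topology(X, [{1}]))            # False: no empty set, no whole space
```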

space time - Our universe the surface of a 4-dimensional sphere?

The surface of a 4-dimensional ball (called a 3-sphere) is a slice through the universe as a whole at a fixed cosmic time. This slice comprises just the three spatial dimensions. The observable universe is a tiny part of this 3-sphere; hence it looks flat (like 3-dimensional Euclidean space) to within measurement precision (about 0.4% at the moment).



Adding time makes the universe 4-dimensional. The observable part is similar to a 3+1-dimensional Minkowski space-time. The universe as a whole may be a de Sitter space-time. A de Sitter space-time is the analogue of a sphere ( = surface of a ball) embedded in a Minkowski space instead of a Euclidean space, but it's not literally a sphere.



If time were taken as an additional spatial dimension, the de Sitter space would resemble a hyperboloid of revolution, if embedded in a 5-dimensional hyperspace.
The difference from a Euclidean space is due to the different definition of the distance: in a 4-dimensional Euclidean space the distance between two points is defined by
$l = \sqrt{\Delta x^2+\Delta y^2+\Delta z^2+\Delta t^2};$ for a 3+1-dimensional Minkowski space it is $l = \sqrt{\Delta x^2+\Delta y^2+\Delta z^2-\Delta t^2}.$ For simplicity the speed of light has been set to $1$.
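A two-line comparison of the two distance formulas (with $c=1$, as above) makes the sign difference concrete; for instance, a light ray has zero Minkowski interval:

```python
import math

def euclidean(dx, dy, dz, dt):
    """4-dimensional Euclidean distance."""
    return math.sqrt(dx**2 + dy**2 + dz**2 + dt**2)

def minkowski_sq(dx, dy, dz, dt):
    """Squared 3+1 Minkowski interval (c = 1); can be zero or negative."""
    return dx**2 + dy**2 + dz**2 - dt**2

# A light ray moving one unit in x during one unit of time:
print(euclidean(1, 0, 0, 1))     # sqrt(2), about 1.414
print(minkowski_sq(1, 0, 0, 1))  # 0  (a 'null' separation)
```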



This model of the universe as a whole can hold, if it actually originated (almost) as a (0-dimensional) singularity (a point); but our horizon is restricted to the observable part, so everything beyond is a theoretical model; other theoretical models could be defined in a way to be similar in the observable part of the universe, but different far beyond, see e.g. this Planck paper.

Friday, 25 October 2013

history - How did our ancestors discover the Solar System?

Night skies are naturally dark, and there was no light pollution in ancient times, so, weather permitting, one could easily see a great many stars. To say nothing of the Sun and the Moon.



Ancient people had good reasons to study the night skies. In many cultures and civilizations, the stars (and also the Sun and the Moon) were perceived to have religious, legendary, premonitory or magical significance (astrology), so a lot of people were interested in them. It did not take long for someone (in reality, many different people independently in many parts of the world) to see patterns in the stars that were useful for navigation, localization, keeping hours, counting days, relating days to seasons, and so on. And of course, those patterns in the stars were also related to the Sun and the Moon.



So, surely all ancient cultures had people who dedicated many nights of their lives to studying the stars in detail, right from the Stone Age. They would also notice meteors (falling stars) and eclipses, and sometimes a very rare and spectacular comet.



Then there are the planets Mercury, Venus, Mars, Jupiter and Saturn. They are quite easy to notice as distinct from the stars, because all the stars seem to be fixed on the celestial sphere while the planets are not. They are very easily seen to wander around the sky with the passage of the days, especially Venus, which is the brightest "star" in the sky and also a formidable wanderer. Given all of that, ancient people surely became very aware of those five planets.



About Mercury, the Greeks initially thought that it was two bodies, one that showed up only in the morning a few hours before sunrise and another only a few hours after sunset. However, they soon figured out that it was in fact a single body, because either one or the other (or neither) could be seen on a given day, and the computed position of the unseen body always matched the position of the seen one.





Now, out of the Stone Age and into ancient times, navigators and merchants who travelled great distances perceived that the Sun's rising and setting points could vary not only with the seasons but also with location. Likewise, the distance from the pole star to the horizon varies with location. This reveals the existence of the concept nowadays known as latitude, and it was perceived by ancient astronomers in places like Greece, Egypt, Mesopotamia and China.



Astronomers and people who depended on astronomy (like navigators) would wonder why the distance from the pole star to the horizon varied, and one possibility was that the Earth is round. Registering the Sun's different angles in different locations of the world on the same day at the same hour also gives a hint that the Earth is round, as does the shadow on the Moon during a lunar eclipse. However, none of this by itself is proof that the Earth is round, so most people would bet on some simpler explanation, or simply didn't care about the phenomenon.



Most cultures in ancient times presumed that the world was flat. However, the idea of a round world has existed since ancient Greece. Contrary to the popular modern misconception, in the Middle Ages almost no educated person in the Western world thought that the world was flat.



As for the Earth's size, by observing different Sun positions and shadow angles in different parts of the world, Eratosthenes in ancient Greece calculated the size of the Earth as far back as the third century B.C. However, due to the confusion among the different and inconsistent units of measure in use back then, and the difficulty of precisely estimating long land and sea distances, confusion and imprecision persisted until modern times.



Ancient cultures also figured out that the shiny part of the Moon is illuminated by the Sun. Since the full Moon is easily seen even at midnight, when the Sun is below the horizon, sunlight must pass around the Earth, which implies that the Earth is not infinite. The fact that the Moon enters a rounded shadow exactly when it is on the opposite side of the sky from the Sun implies that it is the Earth's shadow on the Moon, which in turn implies that the Earth is significantly larger than the Moon.





So, people observed the Sun, the Moon, Mercury, Venus, Mars, Jupiter, Saturn and the fixed sphere of stars all revolving around the sky. They naturally thought that the Earth was the center of the universe and that all of those bodies revolved around it. This culminated in the work of the philosopher Claudius Ptolemaeus on geocentrism.



Although we now know that the Ptolemaic geocentric model is fundamentally wrong, it could be used to compute the positions of the planets, the Sun, the Moon and the celestial sphere of stars with a precision somewhat acceptable at the time. It managed to accommodate the observed variations in the planets' velocities and their retrograde motions, and it coupled Mercury and Venus to the Sun so that they would never stray very far from it. Further, based on the velocity of the motion of those bodies in the sky, the universe should be something like:



  • Earth at the center.

  • Moon orbiting the Earth.

  • Mercury orbiting the Earth farther than the Moon.

  • Venus orbiting the Earth farther than Mercury.

  • Sun orbiting the Earth farther than Venus.

  • Mars orbiting the Earth farther than the Sun.

  • Jupiter orbiting the Earth farther than Mars.

  • Saturn orbiting the Earth farther than Jupiter.

  • The celestial sphere of stars rotating around the Earth, being the outermost sphere.

In fact, the Ptolemaic model is a very complicated model, far more complicated than the Copernican, Keplerian and Newtonian models. It could be compared to software that is based on severely flawed concepts but that still works thanks to a lot of complex, tangled and unexplainable hacks and kludges that are there just for the sake of making the thing work.





Marco Polo, in the last years of the 1200s, was the first European to travel to China and back and leave a detailed chronicle of his experience. He brought a great deal of knowledge about central Asia, East Asia, the Indies, China, Mongolia and even Japan to the Europeans, who had previously known very little about what existed there. This greatly inspired European cartographers, philosophers, politicians and navigators in the years to come.



Portugal and Spain fought a centuries-long war against the invading Moors on the Iberian Peninsula; the Moors were finally expelled in 1492. The two states were looking for something profitable after so many years of war. Since Portugal finished its part of the war first, it had a head start and went to explore the seas first. Both Portugal and Spain were trying to find a navigation route to reach the Indies and China in order to trade highly profitable spices and silk. Those could no longer be traded efficiently by land, because West Asia and North Africa were dominated by Muslim cultures unfriendly to Christian Europeans, a situation made worse after the fall of Constantinople in 1453.



Portugal was colonizing the Atlantic coast of Africa, and eventually managed to reach the Cape of Good Hope in 1488 (with Bartolomeu Dias).



A Genoese navigator called Cristoforo Colombo believed that if he sailed west from Europe, he could eventually reach the Indies from the east side. Inspired by Marco Polo and underestimating the size of the Earth, he estimated the distance between the Canary Islands and Japan to be 3700 km (in fact it is 12500 km). Most navigators would not venture on such a voyage because they (rightly) thought that the Earth was larger than that.



Colombo tried to convince the king of Portugal to finance his journey in 1485, but after submitting the proposal to experts, the king rejected it because the estimated journey distance was too low. Spain, however, after finally expelling the Moors in 1492, was convinced by him. Colombo's idea was far-fetched, but after centuries of wars with the Muslims, if it worked, Spain could profit quickly, so the Spanish king approved it. Just a few months after expelling the Moors, Spain sent Colombo sailing west across the Atlantic, and he reached the island of Hispaniola in Central America. After his return, the news of the discovery of lands on the other side of the Atlantic spread quickly.



Portugal and Spain then divided the world by the Treaty of Tordesillas in 1494. In 1497, Amerigo Vespucci reached the American mainland.



Portugal would not be left behind: they managed to navigate around Africa to reach the Indies in 1498 (with Vasco da Gama). And they sent Pedro Álvares Cabral, who reached Brazil in 1500 before crossing back over the Atlantic on his way to the Indies.



After that, Portugal and Spain quickly started to explore the Americas and eventually colonized them. France, England and the Netherlands also came to the Americas some time later.





Afterwards, the Spanish discovered and settled the Americas (Colombo's plan had in fact not worked). The question of whether it was possible to sail around the globe to reach the Indies from the east remained open, and the Spanish were still interested in it. They eventually discovered the Pacific Ocean after crossing the Isthmus of Panama by land in 1513.



Eager to find a maritime route around the globe, the Spanish crown funded an expedition led by the Portuguese Fernão de Magalhães (or Magellan, as his name was rendered in English) to try to circle the globe. Magellan was an experienced navigator, and had previously reached what is present-day Malaysia by traveling through the Indian Ocean. They departed from Spain on September 20th, 1519. It was a long and tough journey that cost the lives of most of the crew. Magellan himself did not survive, dying in a battle in the Philippines in 1521. But he lived long enough to know that they had in fact reached East Asia by traveling westward around the globe, which also proved that the Earth is round.



The journey was eventually completed under the leadership of Juan Sebastián Elcano, one of Magellan's crewmen. The expedition returned to Spain through the Indian and Atlantic Oceans on September 6th, 1522, after traveling a distance of 81449 km over almost three years.





There were some heliocentric or hybrid geo-heliocentric theories in ancient times: notably by the Greek philosopher Philolaus in the 5th century BC, by Aristarchus of Samos around 270 BC, and by Martianus Capella around the years 410 to 420. Those models tried to explain the motion of the stars as rotation of the Earth, and the positions of the planets, especially Mercury and Venus, as revolution around the Sun. However, those early models were too imprecise and flawed to work properly, and the Ptolemaic model remained the one with the best predictions of the positions of the heavenly bodies.



The idea that the Earth rotates was much less revolutionary than heliocentrism, and was already more or less accepted, with some reluctance, in the Middle Ages. This is because if the stars rotated around Earth, they would need to do so at an astonishing velocity, dragging the Sun, the Moon and the planets with them, so it was simpler for Earth itself to rotate. People were uncomfortable with this idea, but they still accepted it, and it became easier to accept once Earth's sphericity was an established concept.



In the first years of the 1500's, while the Portuguese and Spanish were sailing around the globe, a Polish and very skilled mathematician and astronomer called Nicolaus Copernicus spent some years thinking about the mechanics of the heavenly bodies. After years of calculations and observations, he created a model of circular orbits of the planets around the Sun and realized that his model was much simpler than the Ptolemaic geocentric model and at least as precise. His model also featured a rotating Earth and fixed stars. Further, his model implied that the Sun was much larger than the Earth, something already strongly suspected at the time from calculations and measurements, and that Jupiter and Saturn were several times larger than Earth, so Earth would definitively be a planet just like the other five then-known planets. This could be seen as the birth of the model known today as the Solar System.



Fearing persecution and harsh criticism, he avoided publishing many of his works, sending manuscripts only to his closest acquaintances; however, his works eventually leaked out and he was convinced to allow their full publication anyway. Legend says that he was presented with his finally published work on the very day that he died in 1543, so he could die in peace.



There was a heated debate between supporters and opponents of Copernicus's heliocentric theory in the mid-1500's. One argument for the opposition was that stellar parallaxes could not be observed, which implied that either the heliocentric model was wrong or the stars were very far away and many of them would be even larger than the Sun, which seemed a crazy idea at the time.



Tycho Brahe, who did not accept heliocentrism, tried in the last years of the 1500's to save geocentrism with a hybrid geo-heliocentric model that featured the five other planets orbiting the Sun while the Sun and the Moon orbited Earth. He also published a theory which better predicted the position of the Moon. Also, by this time, the observation of some supernovae showed that the celestial sphere of the stars was not exactly immutable.



In 1600, the astronomer William Gilbert provided a strong argument for the rotation of Earth: by studying magnets and compasses, he could demonstrate that the Earth is magnetic, which could be explained by the presence of enormous quantities of iron in its core.





All of what I wrote above happened without telescopes, using only naked-eye observations and measurements around the globe. Now add even some small telescopes, and things change quickly.



The earliest telescopes were invented in 1608. In 1609, the astronomer Galileo Galilei heard about them and constructed his own. In January of 1610, using a small telescope, Galileo observed four small bodies orbiting Jupiter at different distances, figuring out that they were Jupiter's "moons"; he could also predict and calculate their positions along their orbits. Some months later, he observed that Venus had phases as seen from the Earth. He also observed Saturn's rings, but his telescope was not powerful enough to resolve them as rings, and he thought that they were two moons. These observations were incompatible with the geocentric model.



A contemporary of Galileo, Johannes Kepler, working from Copernicus' heliocentric model and making a great many calculations in order to explain the differing orbital velocities, created a heliocentric model in which the planets orbit the Sun in elliptical orbits with the Sun at one focus of the ellipse. His works were published in 1609 and 1619. He also suggested that tides were caused by the motion of the Moon, though Galileo was skeptical of that. His laws predicted a transit of Mercury in 1631 and of Venus in 1639, and those transits were in fact observed. However, a predicted transit of Venus in 1631 could not be seen, due to imprecision in the calculations and the fact that it was not visible from much of Europe.



In 1650 the first double star was observed. Further into the 1600's, Saturn's rings were resolved through the use of better telescopes by Christiaan Huygens, who also discovered Titan orbiting Saturn in 1655, putting more confidence in the heliocentric model. Robert Hooke observed a double star in 1664 and developed microscopes to observe cellular structures. From then on, many stars were discovered to be double. Four more Saturnian moons were discovered between 1671 and 1684.





Heliocentrism was reasonably well accepted by the mid-1600's, but people were not comfortable with it. Why did the planets orbit the Sun after all? Why did the Moon orbit Earth? Why did Jupiter and Saturn have moons? Although Keplerian mechanics could predict their movement, it was still unclear what made them move that way.



In 1687, Isaac Newton, one of the most brilliant physicists and mathematicians who ever lived (although he was also an implacable persecutor of his opponents), provided the gravitational theory (building on prior work by Robert Hooke). Ideas about gravitation and the inverse-square law had already been developed in the 1670's, but Newton published a very simple and clear theory of gravitation, very well founded in physics and mathematics, which explained the motions of the celestial bodies with great precision, including comets. It also explained why the planets, the Moon and the Sun are spherical, explained the tides, and explained why things fall to the ground. This made heliocentrism definitively and widely accepted.



Newton's gravitational law also predicted that Earth's rotation would make it not exactly spherical, but slightly ellipsoidal, flattened by a factor of 1:230, something that agreed with measurements made using pendulums in 1673.





In the early 1700's, Edmund Halley, already familiar with Newtonian laws (he was a contemporary of Newton), realized that comets that passed near Earth would eventually return, and he noticed a particular series of sightings spaced 76 years apart. He concluded that those comets were in reality all the same comet, which is now named after him.



The only remaining problem with the heliocentric model was the lack of an observed parallax for the stars, and nobody knew for sure what the stars were. However, if they were in fact very distant bodies, most of them would be much larger than the Sun. In the first half of the 1700's, while trying to observe parallax, James Bradley discovered phenomena such as the aberration of light and the Earth's nutation; the aberration of light also provided a way to calculate the speed of light. But the observation of parallax remained a challenge throughout the 1700's.



In 1781, Uranus was discovered orbiting the Sun beyond Saturn. Although barely visible to the naked eye under the darkest skies, it is so dim that it had escaped astronomers' notice until then, and so it was discovered with a telescope. The first asteroids were also discovered in the early 1800's. Investigation of perturbations of Uranus' orbit relative to its predicted Newtonian and Keplerian motion eventually led to the discovery of Neptune in 1846.



In 1838, the astronomer Friedrich Wilhelm Bessel, who had measured the positions of more than 50000 stars with the greatest precision possible, finally managed to measure the parallax of the star 61 Cygni, which proved that stars are in fact very distant bodies and that many of them are larger than the Sun. This also demonstrated that the Sun is a star. Vega and Alpha Centauri also had their parallaxes successfully measured in 1838. Furthermore, those measurements made it possible to estimate the distances between those stars and the Solar System as being on the order of many trillions of kilometers, or several light-years.
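The conversion behind those estimates is simple: a star's distance in parsecs is the reciprocal of its parallax in arcseconds. A quick sketch of the arithmetic (Bessel's published value for 61 Cygni, roughly 0.314 arcsec, is used for illustration; the modern value is closer to 0.286 arcsec):

```python
PARSEC_KM = 3.0857e13  # kilometres per parsec
LY_KM = 9.4607e12      # kilometres per light-year

def parallax_to_distance_km(parallax_arcsec):
    """d[parsec] = 1 / p[arcsec], converted to kilometres."""
    return (1.0 / parallax_arcsec) * PARSEC_KM

# Bessel's 1838 parallax for 61 Cygni, roughly 0.314 arcsec:
d = parallax_to_distance_km(0.314)
print(f"{d:.2e} km, i.e. about {d / LY_KM:.1f} light-years")  # about 10 light-years
```

This reproduces the "many trillions of kilometers" scale mentioned above.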

data analysis - Deriving Dark Matter; specifically looking for a table of stellar rotational speed versus distance from center of galaxy to derive dark matter

I'm trying to find some data that I can use to derive dark matter (in a loosey-goosey sort of way, I won't be too rigorous). I'm helping a friend out with a final project for his astronomy course and I'm tasked with doing the maths. Any idea where I can find a table of various stars' distances from the center of our galaxy and their rotational speeds (how fast they are going around the center of the galaxy)? This does not need to be comprehensive, but would ideally contain data points at several representative radii. Also, some sort of formatted text would be SUPER cool so I could just tell my computer to do the calculations!



Thanks!
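For reference, once you have such a table, the usual calculation is to compare each measured speed with the mass that Newtonian gravity requires inside that orbit, M(&lt;r) = v²r/G; the excess over the visible mass is the dark-matter inference. A minimal sketch (the 220 km/s and 8 kpc figures below are illustrative round numbers, not catalogue data):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
KPC = 3.086e19     # m

def enclosed_mass(radius_m, v_circ_ms):
    """Newtonian mass enclosed by a circular orbit: M(<r) = v^2 r / G."""
    return v_circ_ms ** 2 * radius_m / G

# Illustrative round numbers (NOT measured data): a star orbiting at
# 220 km/s at a galactocentric radius of 8 kpc.
m = enclosed_mass(8 * KPC, 220e3)
print(f"M(<8 kpc) ≈ {m / M_SUN:.2e} solar masses")  # roughly 1e11 with these numbers
```

Repeating this at several radii and seeing M(&lt;r) keep growing where the starlight has already run out is the standard flat-rotation-curve argument.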

linear algebra - Matrix Logarithms are Not Unique

In my ODE class, we proved that if exp(L) = exp(L') then the eigenvalues are congruent mod 2πi. Here L, L' are two n×n matrices. I wanted to know if something more precise is true.



In a way, we should expect matrix logs to be multiple-valued, since this is the case in $\mathbb{C}$: $\log(re^{i\theta}) = \log r + i\theta + 2\pi i k$ with $k \in \mathbb{Z}$. In this way we can construct a branched infinite cover of the complex plane.



We'll define the multiplicity mod 2πi of an eigenvalue λ to be the number of eigenvalues congruent to λ mod 2πi, counted with multiplicity. If exp(L) = exp(L'), are the spectra of L and L' the same, including multiplicity mod 2πi?



To put this another way, I could imagine two 5x5 matrices exp(L) = exp(L') where



  • the spectrum of L is (λ1, λ1, λ2, λ2 + 2πi, λ2 + 4πi)

while



  • the spectrum of L' is (λ1, λ1, λ1, λ2 + 2πi, λ2 + 4πi)

Here the multiplicities mod 2πi are different. One would be (2,3) while the other would be (3,2). Could exp(L) = exp(L') in this case?
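A quick numerical illustration of the basic non-uniqueness (a sketch; `expm_series` is a small hand-rolled helper, not a library routine): a rotation by 2π exponentiates to the identity, exactly like the zero matrix, so two different real matrices share the same exponential.

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via its Taylor series (fine for small matrices)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# A rotation by 2*pi exponentiates to the identity, just like the zero
# matrix, so exp(L) = exp(L') although L != L'.  The eigenvalues of L'
# are +/- 2*pi*i, congruent to the eigenvalues 0, 0 of L mod 2*pi*i.
L = np.zeros((2, 2))
Lp = np.array([[0.0, -2 * np.pi],
               [2 * np.pi, 0.0]])

print(np.allclose(expm_series(L), expm_series(Lp)))  # True
```

This only witnesses the eigenvalue congruence already proved in class; it does not settle the multiplicity question above.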

Thursday, 24 October 2013

ag.algebraic geometry - Etale covers of the affine line

As an excuse to talk about one of my favorite results, I thought I'd put this out there (even though I've already mentioned this to Tyler privately).



Abhyankar conjectured that the collection of finite quotients of the étale fundamental group of the affine line in characteristic $p$ is exactly the collection of quasi-$p$-groups. This was proved by Raynaud (as mentioned above). A slightly more complicated statement (for general curves) was proved by Harbater shortly thereafter.



Here's an even more interesting (to my mind) result:



Suppose $X$ a geometrically connected, projective variety of dimension $n$ over any field $K$ of positive characteristic. Suppose $L$ an ample line bundle on $X$, $D$ a closed subscheme of dimension less than $n$, and $S$ a $0$-dimensional subscheme of the regular locus of $X$ not meeting $D$. Then there exists a positive integer $r$ and an $(n+1)$-tuple of linearly independent sections of $L^{\otimes r}$ with no common zero such that the induced finite morphism $f : X \to \mathbb{P}^n_K$ of $K$-schemes meets the following conditions.



(1) If $H$ denotes the hyperplane at infinity, then $f$ is étale away from $H$.



(2) The image $f(D)$ is contained in $H$.



(3) The image $f(S)$ does not meet $H$.



This was proved by Abhyankar in dimension $1$, and the general result is due to Kedlaya. The proof is just gorgeous; it's even simpler than his first paper on the subject, which only works for infinite fields $K$.



This says something pretty remarkable: even though, in characteristic $0$, affine spaces are simply connected, in positive characteristic, every variety contains a Zariski open that is an étale cover of affine space! (Katz uses this kind of trick in his notes on Weil II.)

Wednesday, 23 October 2013

galaxy - Quasars and SMBH

Very good question. The number of quasars must be less than the number of SMBHs, since many galaxies, such as our own, contain SMBHs at their core (Sagittarius A*) and are not classified as quasars (i.e., some galaxies are quiescent for whatever reason). Quasars represent an ultra-luminous active phase of gas accretion onto the SMBH. Such large luminosities are believed to be caused by intense gas accretion triggered by major mergers between massive galaxies.



As such, quasars are short-lived events, and the SMBHs outlive the quasar phase (the lifetime of a quasar is of the order of $10^6$-$10^9$ yr, whereas the lifetime of a SMBH is much greater than the Hubble time). Hence, once the gas is all but consumed by such intense accretion, the quasar will slowly become quiescent. That is why many galaxies contain SMBHs at their cores but are no longer active.

big list - Essential theorems in group (co)homology


  1. Interpretation of cohomology of small degree:

$H^1(G,A)$ = crossed homomorphisms $G\to A$ modulo principal ones.



$H^2(G,A)$ = equivalence classes of extensions of G by A.



$H^3(G,Z(N))$ = obstructions to existence of extensions of $G$ by a group $N$ (the coefficients are the center of $N$).



2. Transfer and its applications: if $G$ is finite, then



1) $H^i(G,M)$ is a torsion group annihilated by multiplication by $|G|$.



2) Embedding of the $p$-primary component of $H^i(G,M)$ into a subgroup of $H^i(P,M)$, for any $p$-Sylow subgroup $P\subset G$.



3. In general, Brown's book "Cohomology of Groups" gives a decent overview of what is good to know.

milky way - Is Zeta Reticuli within the Orion Arm?

Let's take a look!



While I can't quite track down the source of this image, I've found it throughout the web, so hopefully it's roughly accurate. I'm pulling it from the Wikipedia article on the Orion Arm.



Sun's location in Orion Arm



In this image you can also see the Orion Nebula sitting (relatively) cozily to the Sun. The distance between the two is believed to be around 1350 light years. Zeta Reticuli, on the other hand, is believed to be a mere 39 light years from Earth. Given the orientation of these two locations and the relative distances, it would be hard to place Zeta Reticuli anywhere but firmly within the Orion Arm.



Not exactly rock solid evidence, but sometimes some back of the envelope deduction will do.

galaxy - What is supposedly in the center of the Milky Way?

Photos of the galactic center aren't too bright because of all the gas and dust between us and it. For example (in infrared):



Galactic center



I'm guessing, though, that you're talking about other galaxies, because there are no views of the galactic center of the Milky Way face-on. Although the galactic center is pretty luminous, just not in the wavelengths we're used to.



Anyway, the galactic center has lots of stars - many massive, but a handful like our Sun. They're young, though, which is odd, as there isn't much star formation happening.



The main attraction, though, is Sagittarius A*, a radio source that is apparently a supermassive black hole. It's actually a subset of the more complex radio source Sagittarius A. So far, astronomers think that most galaxies have supermassive black holes in their centers.



We know that it has to be really massive because of how it perturbs the orbits of stars nearby:



Star orbits near Sagittarius A*



We know it's massive and we know it's a very strong radio source. A supermassive black hole is just about the only thing it could be.



Courtesy of WayfaringStranger: An awesome picture from NASA:



Awesome picture from NASA



In this paper by Ghez et al., the orbits of stars were monitored and the mass of the central object was determined to be $4.1 \pm 0.6 \times 10^6 M_{\odot}$. This paper discusses the properties of the star cluster itself.
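The mass estimate follows from Kepler's third law applied to the monitored stellar orbits. A back-of-the-envelope sketch (the orbital elements below are rounded, illustrative values for the star S2, not the paper's fitted parameters):

```python
def central_mass_msun(a_au, period_yr):
    """Kepler's third law in solar units: M [M_sun] = a^3 [AU] / P^2 [yr]."""
    return a_au ** 3 / period_yr ** 2

# Rounded, illustrative orbital elements for the star S2 (semi-major
# axis ~1000 AU, period ~16 yr), NOT the paper's fitted values.
print(f"{central_mass_msun(1000, 16):.1e} M_sun")  # ~4e6 solar masses
```

Even with such crude inputs, the answer lands near the published $4.1 \times 10^6 M_{\odot}$, which is why the orbit-monitoring argument is so compelling.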

Tuesday, 22 October 2013

light - How is it known that Pillars of Creation are destroyed?

This is going to be a short answer, but it should help.



From Wikipedia:




Images taken with the Spitzer Space Telescope uncovered a cloud of hot dust in the vicinity of the Pillars of Creation that one group interpreted to be a shock wave produced by a supernova. The appearance of the cloud suggests a supernova would have destroyed it 6000 years ago. Given the distance of roughly 7000 light years to the Pillars of Creation, this would mean that they have actually already been destroyed, but because of the finite speed of light, this destruction is not yet visible on Earth, but should be visible in about 1000 years. However, this interpretation of the hot dust has been disputed by an astronomer uninvolved in the Spitzer observations, who argues that a supernova should have resulted in stronger radio and x-ray radiation than has been observed, and that winds from massive stars could instead have heated the dust. If this is the case, the Pillars of Creation will undergo a more gradual erosion.




A rough translation:



  • The Spitzer Space Telescope took pictures of a large blob of dust near the PoC.

  • The cloud is the result of a supernova blowing through the PoC and pushing dust outward.

  • Such a supernova would have dissipated the gas and dust in the PoC.

Also, this notes that the SST uses infrared, and as the dust wasn't seen in visible light, we wouldn't have realized from the Hubble photos that there were any effects from a hypothetical supernova.



Note, though, that some dissenters think that something completely different is responsible for the behavior of the dust, and so the PoC may still exist.
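The timing claim in the quoted passage is just light-travel-time bookkeeping, which can be spelled out:

```python
distance_ly = 7000       # light-travel distance to the Pillars, from the quote
shock_age_yr = 6000      # apparent age of the supernova shock in the images
years_until_visible = distance_ly - shock_age_yr
print(years_until_visible)  # 1000
```

That is where the "visible in about 1000 years" figure comes from.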

Are combinatorial configurations whose Levi graphs may be represented as covering graphs over voltage graphs realizable with pseudolines?

This question is related to this previous question. Many combinatorial configurations have Levi graphs which may be represented as derived graphs obtained from voltage graphs over a cyclic group; in a number of such cases, it is possible to represent the combinatorial configuration as a geometric configuration (i.e., using points and straight lines in the Euclidean plane).



Given a bipartite graph which is obtained from a voltage graph, we can view it as a Levi graph of some combinatorial configuration. Is it possible to draw all such configurations using pseudolines? If not, are there easy/known constraints on the ones that fail? (e.g., if there are more than x points in the configuration, then things work? You can't use such-and-so groups as the cyclic group for the voltage graph?)



(Does the Heawood graph have a voltage-graph representation? If so, it makes the first question easy to answer, but the second one is still interesting. Maybe.)

jupiter - How big would the asteroid belt planet be?

The largest main belt asteroid is 1 Ceres, which alone contains almost a third of the total mass of the whole main asteroid belt.



Ceres is large enough to be in hydrostatic equilibrium, i.e. its own gravity is strong enough to pull it into a roughly spherical shape. Since the mass of a spherical planet scales as the cube of the diameter (assuming constant density), piling all the other main belt asteroids together onto Ceres would only increase its diameter by a bit under 50%. It would still be a roughly similar type of body — a small sphere of partially differentiated rock and ice, with no atmosphere to speak of (as it'd be way too small to hold onto one).



Thus, to answer your question, a hypothetical planet containing all the matter currently making up the main asteroid belt would look pretty much like Ceres already does, just a bit bigger.
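The "a bit under 50%" figure follows from cube-root scaling of diameter with mass. A quick check (the mass ratio of ~3 comes from Ceres holding roughly a third of the belt's total mass, as stated above; equal densities are an idealization):

```python
# Diameter scales as the cube root of mass at fixed density.  The mass
# ratio of ~3 assumes Ceres holds roughly a third of the belt's total
# mass; equal densities are an idealization.
total_over_ceres = 3.0
diameter_factor = total_over_ceres ** (1.0 / 3.0)
print(f"diameter grows by a factor of {diameter_factor:.3f}")  # 1.442
```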

Monday, 21 October 2013

alexandrov geometry - Example of non-closed convex hull in a CAT(0) space

There are such examples already in the Riemannian world!
In fact, in any generic Riemannian manifold of dimension $\ge 3$, the convex hull of 3 points in general position is not closed.
BUT it is hard to make things explicit and generic at the same time :)



If it is closed then there are a lot of geodesics lying in its boundary --- and that is rare!
To see this, first do the following exercise: show that in a generic 3-dimensional manifold, an arbitrary smooth convex surface contains no geodesic. (Here geodesic = geodesic in the ambient space.)



To make the word "generic" more precise: show that any metric admits a $C^\infty$-perturbation after which the above property holds.



Semisolution:
Assume that a geodesic $\gamma$ lies in the boundary of a convex set $K$ with smooth boundary. Let $N(t)$ be the outer normal vector to $K$ at $\gamma(t)$. Note that $N(t)$ is parallel.
Further note that from the convexity of $K$ we get that for any Jacobi field $J(t)$ such that
$$\langle N(t_0),J(t_0)\rangle\le 0 \quad\text{and}\quad \langle N(t_1),J(t_1)\rangle\le 0,$$
we have
$$\langle N(t),J(t)\rangle\le 0 \quad\text{if}\quad t_0<t<t_1.$$
Note that this condition does not hold if the curvature tensor along $\gamma$ is generic.



P.S. Roughly, this means that convex hulls in the Riemannian world are too complicated. But I know one example where it is used: see Kleiner's An isoperimetric comparison theorem.
He only uses the fact that the Gauss curvature at non-extremal points on the boundary of convex hulls is zero...



Appendix. (A construction of the convex hull.) To construct the convex hull you can do the following: start with some set $K_0$ and construct a sequence of sets $K_n$ so that $K_{n+1}$ is the union of all geodesics with ends in $K_n$. The union $W$ of all $K_n$ is the convex hull. Now assume it coincides with its closure $\bar W$. In particular, if $x\in\partial\bar W$ then $x\in K_n$ for some $n$; i.e., there is a geodesic in $\bar W$ passing through $x$ (if $x\notin K_0$). From convexity, it is clear that such a geodesic lies in $\partial\bar W$...

geometry - Forgetting extra structure inducing Symmetries

This is a major edit of the original post after receiving helpful comments.



It is often the case that one adds additional structure to make a problem more tractable. When one attempts to forget this added structure, this leads to symmetries. One then needs to take the solution of the tractable case and quotient by the cases congruent under symmetry.



The simplest example is the study of free modules. We add a basis to make the problem tractable, reducing it to linear algebra. However, when attempting to make basis-free statements about linear spaces and maps, one must then talk about matrices up to similarity, and this is the congruence relation. The changes of basis are the symmetries.



Another example is the use of spectral sequences. The original grading of a graded ring may not be amenable to computation, so we add the structure of a filtration to introduce another grading. Again, one needs to forget this structure if one wants the original grading of the ring.



Do people have other examples of such a situation?

pr.probability - Brownian Approximation of Downswings of Walks with Positive Drift

I'm interested in the downswings of discrete walks w(t) whose steps are IID, bounded, and have positive mean. A simple example might have steps which are +1 with probability 2/3 and -1 with probability 1/3. A downswing of size at least D on [0,t] means there exist 0 <= a < b <= t with w(a) - w(b) > D.



Natural questions include the expected size of the largest downswing within [0,t] and the expected minimum b so that there is a downswing of size D ending at b.



One possible approach is to use a Brownian approximation with the same mean and standard deviation. This has the advantage that the distribution of the largest downswing on [0,t] has been studied. The expected time before a downswing of size D is computable and has a simple formula. Asymptotic expressions for the average size of the largest downswing on [0,t] have been computed. See Amrit Pratap's MS thesis.



However, the Brownian approximation has the disadvantage that it is wrong, and sometimes it is wrong by a lot. For example, a walk with only positive steps has no downswings at all.



I'd like to know how bad I should expect the Brownian approximation to be for steps which can be negative, with a positive mean that is small relative to the standard deviation. For example, -1 with probability 4/5 and +5 with probability 1/5. I'd also like to know whether a skew in the positive direction means that large downswings are less common in the discrete walk than in the Brownian approximation.



Any help would be appreciated.
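For anyone wanting to experiment numerically, the largest downswing on a sample path (the "maximum drawdown") can be computed in one pass with a running maximum, so the discrete walk and the Brownian formulas can be compared by simulation. A sketch (the walk parameters match the +1/-1 example above; `max_drawdown` and `simulate_walk` are just illustrative helper names):

```python
import random

def max_drawdown(path):
    """Largest downswing, max over a <= b of (w[a] - w[b]), in one pass."""
    peak = path[0]
    best = 0
    for x in path:
        peak = max(peak, x)
        best = max(best, peak - x)
    return best

def simulate_walk(n, p_up=2/3, seed=0):
    """The +1 (prob 2/3) / -1 (prob 1/3) walk from the question."""
    rng = random.Random(seed)
    w, path = 0, [0]
    for _ in range(n):
        w += 1 if rng.random() < p_up else -1
        path.append(w)
    return path

print(max_drawdown(simulate_walk(10_000)))
```

Averaging `max_drawdown` over many seeded runs gives a Monte Carlo baseline to hold up against the Brownian predictions.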

Sunday, 20 October 2013

the sun - Determine north just by one shadow

With only one shadow in a picture, no, you can not.



In order to get North's position from a shadow, you need to know where you are and what the local solar time and date are. These are two extra data, and you cannot disentangle them, even if the picture is perfect.



E.g. suppose a picture in which you can take exact measurements of the shadow and the people, and both are exactly the same size. This means the Sun is 45° above the horizon. But that may be local noon at Bordeaux on March 20th (thus the shadow points north), or local noon at Christchurch on the same date (thus the shadow points south), or 3 PM local solar time at Quito, again on March 20th (thus the shadow points east).



I chose March 20th because it makes it easy to visualize where the Sun would be (exactly over the Equator), and these cities because of their latitudes:



  • Bordeaux: 44° 50' N (almost 45°N)

  • Christchurch: 43° 31' S (almost 45°S)

  • Quito: 0° 13' N (almost on the Equator)
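The 45° figure in the example is just shadow trigonometry: tan(altitude) = object height / shadow length. A minimal sketch (the function name is illustrative):

```python
import math

def solar_altitude_deg(object_height, shadow_length):
    """Sun's altitude implied by a shadow: tan(altitude) = height / shadow."""
    return math.degrees(math.atan2(object_height, shadow_length))

# Object and shadow of equal length, as in the example above:
print(solar_altitude_deg(1.0, 1.0))  # equal lengths imply a 45-degree altitude
```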

Saturday, 19 October 2013

orbit - Why does the sun rise north of east between the vernal and autumnal equinox?

You know, the first time someone told me this, I was absolutely certain they were confused, ignorant, or otherwise mistaken, and I told him he had his facts wrong. To be precise, we were talking about an observer on the equator. I maintained that the Sun would rise due east and set due west, all year round. He said no.



What gave me pause, though, is that this person was far from ignorant, so maybe the other two appellations didn't fit either. I went home and tried to construct a convincing proof of my side, and in the middle of the night a very simple and convincing proof did come to mind. Except that it was a proof that I was wrong.



Imagine that you're on the equator at sunrise on the summer solstice. Due east is along the equator. But the plane of the equator is tilted relative to the Earth's orbit, and the Sun is well above (north of) it. You cannot see it due east at sunrise; it has to be $23.5^\circ$ north of east. (Remember, this is for an observer on the equator.)



By the same token, the Sun must rise $23.5^\circ$ south of east on the winter solstice, and it gradually transitions between these extremes during the year.
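Away from the equator the offset is even larger. The standard spherical-astronomy relation is $\sin A = \sin\delta / \cos\phi$, where $A$ is the rising amplitude (degrees north of due east), $\delta$ the Sun's declination and $\phi$ the latitude. A sketch (ignoring refraction and the Sun's finite angular size; the function name is illustrative):

```python
import math

def sunrise_offset_deg(latitude_deg, declination_deg):
    """Degrees north of due east at which the Sun rises, from
    sin(A) = sin(dec) / cos(lat).  Ignores refraction and the
    Sun's angular size."""
    s = math.sin(math.radians(declination_deg)) / math.cos(math.radians(latitude_deg))
    return math.degrees(math.asin(s))

print(sunrise_offset_deg(0.0, 23.44))   # equator at the June solstice
print(sunrise_offset_deg(45.0, 23.44))  # mid-latitudes: a larger offset
```

At the equator the offset equals the declination, recovering the $23.5^\circ$ argument above.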

Different interpretations of moduli stacks

I'll assure you that you're not crazy. Not only does the idea go through for stacks, but it's impossible (or at least very hard) to make sense of stacks without that idea.



If you're trying to parameterize wigits, you can build a functor F(T) = {flat families of wigits over T}. If there is a space M that deserves to be called the moduli space of wigits, it should represent F. It's not just that the points of M must correspond to isomorphism classes of wigits, so that F(Spec ℂ) = Hom(Spec ℂ, M); the points must also be connected up in the right way. For example, a family of wigits over a curve should correspond to a choice of wigit for every point of the curve in a continuous way, so it should correspond to a morphism from the curve to M.



It happens that if wigits have automorphisms, there's no hope of finding a geometric object M so that maps to M are the same thing as flat (read "continuous") families of wigits. The reason is that any geometric object should have the property that maps to it can be determined locally. That is, if U and V cover T, specifying a map from T to M is the same as specifying maps from U and V to M which agree on U∩V. The jargon for this is "representable functors are sheaves." If a wigit X has an automorphism, then you can imagine a family of wigits over a circle so that all the fibers are X, but as you move around the circle, it gets "twisted" by the automorphism (if you want to think purely algebro-geometrically, use a circular chain of ℙ¹s instead of a circle). Locally, you have a trivial family of wigits, so the map should correspond to a constant map to the moduli space M, but that would correspond to the trivial family globally, which this isn't. Oh dear!



Instead of giving up hope entirely, the trick is to replace the functor F by a "groupoid-valued functor" (fibered category), so the automorphisms of objects are recorded. Now of course there won't be a space representing F, since any space represents a set-valued functor, but it turns out that this sometimes revives the hope that F is represented by some mythical "geometric object" M in the sense that objects in F(T) (which should correspond to maps to M) can be determined locally. If this is true, we say that "F is a stack" or that "M is a stack." Part of what makes your question tricky is that as things get stacky, the line between M and F becomes more blurred. M isn't really anything other than F. We just call it M and treat it as a geometric object because it satisfies this gluing condition. We usually want M to be more geometric than that; we want it to have a cover (in some precise sense) by an honest space. If it does, then we say "M (or F) is an algebraic stack" and it turns out you can do real geometry on it.

Friday, 18 October 2013

the sun - What is the degree of ionization in the solar photosphere?

I am wondering how many free electrons per baryon and how many free electrons per atom there are in the solar photosphere. This number depends on the abundances of the various atoms found in the photosphere as well as each atom's ionization potential.



The book "Introduction to Stellar Astrophysics" by Boehm-Vitense, vol. 2, 1997 reprint, on page 76 claims that one out of every $10^4$ hydrogen atoms is ionized, so hydrogen is mostly neutral in the photosphere; however, there are other, heavier elements with lower ionization potentials. Boehm-Vitense goes on to calculate the ionization fraction of iron, but calculations for only two species present in the photosphere can hardly be considered comprehensive.



Whether you calculate these numbers yourself or provide a summary of a reference (with a link to that reference if possible) that has more details about ionization in the photosphere, it will be appreciated.



My guess is that, since hydrogen has such a low degree of ionization, the number of electrons per baryon and per atom will be small, since hydrogen is the majority constituent of the photosphere.
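For what it's worth, a rough Saha-equation estimate supports the book's figure. The snippet below is my own sketch, not from the book; the temperature ($T \approx 5800$ K) and electron pressure ($P_e \approx 1.5$ Pa) are assumed typical photospheric values.

```python
# Rough Saha-equation check (my own sketch, not from the book) of the
# "one H atom in 10^4 is ionized" figure. Assumed photospheric values:
# T ~ 5800 K and electron pressure P_e ~ 1.5 Pa.
import math

k, me, h, eV = 1.381e-23, 9.109e-31, 6.626e-34, 1.602e-19

T = 5800.0
chi = 13.6 * eV                # hydrogen ionization energy
ne = 1.5 / (k * T)             # electron density from P_e = n_e k T

# Saha: n_II n_e / n_I = 2 (g_II/g_I) (2 pi m_e k T / h^2)^{3/2} e^{-chi/kT}
rhs = (2 * (1 / 2)
       * (2 * math.pi * me * k * T / h ** 2) ** 1.5
       * math.exp(-chi / (k * T)))
frac = rhs / ne                # ionized-to-neutral ratio n_II / n_I
print(f"{frac:.0e}")           # on the order of 1e-4
```

The ratio comes out at roughly one ionized hydrogen atom in ten thousand, consistent with Boehm-Vitense's number.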

galaxy - How can we predict a Big Crunch when all galaxies are moving further apart?

In a homogeneous and isotropic Universe (even if recent observations challenge this hypothesis), you can derive the Friedmann equations, which describe the evolution of the Hubble parameter with time:
$\left(\frac{\dot{a}}{a}\right)^2 = H^2(t) = \frac{8 \pi G}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3}$ (with $c=1$) (Equation $1$)



where $a=a(t)$ is the scale factor, $\dot{a}$ its derivative, $G$ the gravitational constant, $\rho$ the matter density, $\frac{k}{a^2}$ the spatial curvature (a parameter that describes the metric of the Universe), and $\Lambda$ the cosmological constant (an integration constant added by Einstein).
It could be useful to rewrite the equation as:



$H^2 = \frac{8 \pi G}{3}(\rho + \rho_{\Lambda}) - \frac{k}{a^2}$



where $\rho_{\Lambda} = \frac{\Lambda}{8 \pi G}$ is the "density of cosmological constant".



We can also expand the matter density as $\rho = \rho_{matter} + \rho_{radiation}$.



So we have a "total" density $\rho_{tot} = \rho_{matter} + \rho_{radiation} + \rho_{\Lambda}$. The destiny of the Universe depends on this amount.



In case of $\rho_{tot} > \rho_{crit}$, or equivalently a closed Universe ($k=+1$), the equation $(1)$ becomes:



$\dot a^2 = \frac{8 \pi G}{3}\rho a^2 - 1$



This points out that the scale factor must have an upper limit $a_{max}$ ($\dot a^2$ must be positive).
This in turn means that the second derivative $\ddot a$ of the scale factor must be negative when approaching $a_{max}$; that is, the scale-factor function inverts its behaviour:
[Figure: the scale factor $a(t)$ of a closed Universe grows to $a_{max}$, then recollapses]



See here and here if you want to go deeper.
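To make the turnaround concrete, here is a minimal numerical sketch (my own illustration, in arbitrary units) of the closed, matter-only case, where equation $(1)$ reduces to $\dot a^2 = C/a - 1$ for a constant $C$:

```python
# Minimal sketch (arbitrary units): integrate the closed, matter-only
# Friedmann equation, adot^2 = C/a - 1, with a semi-implicit Euler
# scheme. Differentiating gives addot = -C/(2 a^2): the expansion
# decelerates, stalls at a_max = C, then reverses into a Big Crunch.
import math

C, dt = 2.0, 1e-5
a = 0.1
adot = math.sqrt(C / a - 1)          # start in the expanding phase
a_max = a
while a > 0.05:                      # run until recollapse
    adot += -C / (2 * a * a) * dt    # update velocity, then position
    a += adot * dt
    a_max = max(a_max, a)
print(abs(a_max - C) < 0.05)         # True: turnaround near a_max = C
```

The loop exits only because $a$ comes back down through its starting value: a closed, matter-dominated Universe expands, halts, and recollapses.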



@Bardathehobo This figure shows what I mean when I say that a currently accelerating Universe can still crunch. This is because we are basically ignorant about the nature of dark energy.

Thursday, 17 October 2013

planet - Changes to Earth's orbit

Gravity assists such as this are a form of elastic collision. There's a bit of number crunching here (hopefully no mistakes!), so you'll want to be familiar with the basics of momentum, kinetic energy, and the conservation thereof.




Question: If Ceres (the largest known asteroid and nearly 500 km in diameter) used Earth to perform a gravity assist to increase its own velocity, by how much would this slow the Earth down, and how much larger would Earth's orbit become?




The orbital speed of Earth around the sun is $U = 29.8~\mathrm{km~s}^{-1}$. So at a mass of $$M = 5.97\times 10^{24}~\mathrm{kg},$$



it has a kinetic energy of



$$K = 2.65\times 10^{33}~\mathrm{J}$$ and momentum $$P = 1.78\times 10^{29}~\mathrm{kg~m~s^{-1}}.$$



So let's say Ceres is performing a gravitational slingshot as in the simple diagram below. Ceres has a mass $m = 9.47 \times 10^{20}~\mathrm{kg}$. It approaches Earth at velocity $v$, and after the slingshot its final velocity is approximately $2U+v$ (exact in the limit of a negligible-mass object).



[Diagram: Ceres approaching Earth head-on for a slingshot]



The total momentum of the system must be conserved. Ceres has changed direction and thus gained a significant amount of momentum in the leftwards direction: the same momentum that Earth must then lose. Kinetic energy is also conserved. So, we have a system of equations, where the subscripts i and f denote initial and final values. $M$ and $U$ are the mass and velocity of Earth; $m$ and $v$ are those of Ceres.



$$MU_i^2 + mv_i^2 = MU_f^2 + mv_f^2$$



which says that the sum of the initial kinetic energies of the two objects must equal the sum of the final kinetic energy. We also have conservation of momentum:



$$MU_i + m\vec{v}_i = MU_f + m\vec{v}_f $$



Solving these equations, the solution is



$$v_f = \frac{(1-m/M)v_i + 2U_i}{1+m/M} $$



If Ceres approached Earth at $v_i = 30~\mathrm{km~s}^{-1}$, I get a solution of $v_f = 89.6~\mathrm{km~s}^{-1}$ - even for such a massive object, the $v_f \approx 2U+v$ approximation is extremely good. This means that Ceres' velocity has nearly been tripled by the gravity assist.
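As a sanity check on that number, the standard one-dimensional elastic-collision result (note the $1+m/M$ in the denominator) can be evaluated directly. A sketch, with velocities in km/s and the encounter treated as head-on:

```python
# Sanity check (my own sketch) of the slingshot numbers above, using
# the standard one-dimensional elastic-collision formula
#   v_f = ((1 - m/M) v_i + 2 U_i) / (1 + m/M).
M = 5.97e24    # Earth mass, kg
m = 9.47e20    # Ceres mass, kg
U_i = 29.8     # Earth's orbital speed, km/s
v_i = 30.0     # Ceres' approach speed, km/s

v_f = ((1 - m / M) * v_i + 2 * U_i) / (1 + m / M)
print(round(v_f, 1))   # 89.6 -- matching v_f = 2U + v for m << M
```

Because $m/M \approx 1.6 \times 10^{-4}$, the exact result and the $2U+v$ shortcut agree to better than a hundredth of a km/s.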



So, the final momentum of Earth is



$$MU_f = MU_i - mv_i - mv_f = 1.78 \times 10^{29}~ \mathrm{kg~m~s^{-1}} $$




In fact, Earth's linear momentum will only decrease by $mv_i + mv_f =
1.13 \times 10^{26} ~\mathrm{kg~m~s^{-1}}$. From this change in momentum and Earth's mass, we find its orbital velocity decreases by about
$19~\mathrm{m~s}^{-1}$.



Approximating a circular orbit (using $r=GM_{\mathrm{sun}} / v^2$), Earth's
orbit widens by roughly $190{,}000$ km. Sounds like a lot, but bear in mind
that's $190{,}000$ km out of 150 million!




Ceres is many orders of magnitude more massive than any spacecraft that we could launch. So we could never practically use spacecraft to change our orbit significantly, and even an enormous near-miss asteroid would be of little consequence. But that hasn't stopped some from trying!

Wednesday, 16 October 2013

st.statistics - measure of quality of curve fit

If you use the least squares fit, the second case may have a better conditioned matrix, but this measure may be hard to compute in practice. Still, it is going to be something along these lines, because the story is not about the second curve being a better approximation than the first but about its being "more unique", so to say, and that is exactly what the condition number measures for solutions of linear systems.



Of course, since we are talking about approximate solutions, not exact ones, we may want to modify the notion of the condition number a bit. One possible quantity that seems relevant to the "approximate uniqueness" is the following: the least squares problem is just about minimization of a quadratic form $Q(x)$, and if $y$ is the solution, then $Q(x)=Q(y)+(A(x-y),(x-y))$. Now, we want to see what the penalty is for going away from the optimal vector. So, both $\frac{\operatorname{Tr} A}{Q(y)}$ and $\frac{\mu(A)}{Q(y)}$, where $\mu(A)$ is the least eigenvalue of $A$, seem to make sense as measures of such penalty. The higher this number, the more unique the approximation. The reason for the denominator is that I wanted to measure the sizes of deviations that change the minimum by a certain percentage. You may want to use the absolute error instead, or something else. It may be a good idea to figure out what invariance properties you want from your measure first. For instance, should it be invariant with respect to stretchings, or do you think that two close points determine a line less precisely than two distant points?
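As a toy illustration of the idea, here is my own sketch using the least eigenvalue of the normal-equations matrix $A = X^T X$ as the "uniqueness" measure for a straight-line fit (the data points are made up):

```python
# Sketch (my own illustration, not from the answer): compare the
# conditioning of two least-squares line fits. Abscissae spread along
# x pin down a line far better than abscissae bunched together.
import numpy as np

def penalty_measure(x):
    """Least eigenvalue of A = X^T X for the fit y ~ a + b*x,
    used here as a rough 'uniqueness' measure of the fit."""
    X = np.column_stack([np.ones_like(x), x])
    A = X.T @ X
    return np.linalg.eigvalsh(A)[0]   # smallest eigenvalue

close = np.array([1.0, 1.1, 1.2])    # nearly coincident abscissae
spread = np.array([0.0, 5.0, 10.0])  # well-separated abscissae
print(penalty_measure(close) < penalty_measure(spread))  # True
```

The well-spread design has a far larger least eigenvalue, meaning moving away from the optimal line raises the residual quickly: the fit is "more unique" in exactly the sense above.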

How to build a powerful home made telescope

You can buy remarkably cheap reflecting telescopes these days on a mount called the Dobsonian mount. They don't (usually) have fancy GoTo automated pointing and tracking, but give fantastic bang for buck as the majority of their cost is invested into the biggest primary mirror possible.



Building a telescope would be a great hobby and learning experience, and there are many books which will show you how. Just don't expect to save a significant sum of money over a commercially built scope - it could even be more expensive to build your own.



Fancy GoTo tracking mounts are not usually worth the expense and hassle (in setting them up) for a first-timer, though the Orion Intelliscope series is an intriguing compromise. For an extra $100 or so, the telescope has no guiding motors, but it tracks where you point it, and a hand controller tells you how much further to push until you reach your target.



My recommendation? Get the biggest Dobsonian mount scope you can afford. With a mirror greater than 10" things begin to get large and unwieldy, so 10 inches (mirror diameter, not tube length) is my recommended maximum size for a first telescope. Just go for the biggest aperture you can buy and easily carry. Don't invest much in GoTo systems, as although they can be quite useful they are also a pain to set up if you don't have the scope permanently mounted.

Tuesday, 15 October 2013

linear algebra - Characterizing invertible matrices with {0,1} entries

You are unlikely to find a characterization which does not result from simple facts in linear algebra. I am unaware of any characterizations which make interesting statements about graphs.



You may want to choose the ring over which the matrices are defined. For example, the same matrix may be invertible over the reals, but if it has even determinant, then it is not invertible over the field of two elements. You can say that, given a matrix A, the parallelepiped associated with the rows (or columns) of A is nontrivial (has nonzero volume in R^n) iff the matrix is invertible over the reals, but this is a simple consequence of a geometric interpretation of determinant; it doesn't give anything new.
Also, the eigenvalues of the adjacency matrix of a directed graph are all nonzero precisely when said matrix is invertible over the reals; big hairy tautological deal, as I am just saying a matrix is invertible when its determinant is nonzero.
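The even-determinant remark is easy to see in a small example (a sketch; the matrix is made up):

```python
# Sketch: a {0,1} matrix can be invertible over the reals yet singular
# over GF(2) -- the determinant test depends on the ring.
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
d = round(np.linalg.det(A))
print(d)        # 2: invertible over R ...
print(d % 2)    # 0: ... but singular over the field of two elements
```

This is the circulant matrix of a directed 3-cycle plus the identity, so the same object read as a graph is perfectly ordinary; only the choice of ring decides invertibility.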



Consider the ring above fixed, and look at the {0,1}-matrices over that ring which are invertible. This set includes some lower triangular matrices, some upper triangular matrices, and some "comb matrices", where you take an invertible matrix and alternately add an extra row and column, picking one of them to be mostly zeros and the other mostly 1's, while making the diagonal all 1's. In addition to these patterns, you have some block matrices, incidence geometries, certain combinatorial designs, and so on, all belonging to the class of invertible {0,1}-matrices, and looking pretty woolly as a set. The attendant directed graphs will be a similarly woolly-looking set of graphs.



Given the above, it may be possible to describe the class of graphs in an interesting way. For example, if you build the matrices by augmentation, consider the corresponding operation for adding a vertex and certain edges to a graph. You may be able to prove facts about the set of graphs so constructed, especially as a set of representatives of isomorphism classes of graphs. I just don't think the result will look pretty, appealing, or useful without a major shift in perspective.



Gerhard "Ask Me About System Design" Paseman, 2010.04.06

positional astronomy - Real-time position of a distant celestial body

A fine place for reliable information on the location of the vast majority of cataloged stars is the Simbad Astronomical Database. For example, here is the data for Betelgeuse. As you will see, the first line after "other object types" is the position given in the ICRS coordinate system (the first set of numbers is Right Ascension, in hours minutes seconds, and the second set of numbers is Declination, in degrees minutes seconds). For all but the highest level research-grade astrometry or proper-motion purposes, this position will be more than sufficient.



Now, given you have the 'global' position of the star, you can calculate its relative position as seen from wherever you are on Earth. Doing so is a function of your latitude, longitude and current time. There are online calculators that can do this such as this one, or you can make a program to crunch through the math yourself (it's not exactly trivial, but if there are lots of stars you want to do it for, or you want to do it on the fly, this is a better bet). If you really want to be clever about it and you have lots of stars, you only have to do the calculation once, since all the stars are essentially fixed relative to one another. When all is said and done, the most intuitive local coordinate system to use for your projecting would be altitude & azimuth.
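If you do decide to crunch the math yourself, the core of the conversion is only a few lines. This is my own sketch of the standard spherical-astronomy formulas, and it assumes you have already computed the local sidereal time (the genuinely fiddly part):

```python
# Sketch of the RA/Dec -> altitude/azimuth conversion mentioned above,
# assuming the local sidereal time (LST) is already known. All angles
# in degrees; azimuth is measured from north through east.
import math

def radec_to_altaz(ra, dec, lat, lst):
    ha = math.radians((lst - ra) % 360.0)        # hour angle
    dec, lat = math.radians(dec), math.radians(lat)
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(sin_alt)
    cos_az = ((math.sin(dec) - math.sin(alt) * math.sin(lat))
              / (math.cos(alt) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))  # clamp rounding error
    if math.sin(ha) > 0:                         # west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# A star on the meridian (LST = RA) south of a northern observer:
alt, az = radec_to_altaz(ra=0.0, dec=20.0, lat=40.0, lst=0.0)
print(round(alt, 1), round(az, 1))   # 70.0 180.0
```

For a star crossing the meridian, the altitude is just $90° - |lat - dec|$ and the azimuth is due south, which the sketch reproduces.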



If you are concerned about also including the relative motions of stars over time, I would say don't be. Nearly all stars have proper motions of less than 1 arcsecond a year, which means it would be over half a century before most stars have moved in even the most remotely perceptible way to the naked eye (the human eye has an angular resolution of roughly one arcminute). Other apparent motions, such as parallax, are also negligibly small for objects outside of our solar system (again, unless you are doing research-grade work).

Monday, 14 October 2013

nt.number theory - What is interesting/useful about big Witt Vectors?


I am particularly interested in knowing their original motivation and applications.




Kummer theory says that every degree-$n$ cyclic extension $L|k$ of any field $k$ containing a primitive $n$-th root $\zeta$ of $1$ is of the form $L=k(\root n \of D)$ for some order-$n$ cyclic subgroup $D\subset k^\times/k^{\times n}$, and conversely.



(Something can be said even when $\zeta\notin k$ but $n$ is invertible in $k$. Look up a certain exercise in Schoof's book on Catalan's Conjecture.)



This leaves out degree-$p$ cyclic extensions $L|k$ of a characteristic-$p$ field. Artin-Schreier proved that $L=k(\wp^{-1}(D))$ for some ${\mathbb F}_p$-line $D\subset k/\wp(k)$, where $\wp:k\to k$ is the endomorphism $x\mapsto x^p-x$ of the additive group $k$, and conversely.



What about degree-$p^m$ cyclic extensions $L|k$ of a characteristic-$p$ field? Many complicated constructions for particular cases were given in the 1930s (by people such as Albert) before Witt introduced the ring $W_m(k)$ of $p$-typical Witt vectors of length $m$ and the endomorphism $\wp:W_m(k)\to W_m(k)$ of the additive group, and proved that $L=k(\wp^{-1}(D))$ for some order-$p^m$ cyclic subgroup $D\subset W_m(k)/\wp(W_m(k))$, and conversely.



There were many other papers in the same volume of Crelle 176 (1937) applying Witt vectors to other outstanding problems. My favourite is Hasse's characterisation of those $\alpha$ in a finite extension $K$ of ${\mathbb Q}_p$ containing a primitive $p^m$-th root of $1$ for which the extension $K(\root p^m \of \alpha)|K$ is unramified ($p^m$-primary numbers; see for example the book by Fesenko and Vostokov, freely available on Fesenko's homepage).



See also Harder, Wittvektoren, Jahresber. Deutsch. Math.-Verein. 99 (1997), no. 1, 18--48.



An English translation of this paper has appeared in Ernst Witt, Gesammelte Abhandlungen, Springer, Berlin, 1996.

Why did NASA intentionally crash the Lunar Atmosphere and Dust Environment Explorer (LADEE) on the moon?

LADEE was going to run out of fuel, and NASA wanted to collect (and succeeded in collecting) data on the Moon's atmosphere and levitated dust.
(LADEE stands for 'Lunar Atmosphere and Dust Environment Explorer'.)



Fuel is needed to compensate for orbital decay caused by the Moon's non-symmetric gravitational field.



Real-time communication was possible thanks to fast laser data transmission (the Lunar Laser Communication Demonstration, or LLCD).



More details here and here.

Sunday, 13 October 2013

homotopy theory - What are the fibrant objects in the injective model structure?

I think you can find an answer in my question about global fibrations of simplicial sheaves: global fibrations of simplicial sheaves .



There, Andreas Holmstrom pointed me to Voevodsky's preprint "Homotopy theory of simplicial presheaves in completely decomposable topologies", http://front.math.ucdavis.edu/0805.4578 , where I discovered Lemma 4.1, which I think answers your question.



Although it is not proved there, Voevodsky says it is straightforward. I managed to write a proof of it myself, at least under the hypotheses of Brown-Gersten's "Algebraic K-theory as generalized sheaf cohomology", in LNM 341/1973, that is, with sheaves defined on a Noetherian space of finite Krull dimension.



In this situation at least, fibrations are global fibrations. That is, a morphism of sheaves $p: E \longrightarrow B$ is a global fibration if and only if for every inclusion of open sets $U\subset V$ the natural map $E(V) \longrightarrow B(V) \times_{B(U)} E(U)$ is a (Kan) fibration of simplicial sets.



As a corollary, if you take $B = *$, this condition tells you that fibrant objects are those for which each restriction map $E(V) \longrightarrow E(U)$ is a Kan fibration. In particular, put $U=\emptyset$ and this implies that each $E(V)$ must be a Kan complex.



All this under Brown-Gersten's hypotheses, but Voevodsky doesn't seem to need them, so maybe it is also true in your situation.

open problem - The importance of Poincare Conjecture or SPC4?

Mine is no professional answer, and I certainly don't believe it is new or my own, but I'll give it a try.



In my opinion, algebraic topology tries to characterize nice topological spaces (say CW complexes) modulo homotopy equivalence (which is the reasonable equivalence given the fact that the invariants used are usually invariants under homotopy equivalence). This characterization has one important theorem (for me): Whitehead's theorem.



When studying manifold topology, one would like to get a classification modulo homeomorphism, so the above study is not enough. This gives great importance to theorems such as the classification of spheres by homotopy type.



I think it is interesting that, from this point of view, the analogous question had been solved for tori (which are much simpler from the point of view of higher homotopy groups) by Hsiang and Wall (and maybe others).



In brief, I believe that the Poincaré Conjecture is one of the central and most natural questions one can pose in manifold topology (or geometric topology?).

career - Advice on doing mathematical research

Terry Tao has some helpful ideas.



I can't say I've really mastered the practice of choosing problems. I guess if there's an area of math that you're really familiar with, questions will just start coming to you, but I can't tell how much of that ability comes from some osmotic process from the examples of peers.



When I'm stuck, I try to write down where I am in a lot of detail, then I take a break. If I'm lucky, I know what sort of knowledge might be able to crack my problem, and if I'm even luckier, I know who has that knowledge.

Terminology: Algebras where long strings of products are 0?

It seems to me that Jordan and Dotsenko are giving different answers from one another, and I agree with Dotsenko's. The condition Thurston has stated is the definition of $A_{+}$ being nilpotent. "Locally nilpotent" is a weaker condition. There are many examples of nonunital rings $A_{+}$ that are locally nilpotent (meaning for any finite set $a_1, \ldots, a_k \in A_{+}$ there exists $n \in \mathbb{N}$ such that $a_{i_1} \cdots a_{i_n} = 0$ provided every $i_j \in \lbrace 1, \ldots, k\rbrace$) but not nilpotent (meaning there exists $n \in \mathbb{N}$ such that $a_1 \cdots a_n = 0$ provided every $a_i \in A_{+}$, which is the condition Thurston stated). Even better: two nice (and quite different) examples of a locally nilpotent prime nonunital ring can be found in E. I. Zelmanov, "An example of a finitely generated primitive ring," Sibirsk. Mat. Zh. 20 (1979), no. 2, 423, 461, and J. Ram, "On the semisimplicity of skew polynomial rings," Proc. Amer. Math. Soc. 90 (1984), no. 3, 347-351. (Of course, if one merely wants an example where $A_{+}$ is locally nilpotent but not nilpotent, and so does not satisfy Thurston's condition, one could take something like $A_{+} = \bigoplus_{i=2}^{\infty} 2\mathbb{Z}/2^i\mathbb{Z}$.)



N.B. Mathematical Reviews incorrectly lists the title of Zelmanov’s paper as “An example of a finitely generated primary ring.”  It’s listed correctly in Zentralblatt.  Possibly the problem lies in the translation from the original Russian; the condition primitive in the English translation of the paper (Siberian Math. J. 20 (1979), no. 2, 303–304) is what we would today call prime.

Saturday, 12 October 2013

nt.number theory - addition of definable numbers decidable?

Yes and no. The sum of two computable numbers is always computable, but there is no uniform way to find a decimal expansion for the sum.



Suppose you do have an algorithm for adding any two numbers. Consider adding the numbers .2222... and .7777... Your algorithm should output the first digit of the result after reading only finitely many digits of each number. That first digit after the decimal point could be 0 or 9, depending on whether the algorithm thinks the sum is 1.0000... or .9999... Let's say the algorithm never read past the n-th digit of either number. Then the algorithm will return the same answer on any input which agrees with .2222... and .7777... up to the n-th digit. But it is easy to change the (n+1)-th digit of each number so that the answer is incorrect for the new numbers. (If the answer was 0, change both (n+1)-th digits to 0, making the true sum less than 1; if the answer was 9, change both (n+1)-th digits to 9, making the true sum greater than 1.)
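The obstruction can be made concrete with exact rational arithmetic (a sketch; `Fraction` just plays the role of the partial sums):

```python
# Sketch: after reading n digits of .222... and .777..., the true sum
# is only known to lie in an interval that straddles 1, so no first
# digit (9 vs 0, i.e. 0.999... vs 1.000...) can safely be committed.
from fractions import Fraction

def partial(digit, n):
    """Exact value of 0.ddd...d with `digit` repeated n times."""
    return Fraction(digit, 9) * (1 - Fraction(1, 10 ** n))

for n in (1, 5, 50):
    lo = partial(2, n) + partial(7, n)       # remaining digits all 0
    hi = lo + 2 * Fraction(1, 10 ** n)       # remaining digits all 9
    print(lo < 1 < hi)                       # True for every n
```

However many digits are read, the interval of possible sums contains 1 in its interior, so both the "0" branch and the "9" branch remain live.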



On the other hand, this kind of problem only arises when the sum could be a number with two distinct decimal representations, such as 1.000... = .999... If you know in advance that the result is not of this form, then your algorithm can simply wait until it has read enough digits to decide between 0 and 9. This will always work, provided that you have that additional information, so you know you won't wait infinitely long. When the answer does have two decimal representations, it is trivial to come up with a program that writes that number. So you can always come up with a machine that will output the desired sum, but there is no computable way to decide between the two cases.



In your question, you give a little more than just the decimal presentation; you also give the programs that generate each such presentation. However, the problem persists even with this extra information. Take an inseparable pair of computably enumerable sets V and W. For each n, let An and Bn be the machines that start to output .222... and .777... until the stage s at which n enters W or V. If n enters V, then machine An starts outputting 3's instead of 2's after the s-th digit while machine Bn keeps outputting 7's as usual; if n enters W, then machine Bn starts outputting 6's instead of 7's after the s-th digit, but An keeps outputting 2's as usual. If there were a uniform way to compute the sum of the outputs of An and Bn, then the first digit of the sum could be used to separate V and W.



For this reason, it is preferable to use a different representation of computable numbers. Most commonly used are rapidly convergent Cauchy sequences of rationals. There are various ways to formalize these. A common one is to use extended binary representations where the bits -1,0,1 are allowed. Another (very uncommon) one is to use extended decimal representations where the digits 0,1,2,...,9, and 10 are allowed. This fixes the above problem since you can't go wrong by returning the first digit 10 as the answer to the sum of .2222... and .7777... This technique is known as "using nails" in practical implementations of high-precision arithmetic.

Thursday, 10 October 2013

arithmetic geometry - Two conjectures by Gabber on Brauer and Picard groups

In a paper I need to refer to two conjectures by Gabber



(see Conjectures 2 and 3, page 1975)



http://www.mfo.de/programme/schedule/2004/32/OWR_2004_37.pdf



1) Let $R$ be a strictly henselian complete intersection noetherian
local ring of dimension at least 4. Then $Br'(U_R) = 0$ (the cohomological Brauer group of the punctured spectrum is $0$).



2) Let $R$ be a complete intersection noetherian local ring of dimension
3. Then $Pic(U_R)$ is torsion-free.



Does anyone know of any new developments on these conjectures beyond the Oberwolfach report above? I tried MathSciNet but could not find anything. Maybe someone in the arithmetic geometry community happens to know some news on these? Thanks a bunch.

ct.category theory - groups as categories and their natural transformations

The comments thread is getting a bit long, so here's an answer. The category $C(G)$ that David associates to a group $G$ (by his second recipe) has the elements of $G$ as its objects, and exactly one morphism between any given pair of objects. It's what category theorists call an indiscrete or codiscrete category, and graph theorists call a complete graph or clique. You can form the indiscrete category on any set: it doesn't need a group structure.



A functor from one indiscrete category to another is simply a function between their sets of underlying objects. In particular, given groups $G$ and $H$, a functor from $C(G)$ to $C(H)$ is simply a function from $G$ to $H$. That's any function (map of sets) whatsoever -- it completely ignores the group structure.



Given indiscrete categories $C$ and $D$ and functors $P, Q: C to D$, there is always exactly one natural transformation from $P$ to $Q$. In particular, given groups $G$ and $H$ and functors $P, Q: C(G) to C(H)$, there is always exactly one natural transformation from $P$ to $Q$.

Wednesday, 9 October 2013

Enumeration of graphs arising in invariant theory

I've been working on a talk based on some stuff in Olver's "Classical Invariant Theory" book and have been wondering about a related graph enumeration problem.



Start with a triple $(n,v,e)$ of natural numbers. Take all $\mathbb{Q}$-linear combinations of directed graphs (allowing multiple edges, but no loops) with $v$ vertices and $e$ edges, such that each vertex has at most $n$ edges going to it or coming from it. Now, take three relations (images scanned from Olver):



Rule 1: [scanned image]



Rule 2: [scanned image]



Rule 3: [scanned image]



Here the function $v$ with a vertex as subscript next to a graph means that graph multiplied by $n$ minus the number of edges attached to that vertex. (So, for instance, an isolated vertex gets multiplied by $n$.)



Denote the space after quotienting by these relations by $V_{n,v,e}$. And so, in final form, my question:




What is $dim V_{n,v,e}$? Or at least, can we find relatively effective upper bounds?




EDIT: Some clarifications. The colorings on the vertices are just to mark them in the pictures to keep track of where everything goes, the graphs are not marked themselves. Additionally, as Rule 2 is slightly unclear from the scan, the $v$ function is always the vertex not attached to the arrow in the configuration.