Wednesday, 30 September 2015

nt.number theory - Fields of definition for p-adic overconvergent modular eigenforms

If we consider the action of the $U_p$ operator on overconvergent $p$-adic modular forms, then we can get some information about the field over which the eigenforms are defined by looking at the slopes. For instance, my paper in Math Research Letters (MR2106238) proves that the slopes of $U_2$ acting on 2-adic overconvergent modular forms of level 4 with primitive Dirichlet character are distinct, so the field of definition has to be $\mathbf{Q}_2$. However, there are cases when the slopes fail to be distinct; for instance, Emerton's thesis proves that the two lowest slopes of $T_2$ acting on level 1 forms of weight congruent to 14 modulo 16 are 6 and 6.



For classical modular forms of level 1, we have Maeda's Conjecture which says that the field of definition is essentially as large as it can be; the Hecke polynomial is irreducible with Galois group $S_n$ where $n$ is the dimension. However, there is no reason that this should be true for overconvergent modular forms, and in fact it isn't. Discussions with Robert Coleman led me to the concrete example of 2-adic overconvergent modular forms of tame level 1 and weight 142, where there are two eigenforms of slope 6 which are both defined over the ground field $\mathbf{Q}_2$.



The question is, what should one expect here? Can one tell any more about the field of definition from the slopes than the absolute minimum?

ct.category theory - What would be an "arrows-only" definition of a product in a category?

Using the notation of nlab, the following is a fibered product: if $x, y$ are arrows with $t(x) = t(y)$, then their fiber product is the pair of arrows $u, v$ with $t(u) = s(x)$, $t(v) = s(y)$, $s(u) = s(v)$, and $x \circ u = y \circ v$, such that for any pair of arrows $a, b$ with $s(a) = s(b)$, $t(a) = t(u)\,(= s(x))$ and $t(b) = t(v)\,(= s(y))$ and such that $x \circ a = y \circ b$, there is a unique arrow $c$ having $s(c) = s(a) = s(b)$, $t(c) = s(u) = s(v)$, and $a = u \circ c$, $b = v \circ c$.



To define a plain product, suppose the category has a final object (that is, of course, that there exists an arrow $f$ such that for any arrow $x$ there exists a unique arrow $x'$ with $s(x') = s(x)$ and $t(x') = f$) and replace $x$ and $y$ by $x'$ and $y'$ in the above. If it doesn't have one, of course you can just add one.



Can't get any farther away from identity arrows than that; you need to be able to specify sources and targets to define composition.

hochschild homology - Deligne's conjecture (the little discs operad one)

As a starting point, consider this brief explanation by John Baez about actions of the little disks operad on loop spaces:



http://math.ucr.edu/home/baez/week220.html



Now consider the figure on page 35 of these slides -- http://canyon23.net/math/talks/GTC%20200905b.pdf



Up to homotopy, the little disks space is equivalent to the "big bigons" space. The outer bigon is almost entirely filled by the inner bigons (so they are as big as they can be). We can think of this as describing a sequence of operations (one for each inner bigon) which transforms the lower half of the outer bigon into the upper half of the outer bigon. i.e. cut out the lower boundary of a lowest inner bigon and replace it with the upper half of that inner bigon, and so on for each inner bigon.



We think of Hochschild cochains as a derived Hom from the regular bimodule to itself. If each inner bigon is labeled by a Hochschild cochain, then composing these elements of various Hom spaces, in a manner tracking the topological operations in the previous paragraph, gives a Hochschild cochain associated to the outer bigon.



So far we have described an action of 0-chains (single points) of the big bigons operad on Hochschild cochains. If we take two points in the operad connected by an arc, then the maps associated to the endpoints are not equal, but they are chain homotopic via a homotopy determined by the arc. And so on for $k$-chains in the operad.



I'm not sure how hard it would be to turn the above ideas into a proof for the usual Hochschild cochain complex, but for the homotopy equivalent blob complex one can give a proof along these lines. This proof (for the blob complex) generalizes to higher dimensions, where we replace the boundary of a bigon (two intervals) with any two n-manifolds glued along their boundary. Actions of the little n-cubes operad come from the special case where all the n-manifolds are n-balls.

examples - Ternary relations that are not binary functions

This may be too specific for what you're going for, but this question immediately brought to mind triality for $D_4$. Here's triality in a nutshell:



View $\mathbb{R}^8 = \mathbb{O}$ as the inner product space of all Cayley numbers (where $\{1,i,j,k,l,li,lj,lk\}$ forms an orthonormal basis). The collection of all orientation-preserving isometries of $\mathbb{O}$ is simply $SO(8)$ (whose Lie algebra is $D_4$).



For $A$, $B$, and $C$ in $SO(8)$ and $x$ and $y$ in $\mathbb{O}$, consider the equation $$A(x)B(y) = C(xy)$$



Triality is the following claim: Given $A$, there exists a $B$ and $C$ making this equation hold for all Cayley numbers. The choice of the pair $(B,C)$ is ALMOST unique - the only ambiguity is in replacing $(B,C)$ with $(-B,-C)$. Likewise, given $B$ or $C$, the other two matrices exist and are unique up to simultaneously changing the sign of both.



This gives rise to a natural relation $R \subseteq SO(8) \times SO(8) \times SO(8)$ with $(A,B,C) \in R$ iff $A(x)B(y) = C(xy)$ for all Cayley numbers $x$ and $y$. The ambiguity in sign shows that this relation is not the graph of a binary function.
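The Cayley-number product appearing in $A(x)B(y) = C(xy)$ is easy to experiment with. Below is a minimal Python sketch (the function names are mine, not from the answer) of octonion multiplication via Cayley-Dickson doubling; the point to notice is that the product is norm-multiplicative, $|xy| = |x||y|$, which is why multiplications by unit Cayley numbers give isometries, even though associativity fails.

```python
def conj(x):
    # Cayley-Dickson conjugate: negate every coordinate but the real one
    return [x[0]] + [-c for c in x[1:]]

def mult(x, y):
    # Cayley-Dickson product on coordinate lists of length 1, 2, 4, 8
    # (reals, complexes, quaternions, octonions):
    #   (a, b)(c, d) = (ac - d*b, da + bc*)   with * the conjugate
    n = len(x)
    if n == 1:
        return [x[0] * y[0]]
    h = n // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    left = [p - q for p, q in zip(mult(a, c), mult(conj(d), b))]
    right = [p + q for p, q in zip(mult(d, a), mult(b, conj(c)))]
    return left + right

def norm(x):
    return sum(c * c for c in x) ** 0.5

def e(i):
    # basis Cayley numbers e0 = 1, e1, ..., e7
    return [1.0 if j == i else 0.0 for j in range(8)]
```

With this basis ordering one finds, for instance, $e_1 e_2 = e_3$ but $(e_1 e_2)e_4 \neq e_1(e_2 e_4)$, while $|xy| = |x||y|$ holds to rounding error for arbitrary $x, y$.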

Tuesday, 29 September 2015

gn.general topology - Hausdorff Derived Series

The point of this section of Stroppel's book is to show, ultimately, that nothing new happens. Stroppel shows that each term in the Hausdorff derived series is nothing other than the closure of the same term in the usual derived series. A topological group is Hausdorff-solvable if and only if it is solvable, and the solvable height equals the Hausdorff-solvable height. In a sense, you can't construct interesting examples. :-)



One thing that you can do is make an example of a topological group whose commutator subgroup isn't closed. I cheated with Google to find this, but here goes anyway. There exists a finite group $G_n$ which is 2-step nilpotent and such that the commutator subgroup requires a product of $n$ commutators. Namely, take a central extension of a $2n$-dimensional vector space $V$ over an odd finite field by its exterior square $\Lambda^2 V$, such that the commutator of $a,b \in V$ is $a \wedge b \in \Lambda^2 V$. The point is that you need $k$ commutators to reach a tensor in $\Lambda^2 V$ of rank $k$. Now let $G$ be the product of all $G_n$ in the product topology. The algebraic commutator subgroup of $G$ isn't closed, because it does not include elements of the closed commutator subgroup whose commutator length in $G_n$ is unbounded as $n \to \infty$. Amazingly, this group $G$ is even compact.



The implication for a Galois algebraic field extension, say, is as follows. The Galois group $G$ of such a field extension is a topological group, in fact a profinite group. You might have wondered if the algebraic field extension is "solvable" in the group-theoretic sense, but without leading to solvability by radicals. Happily, it doesn't happen, because what you should do is replace the solvable series of $G$ by the closed solvable series.

Does there exist a general theory of "arithmetic complexity"/"arithmetic height"?

The other answers point you toward algebraic complexity theory, which is very fashionable today. I would like to point in a different direction, which is probably not what you are looking for, but which is in my opinion interesting. I hope this unusual angle is worthwhile and not just a misinterpretation of your question.




Say I'm given some finite precision complex number, which I'm told is algebraic over $\mathbb{Q}$. Is there some well defined notion of arithmetic complexity which can allow me to deduce exactly what algebraic number this finite precision number represents?




A finite precision number is a rational number, so it has a finite representation as a continued fraction, or as a decimal with finitely many digits. Say the last digit is $a_n$, which means that we know your number as $A = a_0 + a_1/10 + a_2/10^2 + \dots + a_n/10^n$. As you may see from the decimal representation, such a number really represents the whole interval $I = [A, A + 1/10^n]$. So, as you ask about algebraic numbers, it may represent every algebraic number within this interval.



The other way to look at it is through the Stern-Brocot tree, which corresponds to the continued fraction expansion $A = [b_0; b_1, b_2, \dots, b_k]$ for some $k$. The number $A$ is then the root of a whole subtree, and every branch of this subtree, finite or infinite, represents some number. Interestingly, every path in the subtree rooted at $A$ can be encoded by a sequence of turns such as $LLRRLLLLRLRLLR\dots$ (XX), where $R$ means right and $L$ means left, finite or infinite. There are interesting relations to the Minkowski question mark function and the Cantor function.
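For a rational input both representations are easy to compute; here is a small Python sketch (the function names are mine) of the continued fraction coefficients and the corresponding L/R turn sequence in the Stern-Brocot tree:

```python
from fractions import Fraction

def continued_fraction(x):
    """Coefficients [b0; b1, ..., bk] of the rational x."""
    x = Fraction(x)
    coeffs = []
    while True:
        b = x.numerator // x.denominator
        coeffs.append(b)
        x -= b
        if x == 0:
            return coeffs
        x = 1 / x

def stern_brocot_path(x):
    """Turn sequence (R = right, L = left) from the root 1/1 to a positive rational x."""
    x = Fraction(x)
    ln, ld, rn, rd = 0, 1, 1, 0          # enclosing bounds 0/1 and 1/0
    path = ""
    while True:
        mn, md = ln + rn, ld + rd        # mediant of the two bounds
        if x == Fraction(mn, md):
            return path
        if x < Fraction(mn, md):
            path, rn, rd = path + "L", mn, md
        else:
            path, ln, ld = path + "R", mn, md
```

For example, $3/7$ has continued fraction $[0; 2, 3]$ and path LLRR, illustrating the standard dictionary between $[b_0; b_1, \dots, b_k]$ and the run lengths of R's and L's (with the last run shortened by one).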



Then, if I understand correctly, it is possible to reformulate your question: you are asking which algebraic numbers are represented within a given interval, or are represented by certain sequences within a given family of sequences like (XX).



As you may see, a subtree of the Stern-Brocot tree is isomorphic to the whole tree, and it is obvious that within a given interval you may find infinitely many algebraic numbers. So in fact, if you see any pattern for algebraic numbers in a given interval (given with some precision), then you may reformulate it as a property of the whole set of algebraic numbers. And as to your question: within a given interval there is an infinite subset of algebraic numbers of any computational complexity, because within a subtree of the Stern-Brocot tree there occur sequences of any Kolmogorov complexity you may think of. By stating a finite precision, you in fact fix only the very beginning of a sequence in the tree, not the infinite possible rest.



An interesting question is whether there exist patterns, in the Stern-Brocot tree or in any other representation (decimal, hexadecimal, etc.), which give us insight into properties of numbers (there are some, as simple as divisibility by 2, and much more complicated ones). For example, is there any property in any representation (possibly an effective one, not straight from the definition of an algebraic number) which allows us to say that a given number is algebraic, or algebraic of a specified kind? As far as I know there is no known pattern in the continued fraction coefficients of a general algebraic number, although there is one for quadratic irrationals: their continued fraction expansions are eventually periodic. So your question in general has no known answer, but there are patterns in known representations of numbers which pick out certain classes of algebraic numbers, for example the quadratic ones. Maybe someday someone will find other patterns or representations which provide further possibilities...

Monday, 28 September 2015

Ramsey pairs of classes of graphs

Motivation



Call a set $H$ of vertices of a graph $G$ homogeneous if it is either independent or complete.
Every perfect graph of size $n$ has a homogeneous subset of size $\sqrt n$.



Noga Alon has shown that for every finite graph $G$ there is a finite graph $F$
of some size $n$ such that every induced subgraph of $F$ of size at least $\sqrt n$ contains an induced copy of $G$.



This implies the following: Let $G$ be any finite graph. Then there is a graph
$F=(V,E)$ such that whenever $E'$ is another set of edges on the vertex set $V$ such that
$(V,E')$ is perfect, then there is a set $H \subseteq V$ that is homogeneous with respect to $(V,E')$ such that the induced subgraph of $F$ on $H$ is isomorphic to $G$.



Let $C$ and $D$ be classes of finite graphs (closed under isomorphism). We say that $(C,D)$ is a Ramsey pair if the following holds: For every $G \in C$ there is $F \in C$ such that whenever $E'$ is another set of edges on the vertex set $V$ of $F$ such that $(V,E') \in D$,
then there is a set $H \subseteq V$ that is homogeneous with respect to $(V,E')$ such that the induced subgraph of $F$ on $H$ is isomorphic to $G$.



By the remark above, if $C$ is the class of all graphs and $D$ is the class of perfect graphs, then $(C,D)$ is a Ramsey pair. I know that the notion of a Ramsey triple,
where $G$ and $F$ come from different classes, is probably more natural, but the Ramsey
pairs did come up in some combinatorics related to set-theoretic forcing.
In some sense, $(C,D)$ being a Ramsey pair says that $C$ is substantially more complicated than $D$. In particular, if $C$ and $D$ are the same and nontrivial in the sense that they don't just consist of complete and empty graphs, then $(C,D)$ is not a Ramsey pair.



Here is the question




Can anyone come up with another nontrivial Ramsey pair?


textbook recommendation - Books about polynomials

McKee and Smyth, eds., Number Theory and Polynomials, being the proceedings of a workshop held at Bristol University, 3-7 April 2006.



Langevin and Waldschmidt, eds., Cinquante Ans de Polynomes - Fifty Years of Polynomials, proceedings of a conference held in Paris, 26-27 May 1988.

ag.algebraic geometry - What do you lose when passing to the motive?

Here the word "motive" will stand for Grothendieck pure motives modulo rational equivalence. Your point 1. is also true for Grassmann bundles. More precisely the following result holds :



Let $E \longrightarrow X$ be a vector bundle of rank $n$, $k \leq n$ and $Gr_k(E) \longrightarrow X$ the associated Grassmann bundle. Then $M(Gr_k(E)) \simeq \coprod_{\lambda} M(X)[k(n-k)-|\lambda|]$, where $\lambda$ runs through all partitions $\lambda=(\lambda_1,\dots,\lambda_k)$ satisfying $n-k \geq \lambda_1 \geq \dots \geq \lambda_k \geq 0$.
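As a sanity check on the shape of this decomposition: the index set consists of partitions fitting in a $k \times (n-k)$ box, and there are $\binom{n}{k}$ of them, matching the rank of the Chow ring of the Grassmannian fiber. A quick enumeration (a sketch of mine, not from the text):

```python
from math import comb

def box_partitions(k, m):
    """All partitions (l1 >= ... >= lk >= 0) with l1 <= m, i.e. fitting in a k x m box."""
    if k == 0:
        return [[]]
    return [[first] + rest
            for first in range(m, -1, -1)
            for rest in box_partitions(k - 1, first)]

# For Gr_k of a rank-n bundle the box is k x (n - k); the number of
# summands M(X)[...] in the decomposition is then C(n, k).
n, k = 5, 2
assert len(box_partitions(k, n - k)) == comb(n, k)
```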



You can prove it in the same fashion as the projective bundle theorem, as an application of a Yoneda-type lemma for Chow groups.



We now know many things about the motives of quadrics. For example, if a quadratic form $q$ is isotropic, the motive of the associated quadric $Q$ has a decomposition $\mathbb{Z} \oplus M(Q_1) \oplus \mathbb{Z}[\dim(Q)]$, where $Q_1$ is a quadric of dimension $\dim(Q)-2$ associated to a quadratic form $q_1$ Witt equivalent to $q$. Using this inductively you get the motivic decomposition of split quadrics; for example, if $\dim(q)$ is odd and $q$ is split, the motive of $Q$ is $\mathbb{Z} \oplus \mathbb{Z}[1] \oplus \dots \oplus \mathbb{Z}[\dim(Q)]$. Another very important result is the Rost nilpotence theorem, which asserts that the kernel of the change of field functor on Chow groups of quadrics consists of nilpotents. This result is very fruitful because it implies that the study of the motive of quadrics can be done over a field which splits the quadric, working with rational cycles instead of cycles over the base field. Even though these motivic results give severe restrictions on the higher Witt indices of quadrics and have very important applications, the motive does not contain "everything" about the associated quadratic form (even in terms of higher Witt indices).



Another interesting class of varieties for motivic computations is the cellular spaces, i.e. schemes $X$ endowed with a filtration by closed subschemes $\emptyset \subset X_0 \subset \dots \subset X_n = X$ and affine bundles $X_i \setminus X_{i-1} \rightarrow Y_i$. In this situation the motive of $X$ is isomorphic to the direct sum of (shifts of) the motives of the $Y_i$. For example, the filtration of $\mathbb{P}^n$ given by $X_i = \mathbb{P}^i$, whose affine bundles are given by the structural morphisms of the $\mathbb{A}^i$, yields the motivic decomposition $M(\mathbb{P}^n) = \mathbb{Z} \oplus \dots \oplus \mathbb{Z}[n]$. As you can see, this is the same motive as that of an odd-dimensional split quadric, so you certainly lose information.



The situation is much more complicated when replacing quadrics by general projective homogeneous varieties, but still, under some assumptions you can recover results such as the Rost nilpotence theorem, and we now begin to have a good description of their motives. Under these assumptions the motive of a projective homogeneous variety encodes information about the underlying variety, such as its canonical dimension, as illustrated by the computation for generalized Severi-Brauer varieties. Some work has also been done to link motives in this case with the higher Tits indices of the underlying algebraic groups.



Just to cite a few of the mathematicians to whom we owe these great results: V. Chernousov, N. Karpenko, A. Merkurjev, V. Petrov, M. Rost, N. Semenov, A. Vishik, K. Zainoulline, and probably many others that I forgot to mention.



Edit: to add more precision to the nice answers of Chandan Singh Dalawat and Evgeny Shinder, the motives of (usual) Severi-Brauer varieties of split algebras are indeed the same as those of projective spaces, and likewise for split quadrics in odd dimension, but over the base field they are not necessarily isomorphic, since a Severi-Brauer variety is totally split as soon as there is a rational point, whereas an isotropic quadratic form is not necessarily completely split.

intuition - Abstract nonsense versions of "combinatorial" group theory questions

Sylow subgroups are an example of a type of object satisfying a sort of universal property. Exploring other objects with similar properties gave birth to the modern theory of finite soluble groups in the 1960s.



If X is a class of finite groups and G is a finite group, then P is an X-covering subgroup of G if P is in X, and whenever P ≤ H ≤ G, N ⊴ H, and H/N in X, then PN=H. In other words, P covers every X-factor of G. If X is the class of finite p-groups, then X-covering subgroups of G and Sylow p-subgroups of G are the same thing. Indeed, if P is an X-covering group, and if H is a Sylow p-subgroup containing P and N=1, then we must have H=P. If P is a Sylow p-subgroup and H contains P with N ⊴ H and [H:N] a power of p, then [H:NP] is a divisor of [H:N] and [H:P], so must be 1. Notice how the "containment" part of the Sylow theorems is replaced with a "covering" condition that behaves better with the normal structure of the group.



If G is a finite solvable group and X is the class of nilpotent groups, then there is a sort of "Sylow nilpotent subgroup", the X-covering groups or Carter subgroups. They were studied by R.W. Carter who described them as self-normalizing nilpotent subgroups. Like Sylow p-subgroups, there is exactly one conjugacy class of Carter subgroups, and they have some reasonable arithmetic properties. People tried to determine which classes X of groups are such that X-covering groups exist and are unique up to conjugacy. Roughly speaking, this was the dawn of the modern theory of finite soluble groups, with Gaschütz's (et al.) classification of such X as "saturated formations".



This shifts focus away from the subgroup P to the class X. If X is sufficiently nice, then there will be a nicely embedded X-subgroup for any finite group.



Sylow p-subgroups also satisfy a dual condition, they are also X-injectors for the class X of finite p-groups. If X is a class of finite groups, and G is a finite group, then P is an X-injector of G if for every subnormal subgroup N of G, P∩N is a maximal X-subgroup of N. The dual definition of covering group (for X a saturated formation) is that P is an X-covering group iff PN/N is a maximal X-subgroup of G/N for every N ⊴ G. If X is the class of finite nilpotent groups, then X-injectors are called Fischer subgroups and again form a single, well-behaved conjugacy class of subgroups. A Fischer subgroup of a finite soluble group is a nilpotent subgroup that contains every nilpotent subgroup that it normalizes. This is similar to the idea that a Sylow p-subgroup contains every p-group that it normalizes. X such that X-injectors form a unique conjugacy class are called Fitting classes, due to their similarity to Fitting's lemma on subnormal nilpotent subgroups.



A very approachable introduction to these ideas is B.F. Wehrfritz's tiny textbook A Second Course on Group Theory. Some of these ideas are described in Robinson's textbook A Course in the Theory of Groups, but I believe it spends very little time on general formations. The standard textbook source for formations, especially in the soluble universe, is K. Doerk and T. Hawkes's book Finite Soluble Groups. Doerk & Hawkes explain several of Gaschütz's arithmetically defined classes X, which you might find a good contrast.

Sunday, 27 September 2015

knot theory - slice-ribbon for links (surely it's wrong)

The slice-ribbon conjecture asserts that all slice knots are ribbon.



This assumes the context:



1) A `knot' is a smooth embedding $S^1 \to S^3$. We're thinking of the 3-sphere as the boundary of the 4-ball, $S^3 = \partial D^4$.



2) A knot being slice means that it's the boundary of a 2-disc smoothly embedded in $D^4$.



3) A slice disc being ribbon is a more fussy definition -- a slice disc is in ribbon position if the distance function $d(p) = |p|^2$ is Morse on the slice disc and has no local maxima. A slice knot is a ribbon knot if one of its slice discs has a ribbon position.



My question is this. All the above definitions have natural generalizations to links in $S^3$. You can talk about a link being slice if it's the boundary of disjointly embedded discs in $D^4$. Similarly, the above ribbon definition makes sense for slice links. Are there simple examples of $n$-component links with $n \geq 2$ that are slice but not ribbon? Presumably this question has been investigated in the literature, but I haven't come across it. Standard references like Kawauchi don't mention this problem (as far as I can tell).

nt.number theory - Given N points on a number line and m total distances between those points, are there efficient ways to optimize for particular values in m?

Given a set of distances S, choose N distinct points P on a number line such that the distances between the N points occur in S as often as possible. That is, maximize the number of pairwise distances between the N points that occur in S.



For example:



S = {2, 4}, N = 4



One answer would be P = {2, 4, 6, 8}, since the distances between the points of P are 2, 2, 2, 4, 4, 6. Only 6 is not in S.



or



S = {7, 13, 14, 22}
N = 4501



answer ???



I'm not looking for an exact answer (although an exact answer wouldn't hurt); rather, I am trying to avoid reinventing the wheel (fun though it may be). What mathematical tools could I use to avoid brute-forcing the possible values of P? How would you approach this problem?
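For what it's worth, the objective is easy to state in code; the following Python sketch (the names `score` and `best_points` are mine) counts matched distances and does the naive exhaustive search one would like to avoid for large instances:

```python
from itertools import combinations

def score(points, S):
    """Number of pairwise distances between the points that lie in S."""
    return sum(1 for a, b in combinations(sorted(points), 2) if b - a in S)

def best_points(S, N, bound):
    """Exhaustive search over N-subsets of {0, ..., bound}; exponential,
    so only usable for tiny instances (a baseline to beat, not a method)."""
    return max(combinations(range(bound + 1), N), key=lambda P: score(P, S))

# The worked example: {2, 4, 6, 8} scores 5 of its 6 pairwise distances in {2, 4}.
assert score({2, 4, 6, 8}, {2, 4}) == 5
```

For S = {2, 4} and N = 4 no placement beats 5 of the 6 distances: four points with all pairwise distances in {2, 4} would need consecutive gaps of at least 2 each, forcing a span of at least 6, which is not in S.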

Shortest Paths on fractals

As to the border case: an example that you might like to consider is given by the blancmange curve $f_{\lambda}:\mathbb{R}\to\mathbb{R}$, which for any value of $0\leq\lambda\leq 1$ is defined as the unique bounded solution of the fixed point functional equation



$f(x)=\mathrm{dist}(x,\mathbb{Z})+\lambda f(2x)$



(by the contraction principle there is a unique such function; it is continuous and 1-periodic, with an immediate series expansion coming from the iteration).



Consider $f_{\lambda}$ on the unit interval. If you take $\lambda=1/4$ you find a parabola; with $\lambda < 1/2$ it's Lipschitz (hence the graph has finite length) with constant $(1-2\lambda)^{-1}$; if $\lambda > 1/2$ it's Hölder continuous with an exponent depending on $\lambda$. The parameter $\lambda=1/2$ is critical: you find a curve that is not Lipschitz, but is Hölder of every exponent $\alpha<1$; precisely, it has a modulus of continuity $ct|\log t|$. Looking at it a bit more closely, it is not of bounded variation on any nontrivial interval (so the graph is not rectifiable, even locally), nor is it BV for any $\lambda \geq 1/2$.
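Iterating the fixed point equation gives the series $f_\lambda(x) = \sum_{n\ge 0} \lambda^n\,\mathrm{dist}(2^n x, \mathbb{Z})$, which is also a practical way to evaluate the curve numerically; a short Python sketch (the truncation level is an assumption of mine):

```python
def blancmange(x, lam, terms=60):
    """Partial sum of f(x) = sum_n lam^n * dist(2^n x, Z), obtained by
    iterating the fixed point equation f(x) = dist(x, Z) + lam * f(2x)."""
    y, coef, total = x % 1.0, 1.0, 0.0
    for _ in range(terms):
        total += coef * min(y, 1.0 - y)   # dist(y, Z) for y in [0, 1)
        y = (2.0 * y) % 1.0
        coef *= lam
    return total

# lam = 1/4 reproduces the parabola mentioned above, f(x) = 2x(1 - x) on [0, 1]
assert abs(blancmange(0.3, 0.25) - 2 * 0.3 * (1 - 0.3)) < 1e-9
```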

lie algebras - Folding by Automorphisms

One very important use of this technique is the relation between Lie algebras / quantum groups and quiver varieties. I first saw something about this in Lusztig's book, Introduction to Quantum Groups; but see also this arXiv paper by Alistair Savage. Quiver varieties are important for categorifying many structures related to a simple Lie algebra (its representation theory, its enveloping algebra, etc.). Categorification is a long story that leads to all kinds of interesting things, and it is a sequel to the long story of quantum groups themselves. But even if you're not learning about either one for its own sake, Lusztig already needed it to prove properties of his canonical bases of representations of simple Lie algebras.



A Dynkin-type quiver is an orientation of a Dynkin diagram. A quiver representation is a collection of maps between vector spaces in the pattern of the diagram. A quiver variety is then a variety of (certain of) these representations, for fixed choices of the vector spaces. The point is that you can only define a quiver for a simply laced Dynkin diagram. You need the folding automorphism to obtain quiver varieties or information from quivers in general in the multiply laced case.

Saturday, 26 September 2015

dg.differential geometry - Reference for the Frolicher-Nijenhuis Bracket

I also only learned about Frolicher-Nijenhuis brackets from Saunders' book on jets, but I doubt that there is any other authoritative reference besides Michor's book and the original papers. I don't know if this is what the original poster intended, but here's how I understand the link between FN and curvature. Generally, I tend to stay away from the full generality of the FN bracket, and only use those (axiomatic) properties that I need...



Let $\pi: E \rightarrow M$ be a fiber bundle. An Ehresmann connection is a subbundle $H$ of $TE$ such that $H \oplus VE = TE$, where $VE$ is the vertical bundle. Denote the projector from $TE$ onto $H$ by $h$. Saunders (and many other authors) define Ehresmann connections directly in terms of bundle maps $h$, since they are easier to work with, but this doesn't matter.



Now, to introduce curvature, we would like to formalize the idea that curvature is the failure of "parallel transport" to commute, where of course we haven't defined parallel transport properly. However, it is only a small step of the imagination to guess that this must be related to the integrability of $H$, i.e. whether $[h(X), h(Y)] \in H$ for arbitrary vector fields $X, Y$. The failure of the bracket of two horizontal vector fields to be horizontal again is measured by the expression
$$
R(X,Y) = [h(X), h(Y)] - h([h(X), h(Y)]),
$$
and in fact this is nothing but Saunders' definition 3.5.13 of curvature. Note that $R(X, Y) = 0$ if either $X$ or $Y$ is in $VE$.



The Frolicher-Nijenhuis bracket of two linear bundle maps is in general quite complicated, but if you look at proposition 3.4.15 in Saunders, you get that for a linear bundle map $h: TE \rightarrow TE$,
$$
[h, h] (X,Y) = 2(h([X,Y]) + [h(X), h(Y)] - h([h(X),Y]) - h([X,h(Y)])).
$$
Now note that the right-hand side is zero as soon as either $X$ or $Y$ is in $VE$, just as for $R$. If both $X$ and $Y$ are horizontal, we can apply the above formula to $X = h(X)$ and $Y = h(Y)$, and conclude that $[h, h] (X, Y) = 2( [h(X), h(Y)] - h([h(X), h(Y)]))$ (where we use the identity $h \circ h = h$ liberally), and so
$$
R = \frac{1}{2} [h,h].
$$



If you have a principal fiber bundle, $h$ is related to the connection one-form $\mathcal{A}$, and the above formula gives the curvature of $\mathcal{A}$ in terms of the covariant differential of $\mathcal{A}$. For a symplectic connection, something similar happens.



Edit: here's how I think it works for principal fiber bundles. Take a connection one-form $\mathcal{A}: TE \to \mathfrak{g}$, and let $\sigma: \mathfrak{g} \rightarrow VE$ be the infinitesimal generator of the $G$-action. The composition $v := \sigma \circ \mathcal{A}$ is then the vertical projector of the connection and $h := 1 - v$ is the horizontal one. Now plug this expression for $h$ into the formula for the curvature:
$$
[h, h] = [1, 1] - [\sigma \circ \mathcal{A}, 1] - [1, \sigma \circ \mathcal{A}]
+ [\sigma \circ \mathcal{A}, \sigma \circ \mathcal{A}].
$$
The term $[1,1]$ is zero, and you can use Saunders' proposition 3.4.15 to show that terms 2 and 3 vanish as well. The last term can be written (again using S3.4.15) as
$$
[\sigma \circ \mathcal{A}, \sigma \circ \mathcal{A}] (X, Y) =
2( (\sigma \circ \mathcal{A})^2([X, Y]) + [ (\sigma \circ \mathcal{A})(X), (\sigma \circ \mathcal{A})(Y)] - (\sigma \circ \mathcal{A})([(\sigma \circ \mathcal{A})(X), Y]) - (\sigma \circ \mathcal{A})([X, (\sigma \circ \mathcal{A})(Y)])).
$$



Now, show that this vanishes whenever $X$ or $Y$ is vertical, so that we can take $X$ and $Y$ to be horizontal. In that case, the above simplifies to
$$
[\sigma \circ \mathcal{A}, \sigma \circ \mathcal{A}] (X, Y) = 2(\sigma \circ \mathcal{A})([X, Y])
$$
or
$$
R(X, Y) = (\sigma \circ \mathcal{A})([X^h, Y^h])
$$
where $X^h$ represents the horizontal part of $X$. But $\mathcal{A}([X^h, Y^h])$ is just the negative of the curvature (as a two-form with values in $\mathfrak{g}$) of $\mathcal{A}$, so that
$$
R(X, Y) = -\sigma( \mathcal{B}(X, Y) ).
$$

cv.complex variables - Is there an Isomorphism between R and C under addition?


Possible Duplicate:
AC in group isomorphism between R and R^2




Somewhere, I recall being told that there is an isomorphism between $\mathbb{R}$ and $\mathbb{C}$ under addition. However, despite a rather lengthy search, I have been unable to find anything to support this fact, although Paul Yale of Pomona College, in his paper "Automorphisms of the Complex Numbers," showed that there are "wild" automorphisms of $\mathbb{C}$ that require the axiom of choice to construct. Given that rather surprising fact, it does not seem too unlikely that there could be an isomorphism between $\mathbb{R}$ and $\mathbb{C}$. So, the question is: Is it possible for there to be an isomorphism between $\mathbb{R}$ and $\mathbb{C}$, and if so, what is it?

ct.category theory - Simple show cases for the Yoneda lemma

I've been given a very simple motivating and instructive showcase for the Yoneda lemma:



Given the category of graphs and a graph object $G$, seen as a quadruple $(V_G, E_G, S_G: E_G \rightarrow V_G, T_G: E_G \rightarrow V_G)$.



Consider $K_1$ and $K_2$, the one-vertex and the one-edge graph, and the two morphisms $\sigma$ and $\tau$ from $K_1$ to $K_2$ (picking out the source and the target vertex of the edge).



Now consider the graph $H$ with



  • $V_H = Hom(K_1,G)$

  • $E_H = Hom(K_2,G)$

  • $S_H(e) = e \circ \sigma: K_1 \rightarrow G$ for $e \in E_H$

  • $T_H(e) = e \circ \tau: K_1 \rightarrow G$ for $e \in E_H$

It can be easily seen that $H$ is isomorphic to $G$.
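The reconstruction is concrete enough to run. Here is a toy Python sketch (the particular graph and the dictionary encoding are mine): morphisms $K_1 \rightarrow G$ are exactly vertices of $G$, morphisms $K_2 \rightarrow G$ are exactly edges, and precomposition with $\sigma$ and $\tau$ recovers the source and target maps.

```python
# A graph as a quadruple (V, E, s, t); this particular G is a made-up example.
G = {
    "V": {"a", "b"},
    "E": {"e1", "e2", "loop"},
    "s": {"e1": "a", "e2": "a", "loop": "b"},
    "t": {"e1": "b", "e2": "b", "loop": "b"},
}

V_H = set(G["V"])                  # Hom(K1, G): a map from one vertex = a vertex
E_H = set(G["E"])                  # Hom(K2, G): a map from one edge = an edge
S_H = {e: G["s"][e] for e in E_H}  # precompose with sigma: the source vertex
T_H = {e: G["t"][e] for e in E_H}  # precompose with tau: the target vertex

H = {"V": V_H, "E": E_H, "s": S_H, "t": T_H}
assert H == G   # the evident comparison map is an isomorphism
```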



I have learned that a) the category of graphs is a presheaf category and that b) $K_1$, $K_2$ are precisely the representable functors.



Now I am looking for other simple motivating and instructive showcases.



By the way: Shouldn't such a showcase be added to the Wikipedia entry on Yoneda's lemma?

Friday, 25 September 2015

cobordism - What manifolds are bounded by RP^odd?

Take the complex surface $Z_1^2 + Z_2^2 + Z_3^2 = 1$ in complex 3-space $\mathbb{C}^3$ and intersect it with the ball $|Z_1|^2 + |Z_2|^2 + |Z_3|^2 \le 1$ to get an explicit 4-manifold with boundary embedded in $\mathbb{C}^3$ whose boundary is $\mathbb{R}P^3$, realized by intersecting the complex surface with the 5-sphere $|Z_1|^2 + |Z_2|^2 + |Z_3|^2 = 1$.



To verify, split $Z_a = x_a + i y_a$ into its real and imaginary parts $x_a, y_a$, do a bit of algebra and see that the hypersurface is diffeomorphic to the tangent bundle of the standard two-sphere $S^2$, that two-sphere being realized within $(x_1, x_2, x_3)$ space by setting the $y_a = 0$. (I've heard this complex surface called the 'hypersphere' or 'complex sphere' or some such.)



Intersecting the hypersurface with the 5-ball $|Z_1|^2 + |Z_2|^2 + |Z_3|^2 le 1$ is the same as taking the disc bundle of Tim Perutz's construction.



Setting $|Z_1|^2 + |Z_2|^2 + |Z_3|^2 = 1$ within the hypersurface gives the $RP^3$ which bounds the 4-manifold.



So, you can take your 4-manifold to be a "Stein domain" in standard $mathbb{C}^3$.




Even more explicitly, with a CR-aside:



After some rescalings, this embedding of $RP^3$ is essentially Rossi's example of an analytic CR structure on the three-sphere which admits no CR embedding into any $C^n$.



Consider the map from $mathbb{C}^2$ to $mathbb{C}^3$ given by
$$Z_1 = i[ (z^2 + w^2) + t (bar z ^2 + bar w ^2)]$$
$$Z_2 = [ (z^2 - w^2) - t (bar z ^2 - bar w ^2)]$$
$$Z_3 = 2 [ zw - t bar z bar w]$$
where $t$ is real and $i = sqrt{-1}$.



A computation shows that
$Z_1 ^2 + Z_2 ^2 + Z_3^2 =-4 t (|z|^2 + |w|^2)^2$
while
$|Z_1|^2 + |Z_2|^2 + |Z_3|^2 = 2(1 + t^2)(|z|^2 + |w|^2)^2.$
It follows that the image of the standard $S^3$ in $C^2$ is the
complex hypersurface
$Z_1 ^2 + Z_2 ^2 + Z_3^2 =-4 t $
intersected with the 5-ball
$|Z_1|^2 + |Z_2|^2 + |Z_3|^2 = 2(1 + t^2)$.
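Both identities are easy to confirm numerically; here is a hedged Python check (the function names are mine):

```python
def rossi_map(z, w, t):
    # Rossi's family of maps C^2 -> C^3 (t real, 1j = sqrt(-1))
    zb, wb = z.conjugate(), w.conjugate()
    Z1 = 1j * ((z**2 + w**2) + t * (zb**2 + wb**2))
    Z2 = (z**2 - w**2) - t * (zb**2 - wb**2)
    Z3 = 2 * (z * w - t * zb * wb)
    return Z1, Z2, Z3

def check(z, w, t, tol=1e-9):
    Z1, Z2, Z3 = rossi_map(z, w, t)
    n = abs(z)**2 + abs(w)**2
    # Z1^2 + Z2^2 + Z3^2 = -4 t (|z|^2 + |w|^2)^2
    ok1 = abs((Z1**2 + Z2**2 + Z3**2) - (-4 * t * n**2)) < tol
    # |Z1|^2 + |Z2|^2 + |Z3|^2 = 2 (1 + t^2) (|z|^2 + |w|^2)^2
    ok2 = abs((abs(Z1)**2 + abs(Z2)**2 + abs(Z3)**2) - 2 * (1 + t**2) * n**2) < tol
    return ok1 and ok2

print(check(0.3 + 0.7j, -0.2 + 0.5j, 0.4))  # True
```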



Since the map $(z, w) to (Z_1, Z_2, Z_3)$
is $2:1$ restricted to $S^3$, its image is $mathbb{R}P^3$.



Rossi's 'no CR embedding' assertion for $t ne 0$ relates to the induced CR structure on $S^3$ (via the pull-back of its image's CR structure). When $t ne 0$
every CR function for this CR structure on $S^3$ is an analytic function of these $Z_i (z,w)$'s, and hence is invariant under the antipodal map $(z,w) to (-z, -w)$. Thus the CR functions for these 'twisted' CR structures on $S^3$ cannot separate (antipodal) points and hence the $S^3$ cannot be CR-embedded.



It is a fun fact that all left-invariant CR structures on $SU(2) = S^3$ arise this way, with the standard CR structure corresponding to $t = 0$.



Some references:



H. Rossi, Attaching analytic spaces to an analytic space along a pseudoconcave
boundary. 1965 Proc. Conf. Complex Analysis (Minneapolis, 1964)
pp. 242–256, Springer, Berlin.



D. Burns, Global behavior of some tangential Cauchy-Riemann equations
in “Partial Differential Equations and Geometry” (Proc. Conf., Park City,
Utah, 1977); Dekker, New York, 1979, p. 51.



E. Falbel, Non-embeddable CR-manifolds and Surface Singularities. Invent.
Math. 108 (1992), No. 1, 49-65.

soft question - Examples of great mathematical writing

This question is basically from Ravi Vakil's web page



How do I write mathematics well? Learning by example is more helpful than being told what to do, so let's try to name as many examples of "great writing" as possible. Asking for "the best article you've read" isn't reasonable or helpful. Instead, ask yourself the question "what is a great article?", and implicitly, "what makes it great?"



If you think of a piece of mathematical writing you think is "great", check if it's already on the list. If it is, vote it up. If not, add it, with an explanation of why you think it's great. This question is "Community Wiki", which means that the question (and all answers) generate no reputation for the person who posted it. It also means that once you have 100 reputation, you can edit the posts (e.g. add a blurb that doesn't fit in a comment about why a piece of writing is great). Remember that each answer should be about a single piece of "great writing", and please restrict yourself to posting one answer per day.



I refuse to give criteria for greatness; that's your job. But please don't propose writing that has a major flaw unless it is outweighed by some other truly outstanding qualities. In particular, "great writing" is not the same as "proof of a great theorem". You are not allowed to recommend anything by yourself, because you're such a great writer that it just wouldn't be fair.



Not acceptable reasons:



  • This paper is really very good.

  • This book is the only book covering this material in a reasonable way.

  • This is the best article on this subject.

Acceptable reasons:



  • This paper changed my life.

  • This book inspired me to become a topologist. (Ideally in this case it should be a book in topology, not in real analysis...)

  • Anyone in my field who hasn't read this paper has led an impoverished existence.

  • I wish someone had told me about this paper when I was younger.

Wednesday, 23 September 2015

ag.algebraic geometry - What is an example of a smooth variety over a finite field F_p which does not embed into a smooth scheme over Z_p?

EDIT 7/15/14 I was just looking back at this old answer, and I don't think I ever answered the stated question. I can't delete an accepted answer, but I'll point out that, as far as I can tell, the Vakil reference I give also only addresses the question of deforming $X$ over $mathbb{Z}_p$, not of embedding it in some larger flat family over $mathbb{Z}_p$.



EDIT Oops! David Brown points out below that I misread the question. I was answering the question of finding a smooth scheme which does not deform in a smooth family over Z_p.



Well, to make up for that, I'll point to some references which definitely contain answers. Look at section 2.3 of Ravi Vakil's paper on Murphy's law for deformation spaces http://front.math.ucdavis.edu/0411.5469 for some history, and several good references. Moreover, Ravi describes how to build an explicit cover of P^2 in characteristic p which does not deform to characteristic 0. Basically, the idea is to take a collection of lines in P^2 which doesn't deform to characteristic 0 and take a branched cover over those lines. For example, you could take the p^2+p+1 lines that have coefficients in F_p.

ag.algebraic geometry - Tangent sheaf of $mathbb{P}^1timesmathbb{P}^1$

Hi guys! I'm new to this forum. I have a simple question for you. Let $k$ be an algebraically closed field. Consider $mathbb{P}^1timesmathbb{P}^1$ and $T_{mathbb{P}^1timesmathbb{P}^1}$ the tangent sheaf. How can I prove that $dim_k Gamma(mathbb{P}^1timesmathbb{P}^1,T_{mathbb{P}^1timesmathbb{P}^1})=6$?



Good... I'm so stupid... It's enough to observe that $Omega_{Aotimes_R B/R}cong (Aotimes Omega_{B/R})oplus(Omega_{A/R}otimes B)$. Then dualize this and use the Euler sequence for $mathbb{P}^1$.
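For what it's worth, the resulting count can be checked mechanically. A hedged Python sketch of the arithmetic (my own helper name), using the splitting of the tangent sheaf into the two pullbacks of $T_{mathbb{P}^1} = O(2)$ and the Künneth formula:

```python
def h0_P1(d):
    # h^0(P^1, O(d)) = d + 1 for d >= 0, and 0 otherwise
    return d + 1 if d >= 0 else 0

# T(P^1 x P^1) = O(2) boxtimes O  (+)  O boxtimes O(2);
# by Kunneth, h^0 of a box product is the product of the h^0's
dim = h0_P1(2) * h0_P1(0) + h0_P1(0) * h0_P1(2)
print(dim)  # 6
```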

ag.algebraic geometry - About b-divisors

It can be convenient sometimes, eg: http://arxiv.org/abs/math/0608260. They use this language to define their positive intersection product, which is defined in terms of supremums over all choices of certain nef classes on all birational modifications. Note that they want classes, not numbers, for their statements about the derivative of the volume.

ag.algebraic geometry - Why do gerbes live in H^2 ?

I have a couple things to say.



First, I believe your definition of a gerbe is slightly incorrect. When you say that your stack is locally isomorphic to $U times Bmathbb{G}_m$, this isomorphism needs to preserve some additional structure. It might be okay for $mathbb{G}_m$-gerbes, by accident, but for general non-abelian gerbes you will run into trouble. (It might still be okay for $mu$-gerbes, where $mu$ is a sheaf of abelian groups over $X$.)



There are several ways to add this extra structure, but I think the most common are not necessarily the most enlightening. The fact of the matter is that $Bmathbb{G}_m$ is a group object in stacks and it "acts" on the gerbe over $X$. The local isomorphism to $U times Bmathbb{G}_m$ needs to respect this action. Morally, you should think of a gerbe as a principal bundle with structure "group" $Bmathbb{G}_m$.



The reason that this isn't the most common way to explain what a gerbe is, is that making this precise requires a certain comfort with 2-categories and coherence equations that most people don't seem to have. Times are changing, though. Just as for ordinary principal bundles, you can (in nice settings, say noetherian separated) classify them in terms of Cech data. When you do this you see that the only important part is the coherence data, which gives a 2-cocycle. For non-abelian gerbes you get non-trivial stuff which mixes together parts which look like data for a 1-cocycle and a 2-cocycle. I agree with Kevin that, at this point, if you really want to understand this stuff you should fill in the rest of the details on your own. It is a good exercise!



Alternatively, if higher categories make you uncomfortable, you can be clever. You can still make a definition along the lines of the one you outline precise without venturing into the world of higher categories and "coherent group objects in stacks". I recommend Anton's course notes on Stacks as taught by Martin Olsson. Section 31 has a definition of $mu$-gerbes which is equivalent to the one I sketched above but avoids the higher categorical aspects. There is also a proof that such gerbes are classified by $H^2(X; mu)$. Enjoy!




Just to reiterate. In a gerbe you are not patching together classifying spaces, you are patching together classifying stacks. Despite the common notation, there is a difference. A stack is fundamentally an object in a 2-category. This means that you need to deal with 2-morphisms and that they can be just as important as the 1-morphisms. For $Bmathbb{G}_m$, the 1-morphisms (which preserve the multiplication
action of the stack $B mathbb{G}_m$ !!) are all equivalent, so there is no Cech 1-cocycle data at all. All you get are the coherence data, which form a 2-cocycle.



This is one reason that I prefer the notation $[pt/mathbb{G}_m]$ to denote the stack $Bmathbb{G}_m$. This is particularly important in the topological setting where these are truly different objects.

computational complexity - Does the independence of P = NP imply existence of arbitrarily good super-polynomial upper bound for SAT?

Let me first define what "super-polynomial" means.



Definition. We call a function f super-polynomial if for all k, there exists a constant n such that for all x ≥ n, f(x) > x^k.
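For instance, f(x) = x^(log2 x) is super-polynomial in this sense (while still sub-exponential); a quick hedged Python check of the definition:

```python
import math

def f(x):
    # an example super-polynomial (yet sub-exponential) function: x^(log2 x)
    return x ** math.log2(x)

# for a fixed k, f(x) > x^k as soon as log2(x) > k:
k = 10
n = 2 ** (k + 1)        # then log2(n) = k + 1 > k
print(f(n) > n ** k)    # True
```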



Now please judge whether the following claim is true.



Claim. Suppose P = NP is independent of ZFC. Then for any super-polynomial function f, there exists an algorithm for SAT whose worst case running time is bounded by f.



OK. To help your judgement, I will give you my proof of the claim. Of course it may be wrong or missing something.



Proof. Proof by contradiction. Suppose that there is no such SAT algorithm. Then all algorithms for SAT are not bounded by f. Note that f is not bounded by any polynomial function of the form g(x) = x^k for some k. Therefore no SAT algorithm is bounded by a polynomial. This means SAT ∉ P and hence P ≠ NP. We have just shown that P ≠ NP is provable, contradicting our assumption that P = NP is independent. QED



If my proof is correct, isn't this claim too obvious? OK, but why are there still people publishing weaker claims, for example the main result of this paper? Did I misinterpret their result? Is it actually stronger than my claim?



Also, this claim is an example for the strength of the independence of P = NP. If it's indeed independent, then all statements that imply P = NP are false and all statements that imply P ≠ NP are also false. An example of the former is "There exists a polynomial time SAT solver." Since this is false, then there does not exist any polynomial time SAT solver. And the latter implies whatever super-polynomial time bound you give, there is a SAT solver with this bound. So if independence is true, then the picture is that there is an infinite sequence of (SAT solver, time bound) pairs, with each bound faster than the previous, approaching the limit of P. And yet, it never crosses the boundary. So if independence is true, we can expect to improve the time bound for SAT endlessly in the region of super-polynomial and yet never reach P. Now I hope you have a clearer picture of the possibility that P = NP is independent.



Sorry for the digression, but first of all, is the claim true?

fa.functional analysis - How hard is it to make a differential operator Hermitian?

Let $M$ be a closed finite-dimensional smooth manifold (over $mathbb R$). Let $C^infty(M) = C^infty(M,mathbb C)$ be the algebra of smooth complex-valued functions on $M$, with the natural complex conjugation $fmapsto bar f$.



Definition: A (real) density $mu$ on $M$ is a section of the trivial real line bundle given in local coordinates by the transition maps $tildemu = bigl| det frac{partial x^i}{partial tilde x^j} bigr| mu$, where $x^i,tilde x^j$ are the different systems of coordinates: if $M$ is oriented, then densities are the same as (real) top-forms. A volume form on $M$ is an everywhere-positive density — this condition is invariant under changes of coordinates.



We remark that if $mu$ is a density, then $int_M mu$ is well-defined: cut $M$ into coordinate patches, integrate each patch, and check that the answer doesn't depend on the choice of coordinates.



A choice of volume form $mu$ determines a Hermitian structure on $C^infty(M)$, via $langle f_1,f_2rangle_mu = int_M {bar f}_1 , f_2 , mu$. A (complex) differential operator $mathcal D: C^infty(M) to C^infty(M)$ is $mu$-Hermitian if $bigllangle f_1, mathcal D[f_2] bigrrangle_mu = bigllangle mathcal D[f_1], f_2 bigrrangle_mu$.



Example: Multiplication by a real-valued function $c$ is Hermitian for any measure. Let $a$ be a Riemannian metric on $M$. Then $sqrt{left|det aright|}$ makes sense as a volume-form on $M$. The Laplace-Beltrami operator on $M$ is given in local coordinates by $Delta_a = -left|det aright|^{-1/2} partial_i a^{ij} left|det aright|^{1/2} partial_j$. It is $sqrt{left|det aright|}$-Hermitian. More generally, let $b$ be any real one-form on $M$. Then I believe that the operator given in local coordinates by:
$$ frac1{sqrt{|det a|}} bigl(sqrt{-1}partial_i + b_ibigr) a^{ij} sqrt{|det a|} bigl( sqrt{-1} partial_j + b_jbigr) $$
is $sqrt{left|det aright|}$-Hermitian.



I believe that the following is true (I've checked it by hand for $M=$ the circle):



Proposition: Let $a$ be a Riemannian metric, $Delta_a$ its Laplace-Beltrami operator, and $mu$ a volume form. Then $Delta_a$ is $mu$-Hermitian if and only if $mu/sqrt{left|det aright|}$ is locally constant.



The if part I said above. Thus, by completing the square, it's straightforward to check whether a second-order differential operator can be made Hermitian with the correct choice of volume form (basically, the factors of $sqrt{-1}$ must be as above). My question is whether you can do this for higher-order differential operators.




Question: Suppose I have a linear differential operator $mathcal D$, with, say, third- or higher derivatives. How do I determine whether there exists a volume-form $mu$ so that $mathcal D$ is $mu$-Hermitian? If such a $mu$ exists, how do I find one? How many are there?




For example, on the circle with the usual volume form, the fourth derivative $partial^4$ is Hermitian, so some of them are. But maybe most of them are not....
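To make the circle example concrete, here is a small hedged numerical check (the uniform grid and 5-point discretization are my own choices, not anything canonical): the standard periodic finite-difference matrix for the fourth derivative is symmetric, which is the discrete counterpart of Hermitianity with respect to the usual volume form.

```python
n = 64
# standard 5-point stencil for the fourth derivative on a uniform periodic grid
stencil = {-2: 1.0, -1: -4.0, 0: 6.0, 1: -4.0, 2: 1.0}
D4 = [[0.0] * n for _ in range(n)]
for i in range(n):
    for off, c in stencil.items():
        D4[i][(i + off) % n] += c

# with the uniform volume form, <f1, D4 f2> = <D4 f1, f2> iff D4 is symmetric
symmetric = all(D4[i][j] == D4[j][i] for i in range(n) for j in range(n))
print(symmetric)  # True
```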



Please re-tag as you see fit.

Tuesday, 22 September 2015

gt.geometric topology - Closed 3-manifolds with free abelian fundamental groups

John Hempel, in his book $3$-manifolds, shows that if $G$ is a finitely generated abelian group which is a subgroup of the fundamental group of a closed $3$-manifold, then $G$ is one of $mathbb Z$, $mathbb Zoplusmathbb Z$, $mathbb Zoplusmathbb Z oplusmathbb Z$, $mathbb Z_p$ or $mathbb Zoplusmathbb Z_2$. This is theorem 9.13 in the book.



He also proves, in theorem 9.14, that if an abelian group is not finitely generated and is a subgroup of the fundamental group of a $3$-manifold, then it is isomorphic to a subgroup of $mathbb Q$ (and he proposes, as an exercise, to show that all such groups in fact occur).

gr.group theory - The fraction of groups with normal Hall subgroups

Dear all,



I am a student of computer science. So please forgive me if I state some results in a weird way, and I hope my question is interesting to you.



The question is related to finite groups with normal Hall subgroups. I want to know, for groups of size $n$, what fraction of all groups (up to isomorphism) have normal Hall subgroups.



For example, we know that for a given $n$, the number of non-isomorphic groups of size $n$ is bounded by $n^{O((log n)^2)}$. Meanwhile, I can prove that for a certain class of groups with normal Hall subgroups, the number of non-isomorphic groups of size $n$ can be $n^{Omega(log n)}$. But I would like to know an upper bound.



Thank you very much.



Jimmy

Monday, 21 September 2015

big list - Examples of common false beliefs in mathematics

False statement: If $A$ and $B$ are subsets of $mathbb{R}^d$, then their Hausdorff dimension $dim_H$ satisfies



$$dim_H(A times B) = dim_H(A) + dim_H(B).
$$



EDIT: To answer Benoit's question, I do not know about a simple counterexample for $d = 1$, but here is the usual one (taken from Falconer's "The Geometry of Fractal Sets"):



Let $(m_i)$ be a sequence of rapidly increasing integers (say $m_{i+1} > m_i^i$). Let $A subset [0,1]$ denote the numbers with a zero in the $r^{th}$ decimal place if $m_j + 1 leq r leq m_{j+1}$ and $j$ is odd. Let $B subset [0,1]$ denote the numbers with a zero in the $r^{th}$ decimal place if $m_{j} + 1 leq r leq m_{j+1}$ and $j$ is even. Then $dim_H(A) = dim_H(B) = 0$. To see this, you can cover $A$, for example, by $10^k$ intervals of length $10^{- m_{2j}}$, where $k = (m_1 - m_0) + (m_3 - m_2) + dots + (m_{2j - 1} - m_{2j - 2})$.



Furthermore, if $mathcal{H}^1$ denotes Hausdorff $1$-dimensional (metric) outer measure, then the result follows by showing $mathcal{H}^1(A times B) > 0$. This is accomplished by considering $u in [0,1]$ and writing $u = x + y$, where $x in A$ and $y in B$. Let $proj$ denote orthogonal projection from the plane to $L$, the line $y = x$. Then $proj(x,y)$ is the point of $L$ with distance $2^{-1/2}(x+y)$ from the origin. Thus, $proj( A times B)$ is a subinterval of $L$ of length $2^{-1/2}$. Finally, it follows:



$$
mathcal{H}^1(A times B) geq mathcal{H}^1(proj(A times B)) = 2^{-1/2} > 0.
$$
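The decomposition $u = x + y$ works digit by digit, since the zero-blocks of $A$ and $B$ are complementary; here is a hedged Python sketch (the function name and the finite digit lists are my own, truncating the decimal expansions):

```python
def split_digits(u_digits, m):
    """Split the decimal digits of u into x in A and y in B.
    m is the increasing sequence m_0 < m_1 < ...; the digit in place r
    goes to y when its block index j is odd (x must vanish there),
    and to x when j is even (y must vanish there)."""
    x, y = [], []
    for r, d in enumerate(u_digits, start=1):
        j = 0
        while j + 1 < len(m) and r > m[j + 1]:
            j += 1          # now m[j] < r <= m[j+1]
        if j % 2 == 1:
            x.append(0); y.append(d)
        else:
            x.append(d); y.append(0)
    return x, y

x, y = split_digits([1, 2, 3, 4, 5, 6], [0, 2, 4, 6])
print(x, y)  # [1, 2, 0, 0, 5, 6] [0, 0, 3, 4, 0, 0]
```

Since the digits split into disjoint positions, there are no carries and the digitwise sum recovers $u$, which is why $A + B supseteq [0,1]$.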

matrices - How can I characterize the type of solution vector that comes out of a matrix?

If and only if all the entries of $A^{-1}$ are non-negative.



Proof: If $(A^{-1})_{ij}$ is negative, and $b$ is $1$ in the $j$-th coordinate and very small in every other, then $A^{-1} b$ is negative in the $i$-th component.



On the other hand, if every entry of $A^{-1}$ is non-negative, then clearly $b$ positive implies $A^{-1} b$ positive.
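Both directions of the argument can be illustrated numerically. A hedged 2x2 sketch (the example matrices are mine; Cramer's rule is used to keep it dependency-free):

```python
def solve2(A, b):
    # Cramer's rule for a 2x2 system A x = b
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    x1 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return [x0, x1]

# A with entrywise-nonnegative inverse: positive b gives positive x
A_good = [[2.0, -1.0], [-1.0, 2.0]]       # inverse = (1/3)[[2, 1], [1, 2]]
print(solve2(A_good, [1.0, 1.0]))          # [1.0, 1.0], positive

# A^{-1} = [[1, -2], [0, 1]] has a negative entry; take b = (small, 1) as in the proof
A_bad = [[1.0, 2.0], [0.0, 1.0]]
print(solve2(A_bad, [0.01, 1.0]))          # [-1.99, 1.0], not positive
```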

computability theory - What are the most attractive Turing undecidable problems in mathematics?

Not mentioned yet: any computer language extended with non-deterministic features is still Turing complete.



This is interesting, because it allows the language to be simplified. If the programs operate on objects that are either nil or a pair, then you only need five instructions:



  • The constant nil

  • A pair operator

  • A sequence operator, that executes one code fragment after another

  • An inverse operator

  • A closure operator, which repeats a code fragment zero or more times

If you want to construct a piece of code that adds the two values of a pair, then first make something that constructs (a - 1, b + 1) from (a, b). Then take the closure. This will generate (a - n, b + n). Finally, pick the value (0, c) and output c. This can be done by using the inverse operator on the pair and nil.
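The addition recipe can be sketched in Python (a hedged model, not the five-instruction language itself: the closure operator is rendered as an enumeration of iterates, and plain integers stand in for nested pairs):

```python
def closure(step, x):
    # the closure operator: enumerate x, step(x), step(step(x)), ...
    # (the number of repetitions is the nondeterministic choice)
    while True:
        yield x
        x = step(x)

def add(a, b):
    # apply (a, b) -> (a - 1, b + 1) some number of times, then select
    # the branch of the form (0, c) -- the selection happens outside the loop
    for x, y in closure(lambda p: (p[0] - 1, p[1] + 1), (a, b)):
        if x == 0:
            return y

print(add(3, 4))  # 7
```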



So, programming is a little bit odd, because you select the right value outside the loop (closure), rather than inside the loop, as in deterministic languages. The advantage is the much simpler structure. No variables, no recursion, no matching operators (just use the inverse) and no control structures, except closure.



This makes it sit a little bit between programming and mathematics. The simpler structure allows easier mathematical reasoning. So, it might be an idea to convert a program in a deterministic language into a program in a simplified non-deterministic language, before doing any mathematics on the program.



Lucas

finite groups - An inequality for certain characters

First some definitions: Let $cd(G)$ be the set of degrees of irreducible complex characters of the group $G$. Let $X$ be the collection of finite groups with the properties: ${1}in X$. If $G$ is a finite group such that for each $nin cd(G)$, $G$ has a subgroup $H$ of index $n$ with $Hin X$, then $Gin X$ (so the groups of $X$ are those with subgroups of all indices from character degrees, where these subgroups themselves have the same property).



Let $G$ be a finite group whose complex irreducible character degrees are $1 = f_1 < f_2 < dots < f_t$.
For a character $chi$ of $G$, let $s(chi)$ be the $k$ such that if $psi$ is an irreducible constituent of $chi$ of smallest possible degree, then $psi(1) = f_k$.
Let $v(chi)$ be the $k$ such that if $psi$ is an irreducible constituent of $chi$ of largest possible degree, then $psi(1) = f_k$.



What I am trying to prove (and hoping holds) is that if $Gin X$ and $chi$ is an irreducible character of $G$ of degree $n = f_k$, then $G$ has a subgroup $H$ of index $n$ such that $Hin X$ and additionally, $$s(chi_H) + min{v(psi^G) | psiin Irr(H)}leq k$$



The motivation for this is that if I can prove this, then I can prove the Taketa inequality for groups in $X$.
Does anyone know if the result holds, or can someone find an example where it does not?
It would still be interesting if the result turns out to hold under some more restrictions on $X$, like requiring the existence of subgroups of a larger set of orders, or requiring more structure on those subgroups.



Edit: Disregard this question. I will ask a related but different question What conditions guarantee that a group is in the following collection? which turned out be what really mattered.

Sunday, 20 September 2015

arithmetic geometry - Is there a classification of embeddings of SL_2 into SP_6 as algebraic groups over Q and R respectively?

You probably want the Jacobson–Morozov theorem, which says that homomorphisms of the Lie algebra sl2 over a field of characteristic 0 to a semisimple Lie algebra g can be classified in terms of the nilpotent elements of g. More precisely, if e, f, h, is the usual basis of sl2 then you can choose the image of e to be any nilpotent element of g, and the images of f and h are then determined up to conjugation by the centralizer of e.
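For concreteness, here is a minimal check of the e, f, h relations in the 2x2 matrix realization of sl2 (plain Python, my own helper names):

```python
def mul(A, B):
    # product of 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(A, B):
    # the Lie bracket [A, B] = AB - BA
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

e = [[0, 1], [0, 0]]
f = [[0, 0], [1, 0]]
h = [[1, 0], [0, -1]]

# the defining sl2 relations: [h,e] = 2e, [h,f] = -2f, [e,f] = h
print(bracket(h, e) == [[0, 2], [0, 0]],
      bracket(h, f) == [[0, 0], [-2, 0]],
      bracket(e, f) == h)  # True True True
```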



For details see Jacobson's book on Lie algebras.

lo.logic - Why can't proofs have infinitely many steps?

I recently saw the proof of the finite axiom of choice from the ZF axioms. The basic idea of the proof is as follows (I'll cover the case where we're choosing from three sets, but the general idea is obvious): Suppose we have $A,B,C$ non-empty, and we would like to show that the Cartesian product $A times B times C$ is non-empty. Then $exists a in A$, $exists b in B$, $exists c in C$, all because each set is non-empty. Then $(a, b, c)$ is a desired element of $A times B times C$, and we are done.



In the case where we have infinitely (in this case, countably) many sets, say $A_1 times A_2 times A_3 times cdots$, we can try the same proof. But in order to use only the ZF axioms, the proof requires the infinitely many steps $exists a_1 in A_1$, $exists a_2 in A_2$, $exists a_3 in A_3$, $cdots$



My question is, why can't we do this? Or a better phrasing, since I know that mathematicians normally work in logical systems in which only finite proofs are allowed, is: Is there some sort of way of doing logic in which infinitely-long proofs like these are allowed?



One valid objection to such a system would be that it would allow us to prove Fermat's Last Theorem as follows: Consider each quadruple $(a,b,c,n)$ as a step in the proof, and then we use countably many steps to show that the theorem is true.



I might argue that this really is a valid proof - it just isn't possible in our universe, where we can only do finitely many calculations. So we could suggest a system of logic in which a proof like this is valid.



On the other hand, I think the "proof" of Fermat's Last Theorem which uses infinitely many steps is very different from the "proof" of AC from ZF which uses infinitely many steps. In the proof of AC, we know how each step works, and we know that it will succeed, even without considering that step individually. In other words, we know what we mean by the concatenation of steps $(exists a_i in A_i)_{i in mathbb{N}}$. On the other hand, we can't, before doing all the infinitely many steps of the proof of FLT, know that each step is going to work out. What I'm suggesting in this paragraph is a system of logic in which the proof of AC above is an acceptable proof, whereas the proof of FLT outlined above is not acceptable.



So I'm wondering whether such a system of logic has been considered or whether any experts here have an idea for how it might work, if it has not been considered. And, of course, there might be a better term to use than "system of logic," and others can give suggestions for that.

rt.representation theory - Are there interesting monoidal structures on representations of quantum affine algebras?

The fusion product for affine Lie algebras is closely related to the existence of "evaluation homomorphisms" from the loop algebra to the finite-dimensional semisimple Lie algebra g, which split the natural inclusion of g as the subalgebra of constant loops. In the quantum case there is no evaluation map from the quantum affine algebra to the finite-type quantum algebra outside of type A (this is proved - at least for Yangians - in Drinfeld's original paper I'm pretty sure).



You see consequences of this in lots of places: e.g. for representations of g, the evaluation homomorphisms mean any irreducible representation for g can be lifted to an irreducible representation of the affine Lie algebra Lg. On the other hand, irreducible representations of the associated quantum groups do not (necessarily) lift to representations of quantum affine algebras, and so one asks about "minimal affinizations" -- irreducible finite dimensional representations of the quantum affine algebra which have the given irreducible as a constituent when restricted to the finite-type quantum group.



That said, the "ordinary" tensor product for finite dimensional representations of quantum affine algebras is pretty interesting -- it's not braided any more for example.

ac.commutative algebra - non-Dedekind Domain in which every ideal is generated by at most two elements

By way of comparison, Dedekind domains are characterized by an even stronger property, sometimes referred to colloquially as "$1+epsilon$"-generation of ideals. Namely:



Theorem: For an integral domain $R$, the following are equivalent:
(i) $R$ is a Dedekind domain.
(ii) For every nonzero ideal $I$ of $R$ and $0 neq a in I$, there exists $b in I$ such that $I = langle a,b rangle$.



The proof of (i) $implies$ (ii) is such a standard exercise that maybe I shouldn't ruin it by giving the proof here. That (ii) $implies$ (i) is not nearly as well known, although sufficiently faithful readers of Jacobson's Basic Algebra will know it: he gives the result as Exercise 3 in Volume II, Section 10.2 -- "Characterizations of Dedekind domains" -- and attributes it to H. Sah. (A MathSciNet search for such a person turned up nothing.) The argument is as follows: certainly the condition implies that $R$ is Noetherian, and a Noetherian domain is a Dedekind domain iff its localization at every maximal ideal is a DVR. The condition (ii) passes to ideals in the localization, and the killing blow is dealt by Nakayama's Lemma.

Saturday, 19 September 2015

discrete geometry - What is the oriented Fano plane?

Here is one answer: It is an oriented line over $mathbb{F}_7$.



An affine line over $mathbb{F}_7$ is a set of 7 points with a simply transitive action of $mathbb{Z}/7mathbb{Z}$, but no distinguished origin. Here, we don't have a distinguished origin and we also don't remember the precise translation action, but we have a distinguished notion of addition by a square (think of what this would mean for real numbers). In other words, it is a set with seven elements, equipped with an unordered triple of simply transitive actions of $mathbb{Z}/7mathbb{Z}$, such that translation by 1 under one of the actions is equivalent to translation by the square classes $2$ and $4$ under the other two actions.



If you take any pair of points $(x,y)$ in the above picture and subtract their indices, the orientation of the arrow between them is $x to y$ if and only if $y-x$ is a square mod 7. Furthermore, a triple of points $(x,y,z)$ with directed arrows $x to y to z$ is collinear if and only if $frac{z-y}{y-x} = 2$. Even though the numerator and denominator are only well-defined up to multiplication by squares, the quotient is a well-defined element of $mathbb{F}_7^times$, since each of the three translation actions yield the same answer. These two data let us reconstruct the diagram from the oriented line structure.
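These two criteria can be verified exhaustively; a hedged Python sketch, assuming the standard difference-set labelling of the lines as the translates of {0, 1, 3} mod 7 (the picture in the question may use a different labelling):

```python
from itertools import combinations, permutations

SQUARES = {1, 2, 4}  # the nonzero squares mod 7
# one standard labelling (an assumption): lines are the translates of {0, 1, 3} mod 7
LINES = [frozenset({i % 7, (i + 1) % 7, (i + 3) % 7}) for i in range(7)]

def directed_collinear(x, y, z):
    """Arrows x -> y -> z with (z - y)/(y - x) = 2 in F_7."""
    d1, d2 = (y - x) % 7, (z - y) % 7
    if d1 not in SQUARES or d2 not in SQUARES:
        return False
    return d2 * pow(d1, -1, 7) % 7 == 2

# a triple lies on a line iff some ordering of it satisfies the criterion
for triple in combinations(range(7), 3):
    assert any(directed_collinear(*p) for p in permutations(triple)) \
        == (frozenset(triple) in LINES)
print("criterion recovers exactly the 7 lines")
```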



There is a group-theoretic interpretation of this object. The oriented hypergraph you've given has automorphism group of order 21, generated by the permutations $(1234567)$ (one of the translation actions) and $(235)(476)$ (changes translation action by conjugating). This can be identified with the quotient $B^+(mathbb{F}_7)/mathbb{F}_7^times$, where $B^+(mathbb{F}_7)$ is the group of upper triangular matrices with entries in $mathbb{F}_7$ and invertible square determinant, and $mathbb{F}_7^times$ is the subgroup of scalar multiples of the identity. This group is the stabilizer of infinity under the transitive action of the simple group of order 168 on the projective line $mathbb{P}^1(mathbb{F}_7)$. In this sense, we can view the simple group as the automorphism group of an oriented projective line, since it is the subgroup of $PGL_2(mathbb{F}_7)$ whose matrices have square determinant.



Unfortunately, I do not know a natural notion of orientation on an $mathbb{F}_2$-structure. I tried something involving torsors over $mathbb{F}_8^times$ and the Frobenius, but it became a mess.

differential equations - Christodoulou's paper on naked singularities in inhomogeneous dust collapse

Your question is very broad, and so I'll just give some very broad answers too.



Collapse scenarios Are you absolutely sure you want to restrict yourself to dust collapse? In the case of a spherically symmetric scalar field, there is also http://www.ams.org/mathscinet-getitem?mr=1307898 (and this paper which shows that naked singularities in the scalar field model are unstable).



There's some problem of the interpretation of the dust collapse as a physical formation. Here I'll quote Demetri from his book on vacuum collapse due to incoming gravitational waves




With the above remarks in mind the author turned to the study of the
gravitational collapse of an inhomogeneous dust ball. In this case, the
initial state is still spherically symmetric, but the density is a function of the
distance from the center of the ball. The corresponding spherically symmetric
solution had already been obtained in closed form by Tolman in 1934, in
comoving coordinates, but its causal structure had not been investigated. This
required integrating the equations for the radial null geodesics. A very different
picture from the one found by Oppenheimer and Snyder emerged from this
study. The initial density being assumed a decreasing function of the distance
from the center, so that the central density is higher than the mean density,
it was found that as long as the collapse proceeds from an initial state of low
compactness, the central density becomes infinite before a black hole has a
chance to form, thus invalidating the neglect of pressure and casting doubt on
the predictions of the model from this point on, in particular on the prediction
that a black hole eventually forms.




Essentially the same comment was made by Hájíček in his MathReviews review of the dust collapse paper.



Modern re-writes Part of the reason that there are no modern re-writes of the proof is because of what I mentioned above: the violation of weak cosmic censorship is unphysical (so it didn't attract that much attention). On the other hand, the basic idea behind the proof is not too hard (it is the implementation of the analysis that is difficult). Simply speaking, in spherical symmetry, there is a lot of qualitative information that can be extracted without knowing too many details of the matter model involved (we must have matter, since for vacuum, spherically symmetric space-times, we have Birkhoff's theorem). Perhaps the best modern reference is Mihalis Dafermos' paper in CQG. The most important part is that under symmetry conditions and some mild assumptions, future null infinity must be complete; in spherically symmetric space-times, future null infinity is characterised by the area-radius $rtoinfty$ along out-going null geodesics. Thus the basic idea is to show that there exist out-going null geodesics such that if you travel along the future direction the area radius increases without bound, and if you travel along the past direction you will hit the singularity.



Now, in the case of the Tolman dust which was studied by Christodoulou, since the exterior of the dust cloud is glued to a Schwarzschild solution, there is a dichotomy: either the out-going null geodesic escapes the dust region before hitting the apparent horizon, or the null geodesic hits the apparent horizon first. In spherical symmetry, it is a general fact that the apparent horizon consists of space-like or expanding null portions. So once a null geodesic passes the apparent horizon it can no longer escape to infinity. On the other hand, inside the Schwarzschild region the apparent horizon agrees with the event horizon, so once the null geodesic escapes the dust region without hitting the apparent horizon, it will remain in the domain of outer communications and escape to infinity.



So to demonstrate the existence of a naked singularity, it suffices to show that a null geodesic emanating from the first singular point (it doesn't make sense to consider later points, as they no longer belong to the Cauchy development of an initial data set) escapes the dust cloud before hitting the apparent horizon. To do so one needs to estimate the size of the solution of an ODE. In spherical symmetry, the apparent horizon is characterised by $2m/r = 1$, where $m$ is the Hawking mass and $r$ is the area radius. Now, in spherical symmetry the Hawking mass satisfies ordinary differential equations (with source) (see, for example, Equation 3 here; or see Section 3 of this paper.) So to estimate the size of the Hawking mass at the matching boundary, it is necessary to estimate the source terms in the ODE. This leads to equation chasing using the Einstein equations and the matter-field equations. The rest is just being clever (in deciding in which order to estimate the various quantities) and doing the hard work of computation.
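For orientation, here is a recap of the standard definitions in play (my own summary, not quoted from the papers linked above): in spherical symmetry, the Hawking mass of a symmetry sphere of area radius $r$ reduces to

```latex
% Hawking mass in spherical symmetry ($r$ = area radius, $g$ = space-time metric):
\[
  m \;=\; \frac{r}{2}\Bigl(1 - g(\nabla r, \nabla r)\Bigr),
\]
% so the apparent-horizon condition is exactly the statement
% that the gradient of the area radius becomes null:
\[
  \frac{2m}{r} = 1 \;\Longleftrightarrow\; g(\nabla r, \nabla r) = 0 .
\]
```

This is why the quantity $2m/r$ is the natural thing to track along the out-going null geodesic in the argument above.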



General techniques The reason there are no references for general techniques for testing whether a singularity is naked is very simple: no such techniques exist. In spherical symmetry there is the simple description I posted above, but at the end of the day the estimates depend strongly on the structure of the matter equation (its separability and what not), so the problem can only really be dealt with on a case-by-case basis. Of course, if you are given an explicit solution of the equations, testing whether the singularity is naked is often a simple computation, re-writing the solution in some sort of null coordinates. The problem is that to prove genericity, or to study singularities without reference to an explicit solution, one needs analytical estimates which, like I said, depend on which equations you are studying. In fact, if you could come up with a usable, generic method of testing whether a singularity is clothed, you would be halfway to resolving the general weak cosmic censorship conjecture.



Some last comments The general issue of weak cosmic censorship is a wide open one. The problem is that the statement contains the word "generic" (generic initial data sets lead to blah blah blah). So while quite a lot of work has gone into constructing solutions and verifying that those solutions contain naked singularities, these works say nothing about weak cosmic censorship. (Explicit solutions tend to be non-generic in the space of solutions, except when you have strong rigidity theorems like Birkhoff's theorem for spherically symmetric Einstein-vacuum/electro-vacuum space-times or the No Hair Theorem for four-dimensional stationary axi-symmetric solutions.) The only real progress toward weak cosmic censorship has been due to Christodoulou (most of the physics papers are lacking in rigour). (Interestingly, there have been more developments in strong cosmic censorship, which, despite the name, has relatively little to do with weak cosmic censorship.)



Most recently, the focus in the community seems to be that the next model to consider for cosmic censorship should be the Einstein-Vlasov model (in spherical symmetry). (Well, it has been under consideration for around 10 years now and is still prohibitively hard.) For general solutions without symmetry assumptions, there has been essentially zero work in the field. There was an attempt to reformulate the conjecture into something mathematically tractable, but little has been done with the reformulation.

ho.history overview - Russell and Whitehead's types: ramified and unramified

Yes, this still occurs in modern type theory; in particular, you'll find it in the calculus of constructions employed by the Coq language.



Consider the type called Prop, whose inhabitants are logical propositions (which are in turn inhabited by proofs). The type Prop does not belong to Prop -- this means that Prop exhibits stratification:



Check Prop.
Prop
: Type


However, note that (forall a:Prop, a) does have type Prop. So although Prop does not belong to Prop, things which quantify over all of Prop may still belong to Prop. So we can be more specific and say that Prop exhibits unramified stratification.



Check (forall a:Prop, a).                                                       
forall a : Prop, a
: Prop


By contrast, consider Set, whose inhabitants are datatypes (which are in turn inhabited by computations and the results of computations). Set does not belong to itself, so it too exhibits stratification:



Check Set.
Set
: Type


Unlike the previous example, things which quantify over all of Set do not belong to Set. This means that Set exhibits ramified stratification.



Check (forall a:Set, a).
forall a : Set, a
: Type


So, in short, "ramification" in Russell's type hierarchy is embodied today by what Coq calls "predicative" types -- that is, all types except Prop. If you quantify over a type, the resulting term no longer inhabits that type unless the type was impredicative (and Prop is the only impredicative type).



The higher levels of the Coq universe (Type) are also ramified, but Coq hides the ramification indices from you unless you ask to see them:



Set Printing Universes.                                                         
Check (forall a:Type, Type).
Type (* Top.15 *) -> Type (* Top.16 *)
: Type (* max((Top.15)+1, (Top.16)+1) *)


Think of Top.15 as a variable, like $\alpha_{15}$. Here, Coq is telling you that if you quantify over the $\alpha_{15}^{\mathrm{th}}$ universe to produce a result in the $\alpha_{16}^{\mathrm{th}}$ universe, the resulting term will fall in the $\max(\alpha_{15}+1, \alpha_{16}+1)^{\mathrm{th}}$ universe -- which is at least "one level up" from what you're quantifying over.
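As a toy illustration of that arithmetic (my own sketch, not how Coq represents universes internally), one can treat the universe indices as plain integers:

```python
def pi_level(domain_level, codomain_level):
    """Level of (forall a : Type@{i}, Type@{j}) in a fully predicative
    hierarchy: each side's *type* lives one level up, and the product
    lands at the max of the two -- hence max(i+1, j+1)."""
    return max(domain_level + 1, codomain_level + 1)

# Matches the Check output above: Top.15 and Top.16 yield level max(16, 17) = 17.
print(pi_level(15, 16))  # 17
```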



Just as it was later discovered that Russell's ramification was unnecessary (for logic), it turns out that predicativity is unnecessary for the purely logical portion of CiC (that is, Prop).

Friday, 18 September 2015

pr.probability - A probability exercise related to Central Limit Thm

The fact that $b_n\to \infty$ is quite easy to check: if not, there is an $M$ and a subsequence $(b_{n'})$ which remains below $M$, hence $\frac 1{b_{n'}}\max_{1\leqslant j\leqslant n'}|X_j|\geqslant \frac 1{M}\max_{1\leqslant j\leqslant n'}|X_j|$, hence $\max_{1\leqslant j\leqslant n'}|X_j|$ would converge to $0$ in probability, which is not possible.



Let us denote convergence in distribution by $\Rightarrow$.



Theorem 7.6 in Durrett's book Probability: Theory and Examples (second edition) provides a useful theorem here:




The convergence of types theorem. Assume that $Y_n\Rightarrow Y$ and there are constants $\alpha_n>0$, $\beta_n$ such that $Y'_n=\alpha_nY_n+\beta_n\Rightarrow Y'$, where $Y$ and $Y'$ are not degenerate. Then there are $\alpha>0$ and $\beta\in\Bbb R$ such that $\alpha_n\to \alpha$ and $\beta_n\to \beta$.




The proof uses the fact that in the case of convergence in distribution, the sequence of corresponding characteristic functions actually converges uniformly on compact sets. Then one proves that there is a unique positive real number which can be a candidate for $\alpha$, and similarly for $\beta$.



Here, we use this theorem with $W_n:=\frac{S_n-a_n}{b_n}$, $\alpha_n:=\frac{b_n}{b_{n+1}}$ and $\beta_n:=\frac{a_n-a_{n+1}}{b_{n+1}}$. It works since $\frac{X_{n+1}}{b_{n+1}}$ goes to $0$ in probability.



Finally, we have to use independence in order to identify the obtained limits. We compute the characteristic functions of $W_n$ and of $W_{n+1}$, and take the limits.

ag.algebraic geometry - Degrees of etale covers of stacks

This is probably pretty basic, but as I said before, I'm just finding my way in the language of stacks.

Say you have an etale cover X->Y of stacks (in the etale site). Is there a standard way to define the degree of this cover? Here's my intuition: if X and Y are schemes, we can look etale locally, and then this cover is Yoneda-trivial in the sense of http://front.math.ucdavis.edu/0902.3464 , meaning that etale locally it is just a disjoint union of (d many) pancakes. Can we do this generally? Are there some "connectivity" conditions on Y for this to work? Is there a different valid definition for the degree of an etale cover of stacks?



Yoneda-Triviality



I figured that since nobody has answered so far, maybe I should write down what a possible Yoneda-triviality condition for stacks could look like:



Def: Call f:X->Y (stacks) Yoneda-trivial if there exists a set of sections of f, S, such that the natural map Y(Z)xS->X(Z) is an isomorphism (or maybe a bijection?) for any connected scheme Z.



But I'm still clinging to the hope that there's a completely different definition out there that I'm just not aware of.

ag.algebraic geometry - Real spectrum of ring of continuous semialgebraic functions

I don't agree with the preceding answer.



When $U$ is a locally compact semialgebraic set, then $\widetilde{U}$ equipped with its sheaf of semi-algebraic continuous functions is isomorphic to the affine scheme $\mathrm{Spec}(S^0(U))$. This is Proposition 6 in Carral, Coste: Normal spectral spaces and their dimensions, J. Pure Appl. Algebra 30 (1983) 227-235. In particular $\widetilde{U}$ is homeomorphic to the prime spectrum of $S^0(U)$, which is homeomorphic to its real spectrum. In case $U$ is not locally compact, the situation is different; there are more points in $\mathrm{Spec}(S^0(U))$.

at.algebraic topology - Proof of the triangulation of a surface

Moise, "Geometric topology in dimensions $2$ and $3$" is perhaps what is closest to what you are looking for.



Let me mention three other references.



P. Buser, "Geometry and Spectra of Compact Riemann Surfaces", proves the following strengthening of the triangulation result:



Any compact Riemann surface of genus greater than two admits a triangulation such that all triangles have sides of length less than $\log 4$ and area between 0.19 and 1.36.



There is a book by Munkres, "Elementary Differential Topology", that proves the more general fact that any smooth manifold is triangulable. It's long (it takes up more or less the whole book) but it is really interesting.



Also, the proof of the classification of smooth compact surfaces using Morse theory, in Hirsch's book "Differential Topology", is nice (though here again, it does more than just show that compact surfaces are triangulable).

Thursday, 17 September 2015

fourier analysis - Can the Walsh-Hadamard transform be used for convolution?

Not in the sense I think you mean it. First of all, the Walsh-Hadamard transform is a Fourier transform - but on the group (Z/2Z)^n instead of on the group Z/NZ. That means you can use it to compute convolutions on the space of functions (Z/2Z)^n -> C. Unfortunately, unlike the case of Z/NZ, you can't use this to approximate a compactly supported convolution on Z, at least not directly.
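To make the point concrete, here is a minimal Python sketch (my own illustration) of the kind of convolution the transform does compute: XOR-convolution on (Z/2Z)^n. The Walsh-Hadamard transform diagonalizes it exactly the way the DFT diagonalizes cyclic convolution.

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a sequence of length 2^n (unnormalized)."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly step
        h *= 2
    return a

def xor_convolve(f, g):
    """(f * g)[k] = sum over i XOR j == k of f[i]*g[j], via pointwise product
    in the Walsh-Hadamard domain. The inverse transform is the same transform
    divided by the length."""
    n = len(f)
    H = [x * y for x, y in zip(fwht(f), fwht(g))]
    return [v // n for v in fwht(H)]

# Cross-check against the brute-force definition:
f, g = [1, 2, 3, 4], [5, 6, 7, 8]
expected = [0] * 4
for i in range(4):
    for j in range(4):
        expected[i ^ j] += f[i] * g[j]
assert xor_convolve(f, g) == expected
```

Replacing `i ^ j` by `(i + j) % n` in the brute-force check gives cyclic convolution, which is what the usual FFT computes - the two transforms answer genuinely different questions.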

ac.commutative algebra - When is the radical of the extension of a prime ideal prime?

If you look at the situation geometrically, you have a morphism of affine schemes $Spec(S)\to Spec(R)$ and you are asking if the inverse image of the integral subscheme $V(\mathcal P) \subset Spec(R)$ is irreducible.
My intuition is that this happens only rarely.



For example, if $k$ is a field, consider the diagonal embedding $k \to k\times k$. You get an algebra which has all the properties you require: injectivity of the structure map, integrality, finite presentation (after all, it is a 2-dimensional vector space...). Yet the zero ideal of $k$ extends to the zero ideal in $k \times k$, which is reduced but not prime.



Still, in order to end on an optimistic note, here is a positive result. If you take for $S$ the polynomial ring $R[X_1,\ldots ,X_n]$ over $R$, then the extension of any prime ideal of $R$ will remain prime. This corresponds geometrically to the product of the subscheme $V(\mathcal P)$ with affine space $\mathbb A^n_R$.

Wednesday, 16 September 2015

elliptic curves - Zeros of the Weierstrass pe-function

This question was prompted by the post here, and I asked this earlier, deleted it, and due to pressure exerted by Ilya Nikokoshev, I am asking it again. Apologies to Pavel Etingof.



Q1. Let $\Lambda$ be a lattice in $\mathbb{C}$. We look at the behavior of the zero set of the Weierstrass $\wp$-function for this lattice. By integration around a unit cell for the lattice, we see that the numbers of poles and zeros are the same. So there have to be two zeros. We position the fundamental domain to be symmetric about $0$, and from the expression for $\wp$, we see that the zeros should be $z$ and $-z$ in case they are distinct. Otherwise, it is a double zero at one of the $2$-torsion points.




Now, in the case that the zero is not a double zero, can anything be said about its location from the knowledge of $\Lambda$?




Q2. This is stupid, but: what is the degree of the branched covering $\wp: \mathbb{C}/\Lambda \rightarrow \mathbb{P}^1(\mathbb{C})$? I must confess that I am not good at this stuff.

computer algebra - Software for rigorous optimization of real polynomials

I am looking for software that can find a global minimum of a polynomial function on R^n over a polyhedral domain (given by some linear inequalities, say). The number of variables n is not more than a dozen. I know it can be done in theory (by Tarski's elimination of quantifiers in real closed fields), and I know that the time complexity is awful. However, if there is a decent implementation that can handle a dozen variables with a clean interface, it would be great. I have tried the built-in implementations in Mathematica and Maple, and they do not appear to terminate on 4-5 variable instances.



If the software can produce some kind of concise "certificate" of its answer, it would be even better, but I am not sure what such a certificate should look like even in theory.



Edit: Convergence to the optimum is nice, but what I am really looking for is the ability to answer questions of the form "Is the minimum equal to 5?", where 5 is what I believe on a priori grounds to be the answer to the optimization problem (in particular, it is a rational number). That also explains why I want a certificate/proof of the inequality, or a counterexample if it is false.

Tuesday, 15 September 2015

ag.algebraic geometry - Why can the Dolbeault Operators be Realised as Lie Algebra Actions

These are the bells this rings, at least for the case of a Kähler manifold. It's perhaps not what you are thinking of, but it seems to provide an answer to the question in the title.



Among the many equivalent definitions of Kähler manifolds is one coming from physics, which says that




supersymmetry of a non-linear sigma model in four dimensions requires a Kähler target space.




Perhaps one should elaborate: a four-dimensional sigma model is described by an action functional for maps
$$X : \mathbb{R}^{3,1} \to M,$$
where $\mathbb{R}^{3,1}$ is Minkowski space-time and $(M,g)$ is a riemannian manifold. The supersymmetric extension consists of adding fermionic fields $\psi$, which are sections of $S \otimes X^*TM$, where $S$ is a spinor bundle on $\mathbb{R}^{3,1}$, in such a way that the resulting action is invariant under the Poincaré superalgebra.



This is a Lie superalgebra $\mathfrak{p} = \mathfrak{p}_0 \oplus \mathfrak{p}_1$, where $\mathfrak{p}_0$ is the Poincaré algebra and $\mathfrak{p}_1$ transforms as a spinor representation of the Lorentz subalgebra (with translations acting trivially). In four dimensions, the Lorentz subalgebra is isomorphic to $\mathfrak{so}(3,1)$.



There is a well-defined procedure of dimensional reduction by which you can take a four-dimensional action and construct a one-dimensional action by simply declaring that the fields do not depend on three of the Minkowski coordinates. If one does this to the above four-dimensional supersymmetric sigma model, one obtains a much studied supersymmetric quantum mechanical system. The canonical quantisation of this system results in a Hilbert space which is isomorphic to the space of $L^2$ complex differential forms on $M$, and a Hamiltonian which is the Hodge laplacian.
Under this identification, the Dolbeault operators correspond to the "supercharges" in the Poincaré superalgebra; i.e., to the action of $\mathfrak{p}_1$.



Further, the dimensional reduction (which has to choose a direction) breaks the Lorentz symmetry to an $\mathfrak{so}(2,1)$ subalgebra whose action on the Hilbert space corresponds, under the above identification, to the action of the Hodge-Lefschetz operators $L,\Lambda,H$. The Hodge identities are what is left of the Poincaré supersymmetry after dimensional reduction. The action of $\mathfrak{sl}(2)$ on the cohomology of a Kähler manifold is nothing else but the action of the residual Lorentz symmetry on the ground states of the quantum-mechanical system.



If I may be forgiven for pointing to my own work, this is explained in some detail in Supersymmetry and the cohomology of (hyper)Kähler manifolds written with Bill Spence and Chris Köhl back in 1997. (There we also treat the case of hyperkähler manifolds, which are the targets of six-dimensional supersymmetric sigma-models. Under dimensional reduction to one dimension, there is a similar story with a residual Lorentz symmetry which acts on the cohomology of a hyperkähler manifold, "explaining" an earlier result of Misha Verbitsky's.)

Monday, 14 September 2015

mg.metric geometry - Best fit for multiple shapes inside an area

Where is everybody? Only one answer, which hasn't received any upvotes or downvotes. I don't know any way to put this problem back on the radar screen other than posting an answer to it, so here's an extended version of the answer I already posted.



Garey and Johnson, Computers and Intractability, has a list of problems that are known to be NP-complete. Subset Sum is problem SP13 in that list, on page 223 of the book. I quote:



Instance: Finite set $A$, size $s(a)\in{\bf Z}^+$ for each $a\in A$, positive integer $B$.



Question: Is there a set $A'\subseteq A$ such that the sum of the sizes of the elements in $A'$ is exactly $B$?



Now let's look at a special case of the current MO question. Suppose that the multiple shapes are rectangles with base 1 and various integer heights, and the (big) rectangular area also has base 1 and integer height B. Suppose that there is an efficient way to determine the best fit of the shapes in the big rectangle (I am reinterpreting the original request for a "formula" as a request for an efficient method). Then you have an efficient way to solve Subset Sum. Namely, apply the alleged efficient best fit method to the data. If it finds a way to completely fill the big rectangle, then it has found a subset summing to $B$, and if it doesn't find a way to completely fill the big rectangle, then it has proved that there is no subset summing to $B$.
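The reduction can be made concrete with a small Python sketch (my own illustration; the solver below is deliberately brute-force and exponential, which is exactly the point of the NP-completeness argument):

```python
from itertools import combinations

def fills_exactly(heights, B):
    """Brute-force Subset Sum: can some subset of unit-width rectangles with
    the given integer heights exactly fill a 1 x B rectangle?  A hypothetical
    efficient 'best fit' routine for this special case would answer the same
    question in polynomial time, and hence decide Subset Sum efficiently."""
    for r in range(len(heights) + 1):
        for combo in combinations(heights, r):
            if sum(combo) == B:
                return True
    return False

assert fills_exactly([3, 5, 7], 12) is True   # 5 + 7 fills the strip exactly
assert fills_exactly([3, 5, 7], 11) is False  # no subset sums to 11
```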



It follows that there is no efficient method for finding a best fit in this special case of the original problem, unless P = NP.

Sunday, 13 September 2015

lo.logic - nontrivial theorems with trivial proofs

I like the connectedness argument, which follows straight from the axioms of a topology. A topological space $\left(\mathbf{X},\,\mathcal{T}\right)$ is connected iff $\mathbf{X}$ and $\emptyset$ are the only members of $\mathcal{T}$ which are both open and closed at once. $\mathbf{A} \subset \mathbf{X}$ is both open and closed iff its complement $\mathbf{X} \sim \mathbf{A}$ is also both open and closed, thus $\mathbf{X} = \mathbf{A} \bigcup \left(\mathbf{X} \sim \mathbf{A}\right)$ is not a union of disjoint open sets iff either $\mathbf{A} = \emptyset$ or $\mathbf{A} = \mathbf{X}$.



It is the main idea in "the" (I don't know of any others) proof that a connected topological group $\left(\mathfrak{G},\,\bullet\right)$ is generated by any neighbourhood $\mathbf{N}$ of the group's identity $e$, i.e. $\mathfrak{G} = \bigcup\limits_{k=1}^\infty \mathbf{N}^k$. Intuitively: you can't have a valid "neighbourhood" in the connected topological space without its containing "enough inverses" of its members to generate the whole group in this way.



For completeness, the proof runs: We consider the entity $\mathbf{Y} = \bigcup\limits_{k=1}^\infty \mathbf{N}^k$. For any $\gamma \in \mathbf{Y}$ the map $f_{\gamma} : \mathbf{Y} \to \mathbf{Y};\ f_{\gamma}(x) = \gamma^{-1} x$ is continuous, thus $f_{\gamma}^{-1}\left(\mathbf{N}\right) = \gamma \, \mathbf{N}$ contains an open neighbourhood $\mathbf{O}_{\gamma} \subseteq \gamma\,\mathbf{N}$ of $\gamma$, thus $\mathbf{Z} = \bigcup\limits_{\gamma \in \mathbf{Y}} \mathbf{O}_{\gamma}$ is open. Certainly $\mathbf{Y} \subseteq \mathbf{Z}$, but, since $\mathbf{Y}$ is the collection of all products of a finite number of members of $\mathbf{N}$, we have $\mathbf{Z} \subseteq \mathbf{Y}$, thus $\mathbf{Z} = \mathbf{Y}$ is open. If we repeat the above reasoning for members of the set $\mathfrak{G} \sim \mathbf{Y}$, we find that the complement of $\mathbf{Y}$ is also open, thus $\mathbf{Y}$, being both open and closed, must be the whole (connected) space $\mathfrak{G}$.



The above is one of my favourite proofs of all time, up there in my favourite thoughts with Beethoven's ninth and the Bangles' "Walk Like an Egyptian" (or anything by Captain Sensible), and it all hinges on the connectedness argument. It is extremely simple (not trivial, so it itself doesn't count for the Wiki, sadly) and its result unexpected and interesting: you can't define a neighbourhood without including enough inverses. This is an example of "homogeneity" at work: throwing the group axioms into another set of axioms makes a strong brew and tends to be the mathematical analogue of turfing a kilogram chunk of native sodium into a bucket of water: the group operation tends to clone structure throughout the whole space, thus not many axiom systems can withstand this assault by the cloning process and remain consistent. When all the bubbling, fizzing, toiling and trouble is over, only very special systems can be left, thus all kinds of unforeseen results are forced by homogeneity, and the above is a very excitingly typical one.

Saturday, 12 September 2015

soft question - What are the most fundamental classes of mathematical algorithms?

Mathematical means either useful for Mathematics or based on Mathematics.



I guess one has to include the following items:



(1) Euclid and LLL



(2) Newton and variations based on the existence of attracting fixpoints for maps.



(3) Algorithms for linear algebra and, in particular, FFT



(4) Simplex algorithm and other algorithms using convexity properties



(5) Quadrature formula (integration, differential equations etc.)



(6) Factorization (integers, polynomials etc.) and primality proving



(7) Algorithms for computations with Groebner bases.



(8) Sorting and searching



(9) WZ and similar algorithms yielding automated proofs



This list is surely a strict subset of a satisfying answer. Which important (classes of) algorithms are missing? (I guess there must also be something in probability and perhaps in geometry.)



Let me also specify that I would like the list to contain general-purpose algorithms, not algorithms for specific tasks, like encryption, error-correction, or computations of class numbers for number fields. I guess algorithms for elliptic curves are on the boundary and perhaps already slightly outside the list (many of them are however used for integer factorization and are thus implicitly contained in item (6) of the previous list).



A last question: Are we missing important fundamental algorithms? (This is of course tricky, since we are probably not aware of their potential usefulness.) By this I do not mean an algorithm for factorizing integers in polynomial time or an (impossible) algorithm deciding the existence of solutions of arbitrary Diophantine equations, but algorithms useful for a large class of problems for which we presently have only case-by-case tricks.

real analysis - Hanner's inequalities: the intuition behind them

First, a probably unsatisfactory answer to your first question. I believe that Hanner proved his inequalities because he was investigating the modulus of convexity of $L^p$. So possibly he formed a guess on whatever basis, saw that the guess would be correct if what have come to be known as Hanner's inequalities were true, and then found that he could prove the inequalities. (I haven't actually looked at Hanner's paper so I might be wrong about the history, and in any case this is the kind of thought process that is seldom recorded in papers.)



A better answer is that Hanner's inequalities generalize the parallelogram identity in a very natural way.



As for your second question, I don't know that this really fits what you're asking for either, but this paper contains the most natural-looking proof of Hanner's inequalities that I've seen.



Added: On further reflection, the best answer to the first question combines my first two paragraphs above. The parallelogram identity very precisely expresses the uniform convexity of $L^2$. So in investigating the modulus of convexity of $L^p$, it is natural to look for some inequality that generalizes the parallelogram identity to $L^p$.

set theory - Ultrafilters vs Well-orderings

Historically, it was proved that the ultrafilter lemma is independent from the axiom of choice by exhibiting a model in which there is an infinite Dedekind-finite set of real numbers, but every filter can be extended to an ultrafilter. (An infinite Dedekind-finite set is an infinite set which does not have a countably infinite subset.)



The existence of infinite Dedekind-finite sets negates not only the axiom of choice, but also the [much] weaker axiom of countable choice. These sets cannot be well-ordered, and since in such a model the real numbers have such a subset, they cannot be well-ordered themselves.



The proof was given by Halpern and Levy in 1964.

Reference request for category theory works which quickly prove the theorem which generalises the 1st isomorphism theorem for groups/rings/...

If you want a categorical proof that encompasses nonabelian groups and rings-without-identity, then you need to work with concepts such as "normal subobject" that are (I think) not really part of the category theory that most mathematicians routinely encounter. You may also find that a proof is not particularly enlightening at this level of generality. I think this might explain why a simultaneous generalization does not appear early in textbooks.



I'll consider the formulation of the first isomorphism theorem which states that for any homomorphism $f: A \to B$, $\ker(f): K \to A$ is a normal subobject, and the image of $f$ in $B$ is isomorphic to the quotient of $A$ by $K$. The formalism is roughly as follows: We work in a category in which every morphism has a kernel and a cokernel. Then any morphism $f: A \to B$ factors into a composition of $\mathrm{coker}(\ker(f)): A \to A/K$, $m: A/K \to N$, and $\ker(\mathrm{coker}(f)): N \to B$, and the diagram is defined up to unique isomorphism by the universal properties of kernel and cokernel. For the first part of the theorem, one typically defines a normal subobject to be a kernel (although there are alternative definitions involving congruences), so we reduce to the question of the image. Here we have a conflict of terminology, since the "image" is often defined in categories as the kernel of the cokernel, while the first isomorphism theorem interprets the image set-theoretically. Given this alternative interpretation, the theorem amounts to the assertion that $m$ and $\ker(\mathrm{coker}(f))$ are monomorphisms. Here we encounter another problem, since, as far as I know, $m$ does not need to be a monomorphism in general.



At this point, we need to check that the categories we like have kernels and cokernels. The kernel of a group homomorphism $f:A \to B$ is (the inclusion of) the preimage of the identity, and the cokernel of $f$ is given by taking the normal hull $N$ of $f(A)$ in $B$ and taking the quotient group $B/N$. We get our theorem because the maps of $f(A)$ into $N$ and of $N$ into $B$ are inclusions, hence monomorphisms. A similar argument works for rings-without-identity. For abelian groups and vector spaces, the map $m: f(A) \to N$ is an isomorphism. For fields, there are no kernels, so the first isomorphism theorem doesn't make sense.

Friday, 11 September 2015

ac.commutative algebra - Z_p flatness and irreducible components.

I just used the following.



Lemma. Let $A$ be a $\mathbb{Z}_p$-flat ring, of finite type over $\mathbb{Z}_p$, and suppose that $A \otimes \mathbb{F}_p$ is a domain. Then $A$ is a domain.



Proof: Suppose $ab = 0$ in $A$. Then one of $a, b$ must lie in $pA$, so we can write (without loss of generality) $a = p a_1$. Then by flatness $a_1 b=0$.



Continuing in this manner, we find that one of $a$ and $b$ must be infinitely divisible by $p$. But the finite type hypothesis implies that this is impossible unless one of $a$ or $b$ is in fact zero.



Given the statement, it seems like there should be a more conceptual reason why this should be true. Can anyone supply one? (A proof using more general facts in EGA counts as conceptual).



Edit: Kevin Buzzard gives a compelling reason why I have never seen this "fact" used before. Thank you both for your answers.



Edit 2: I suppose that replacing "finite type" with "$p$ in the radical" would work (with an application of the Krull intersection theorem). In particular, the result is true as stated for a local $\mathbb{Z}_p$-algebra.

ag.algebraic geometry - what mistakes did the Italian algebraic geometers actually make?

[Added disclaimer: What follows is the product of probably faulty memory combined with a
limited understanding in the first place, so should be taken with a grain of salt.]



Dear Kevin,



I believe that the Brill-Noether theory of curves gives the kind of examples you are looking for. (My understanding, probably imperfect if not completely wrong, is that they made certain general position arguments about the existence of linear systems that were just wrong, because they didn't realize that certain kinds of geometric conditions were universal, and so, although they look special, are in fact general.)



You might try looking at the old papers of Harris (or maybe Eisenbud and Harris) about linear systems on curves.



Also, the introduction (by Zariski) to Zariski's collected works is interesting. He began
in the Italian school, but then became instrumental in introducing algebraic tools.



Also, I think that the newest edition of his book on algebraic surfaces (a report on the results of the Italian school) has annotations by Mumford, which are very illuminating with regard to the differences and similarities between the Italian style and a more modern style.



P.S. Here's a way to imagine the kind of error one could make in general position arguments (although obviously any actual such error made by the Italians would be many times more subtle): Let $P_1,\ldots,P_8$ be eight points. Choose two elliptic curves $E_1$ and $E_2$ passing through the 8 points, and now try to choose them in general position (with respect to the property of containing the 8 points) so that the 9th point of intersection is in general position with regard to the $P_i$. This might seem plausibly possible if you don't think it through, but it is of course in fact impossible, because the 8 given points uniquely determine the 9th one. (The possible $E_i$ lie in a pencil.) My impression is that the Italians made errors of that sort, but in much more subtle contexts.