A basic principle of first order logic is that, if a theorem follows from a list of axioms, then it follows from some finite subset of that list. This can often give nontrivial consequences. For example:
(1) If $T$ is any true first order statement about characteristic zero fields, then there is a constant $N$ so that $T$ is true for fields of characteristic $>N$.
Proof of (1): take your list of axioms to be the standard field axioms plus, for each prime $p$, the axiom $1+1+1+\cdots+1 \neq 0$, where the sum has $p$ ones.
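To make the compactness step explicit (the name $\chi_p$ is my own notation, not a standard one): for each prime $p$, write

$$\chi_p: \quad \underbrace{1+1+\cdots+1}_{p\ \text{ones}} \neq 0.$$

A proof of $T$ from the field axioms together with all the $\chi_p$ can only invoke finitely many of the $\chi_p$; if $N$ is larger than every prime that appears, then any field of characteristic $>N$ satisfies all of the axioms actually used, so $T$ holds there.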
A similar example is:
(2) If $T$ is any true first order statement about algebraically closed fields, there is a constant $N$ such that $T$ is true for any field $K$ in which every polynomial of degree $\leq N$ has a root.
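For concreteness, here is a sketch of why the hypothesis on $K$ is first order (the name $\alpha_n$ is mine, and I restrict to monic polynomials, which is enough): for each $n \geq 1$, let

$$\alpha_n: \quad \forall a_0 \cdots \forall a_{n-1}\ \exists x\ \left( x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0 = 0 \right).$$

The algebraically closed field axioms are the field axioms together with all of the $\alpha_n$; a proof of $T$ uses only finitely many of them, and $N$ can be taken to be the largest $n$ that appears.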
In both of these cases, it can be easier to (informally) prove a result about characteristic zero/algebraically closed fields than it is to extract the constant $N$.
Here is a different example. Recall that Ramsey's theorem (in a special case) says:
(3) For any positive integer $N$, there is an integer $M$ such that, for any bicoloring of the complete graph on $M$ vertices, there is a monochromatic complete subgraph on $N$ vertices.
As I understand it, the original proof was to show
(3') For any bicoloring of the complete graph on infinitely many vertices, there is a monochromatic infinite complete subgraph.
This clearly implies
(3'') For any positive integer $N$, and any bicoloring of the complete graph on infinitely many vertices, there is a monochromatic complete subgraph on $N$ vertices.
Writing (3'') formally, you wind up with an infinite sequence of axioms saying that the graph has more than $k$ vertices, one for each $k$. Since only finitely many of those axioms are used in the proof of (3''), we see that (3) must hold. Of course, there are now direct proofs of (3), but my understanding is that the proof via logic came first.
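Here is a sketch of what "writing (3'') formally" might look like (the names $\sigma_k$ and $\rho_N$ are mine): use a language with one binary relation $R$, where $R(x,y)$ means "the edge $xy$ is red", together with an axiom saying $R$ is symmetric. The statement that the graph has more than $k$ vertices is

$$\sigma_k: \quad \exists x_1 \cdots \exists x_{k+1} \bigwedge_{i < j} x_i \neq x_j,$$

and "there is a monochromatic complete subgraph on $N$ vertices" is a single sentence $\rho_N$: there exist distinct $x_1, \ldots, x_N$ such that either $R(x_i,x_j)$ holds for all $i \neq j$, or $\neg R(x_i,x_j)$ holds for all $i \neq j$. Statement (3'') says that $\rho_N$ follows from the $\sigma_k$; a proof uses only finitely many of them, and any bicolored complete graph on $M$ vertices, with $M$ larger than every $k$ that appears, satisfies those $\sigma_k$ and hence $\rho_N$, which gives (3) with this value of $M$.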