You are probably interested in an answer from measure theory or probability, and I would welcome such an answer too, but let me give you one from set theory, which I believe offers a precise answer in something like the sense you intend in your question.
In set theory, the method of forcing is fundamentally about different kinds of randomness or genericity. Specifically, in order to carry out a forcing argument, one must first specify the notion of forcing or genericity to be used. This is done by specifying a partial order or Boolean algebra, whose natural topology determines a notion of dense set; generic objects are those meeting every dense set from the relevant collection. In the fully general case, one has a model of set theory V and adjoins an object G that is V-generic, in the sense that G is a filter meeting every dense set in V. The generic extension V[G] is a new model of set theory containing this new generic object G. The overall construction is something like a field extension: V sits inside V[G], which is the smallest model of ZFC containing V and G, and everything in V[G] is constructible algebraically from G and objects in V. Forcing was first used to prove the consistency of ZFC with the negation of the Continuum Hypothesis; the notion of forcing used there had the effect of adding ω2 many new real numbers, and the generic object was in effect a list of ω2 many new reals, forcing CH to fail.
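As a toy illustration of the mechanics of meeting dense sets, here is a minimal sketch (my own code, with hypothetical names, not part of the mathematics above) in the style of the Rasiowa–Sikorski lemma, using Cohen forcing where conditions are finite binary strings ordered by extension. A genuinely V-generic filter must meet every dense set in V, which no computation can do; the sketch merely meets a finite list of open dense sets to build a finite approximation to a generic real.

```python
# Toy sketch: meeting dense sets in Cohen forcing (all names hypothetical).
# Conditions are finite binary strings; a stronger condition extends a weaker one.

def meet_dense_sets(dense_sets, start=""):
    """Descend through conditions, strengthening into each dense set in turn."""
    p = start
    for meet in dense_sets:
        p = meet(p)  # density: every condition has an extension in the set
    return p

# D_n: conditions of length at least n (the generic real is decided everywhere).
length_dense = [lambda p, n=n: p + "0" * max(0, n - len(p)) for n in range(8)]

# E_n: conditions with a '1' at some position >= n
# (so the generic real has infinitely many 1s).
one_dense = [
    lambda p, n=n: p if "1" in p[n:] else p + "0" * max(0, n - len(p)) + "1"
    for n in range(8)
]

g = meet_dense_sets(length_dense + one_dense)
print(g)  # a single condition lying in all sixteen dense sets
```

Since these dense sets are open (closed under extension), meeting them one after another suffices: each later strengthening preserves membership in the earlier sets, which is exactly why the generated filter meets them all.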
The point is that in order to control the nature of the forcing extension V[G], one must carefully control the forcing notion, that is, the notion of randomness that will be used to build V[G].
It turns out that many of the most fruitful forcing notions involve a notion of genericity or randomness on the reals. For example, with Cohen forcing, a V-generic real is one that lies in every comeager set in V, while a random real is one that lies in every Lebesgue measure-one set in V. There are dozens of different forcing notions (see the list of forcing notions), and these have been proved to be fundamentally different, in the sense that a filter that is generic for one of them is never generic for another. With forcing, we can add Cohen reals, random reals, Mathias reals, Laver reals, dominating reals and so on, and these notions have no generic instances in common.
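The two characterizations of genericity on the reals mentioned above can be stated side by side; this is a standard formulation, where "coded in V" means that a Borel code for the set belongs to V:

```latex
\begin{align*}
c \text{ is Cohen-generic over } V
  &\iff c \in B \text{ for every comeager Borel set } B \text{ coded in } V\\
  &\iff c \notin B \text{ for every meager Borel set } B \text{ coded in } V,\\[4pt]
r \text{ is random over } V
  &\iff r \in B \text{ for every measure-one Borel set } B \text{ coded in } V\\
  &\iff r \notin B \text{ for every Lebesgue-null Borel set } B \text{ coded in } V.
\end{align*}
```

The parallel form makes the analogy plain: the two notions differ only in which σ-ideal of "small" sets the real is required to avoid, meager versus null.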
There isn't any probability space here to speak of, and being V-generic for a particular notion of forcing is not equivalent to any probabilistic property. Rather, one is tailoring the notion of forcing to describe a certain kind of randomness or genericity that is then used to build the forcing extension. Most of the detailed care in a forcing argument is about choosing the forcing notion and making sure that it works as desired.
Thus, since each notion of forcing corresponds fundamentally to a notion of genericity or randomness, I take the proofs that these various forcing notions are distinct as an answer to your question.