In Freedman's series of three books on Markov processes, I keep running into terms like:
$$P\Big[\max_{0 \leq s \leq 1,\ s \leq t \leq rs} |B(t) - B(s)| > \epsilon\Big]$$
in the background of proofs I'm reading. As Freedman mentions in B+D (19), it's easy to see that for every fixed $\epsilon > 0$ this goes to 0 as $r$ goes to 1. It's not so easy for me to see how to get any even somewhat reasonable bounds on this probability for fixed $\epsilon$ and $r$. Does anybody have any suggestions (or out-of-the-box theorems) for such bounds?
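(For what it's worth, the way I read the easy claim is this, assuming $r > 1$: for $0 \leq s \leq 1$ and $s \leq t \leq rs$ we have $0 \leq t - s \leq s(r-1) \leq r - 1$ and $t \leq r$, so

$$\Big\{ \max_{0 \leq s \leq 1,\ s \leq t \leq rs} |B(t) - B(s)| > \epsilon \Big\} \subseteq \Big\{ \max_{0 \leq s \leq t \leq r,\ t - s \leq r - 1} |B(t) - B(s)| > \epsilon \Big\},$$

and the probability of the right-hand event goes to 0 as $r \to 1$ because Brownian paths are a.s. uniformly continuous on, say, $[0, 2]$.)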
PS: I've tried only a few silly things, e.g. trying to approximate the probability by finite sums and use Donsker's invariance principle, and asking some people around here about martingale tricks, but no luck so far.
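For concreteness, the kind of finite-sum approximation I mean is roughly the following Monte Carlo sketch (the function name, the grid size, the number of paths, and the values $r = 1.04$, $\epsilon = 0.1$ in the example call are all arbitrary illustrative choices; this of course only estimates $p(r, \epsilon)$, it doesn't bound it):

import numpy as np

# Crude Monte Carlo estimate of
#   p(r, eps) = P[ max_{0 <= s <= 1, s <= t <= r*s} |B(t) - B(s)| > eps ],
# using a scaled random walk on an n-point grid over [0, r] as a stand-in for B.
def estimate_p(r, eps, n=1000, n_paths=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = r / n                                   # grid covers [0, r], so t = r*s stays on the grid
    hits = 0
    for _ in range(n_paths):
        steps = rng.normal(0.0, np.sqrt(dt), size=n)
        B = np.concatenate(([0.0], np.cumsum(steps)))   # B[k] approximates B(k*dt)
        exceeded = False
        for i in range(n + 1):
            s = i * dt
            if s > 1.0:                          # only s in [0, 1] matters
                break
            j = min(int(np.floor(r * i)), n)     # largest grid index with j*dt <= r*s (up to rounding)
            if j > i and np.max(np.abs(B[i:j + 1] - B[i])) > eps:
                exceeded = True
                break
        hits += exceeded
    return hits / n_paths

print(estimate_p(r=1.04, eps=0.1))

The trouble is that this only produces a number for one particular $(r, \epsilon)$ and says nothing rigorous, which is why I'm hoping for an actual bound.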
EDIT: A clarification in response to the first answer. I know that this probability (call it $p(r, \epsilon)$) goes to 0 as $r$ goes to 1 for every fixed $\epsilon$, but I'm interested in statements like $p(0.04, 0.000000001) < 0.02$, or (more optimistically) $p(r, \epsilon) < 500 \frac{\log(\log(r-1))}{\epsilon}$. I don't have any reason to believe those two bounds are true (though both seem plausible to me); they are merely illustrative.