Let $\omega(n)$ denote the number of distinct prime factors of $n$ and $\Phi(\cdot)$ denote the distribution function of the standard normal (mean $0$, variance $1$). Then, uniformly in $t$, the proportion of integers $n \leq x$ with $\omega(n) \leq \log\log n + t \sqrt{\log\log n}$ is $$\Phi(t) + O\left(\frac{1}{\sqrt{\log\log x}}\right)$$ as $x \rightarrow \infty$. The error term is sharp.
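One can watch this theorem take shape numerically. The sketch below (an illustration of the statement, not of any proof) sieves out $\omega(n)$ for $n \leq x$ and compares the empirical proportion with $\Phi(t)$; with the feasible values of $x$ the agreement is rough, as the $1/\sqrt{\log\log x}$ error term predicts.

```python
# Numerical illustration of the theorem above: compare the proportion of
# n <= x with omega(n) <= loglog n + t*sqrt(loglog n) against Phi(t).
# Convergence is very slow, since the error is O(1/sqrt(loglog x)).
import math

def omega_sieve(x):
    """Return w with w[n] = number of distinct prime factors of n, 0 <= n <= x."""
    w = [0] * (x + 1)
    for p in range(2, x + 1):
        if w[p] == 0:  # p has no smaller prime factor, so p is prime
            for m in range(p, x + 1, p):
                w[m] += 1
    return w

def Phi(t):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def empirical_cdf(x, t):
    """Proportion of 3 <= n <= x with omega(n) <= loglog n + t*sqrt(loglog n)."""
    w = omega_sieve(x)
    count = 0
    for n in range(3, x + 1):  # start at 3 so that loglog n is defined and positive
        ll = math.log(math.log(n))
        if w[n] <= ll + t * math.sqrt(ll):
            count += 1
    return count / x

if __name__ == "__main__":
    x = 10**5
    for t in (-1.0, 0.0, 1.0):
        print(t, round(empirical_cdf(x, t), 3), round(Phi(t), 3))
```

The sieve is the only nontrivial ingredient: marking every multiple of every prime costs $O(x \log\log x)$ additions, which is fast enough for the small $x$ used here.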
This particular error term was conjectured by LeVeque in the 1940s. His conjecture was settled a few years later by Erdős and Rényi.
Before Erdős and Rényi's paper the best known error term was $O(\log\log\log x / \sqrt{\log\log x})$, due (if I recall correctly) to Kubilius. Kubilius's method was probabilistic in origin and relied in an essential way on truncating the "random variable" $\omega(n)$. This truncation introduced the additional factor of $\log\log\log x$ into the error term since, inevitably, truncation leads to a loss of information.
In contrast, Erdős and Rényi's method was purely analytic: the idea was to estimate $\sum_{n \leq x} \exp(\text{i}t \omega(n))$ uniformly in $t$ in a certain range, and to extract the desired conclusion from the behavior of this sum. To this end they applied the Berry--Esseen theorem, though in principle one could take a more hands-on approach: smooth the indicator function of $\omega(n) \leq \log\log n + t \sqrt{\log\log n}$, express it by a variant of Perron's formula, and, after summing the resulting expression over $n \leq x$, proceed with the saddle-point method.
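The role of the Berry--Esseen theorem here can be made explicit. Writing $F_x(t)$ for the proportion of $n \leq x$ with $\omega(n) \leq \log\log x + t\sqrt{\log\log x}$, and $$\varphi_x(\tau) = \frac{1}{x} \sum_{n \leq x} \exp\left(\text{i}\tau \, \frac{\omega(n) - \log\log x}{\sqrt{\log\log x}}\right),$$ the standard Esseen smoothing inequality (the textbook form with an unspecified absolute constant, not a formula quoted from their paper) gives, for any $T > 0$, $$\sup_{t} \left|F_x(t) - \Phi(t)\right| \ll \int_{-T}^{T} \left|\frac{\varphi_x(\tau) - e^{-\tau^2/2}}{\tau}\right| \, d\tau + \frac{1}{T}.$$ Thus a uniform estimate for the exponential sum $\sum_{n \leq x} \exp(\text{i}t\omega(n))$ translates directly into an error term for the distribution function, and choosing $T$ of order $\sqrt{\log\log x}$ is what produces the $O(1/\sqrt{\log\log x})$ above.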
However, when purely analytic methods are not directly available, Kubilius's method is the canonical one. For example, suppose you want to investigate the distribution of $\omega(n)$ over a peculiar subset of $[1,x]$ on which sieve methods -- but not "heavy" analytic methods -- are applicable; then Kubilius's method is still your best bet.
(As far as terminology is concerned, it is the "Kubilius model" rather than "Kubilius's method"; more details about the Kubilius model can be found in Volume 1 of Elliott's "Probabilistic Number Theory".)