The classical approach is to build a Neyman-Pearson-style hypothesis test (warning: incredibly ugly mathematics, in desperate need of replacement, but ubiquitous).
Say you rolled your die $N$ times to produce the count vector $X = (X_1, \dots, X_6)$. Let the multinomial distribution have parameters $(p_1, p_2, \dots, p_6)$, where $\sum_i p_i = 1$. Then construct a one-dimensional measure such as $Q = \|X/N - p\|$, using your favorite $p$-norm, and calculate the probability distribution of $Q$.
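As a concrete aside, not part of the original recipe: the distribution of $Q$ is awkward to get in closed form, so in practice you would probably just simulate it. Here is a minimal Python sketch; the number of rolls, the number of replicates, and the choice of the 1-norm are all assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 600                  # rolls per experiment (assumed for illustration)
    p = np.full(6, 1 / 6)    # fair-die cell probabilities
    n_sim = 100_000          # Monte Carlo replicates

    # Simulate count vectors X under the fair-die multinomial and compute
    # Q = ||X/N - p|| for each replicate (1-norm chosen arbitrarily).
    X = rng.multinomial(N, p, size=n_sim)
    Q_null = np.linalg.norm(X / N - p, ord=1, axis=1)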
Your null hypothesis in this case is $p_i = \frac{1}{6}$ for all $i$. For a test at level of significance $\alpha$ (conventionally 0.05 or 0.01), there is a region $[a,b]$ such that $\int_a^b f_Q(x)\,dx = 1 - \alpha$, where $f_Q$ is the density of $Q$ under the null. Actually, there are many such regions, and there are other criteria for choosing among them. In your case, invariance might be a good one: you expect the whole problem to be symmetric if you let $Q$ go to $-Q$, in which case the interval should be symmetric about 0, i.e., $[-a,a]$.
For the value of $Q$ computed from your data, do the integral over $[-Q,Q]$ and call the result $1 - \alpha$. That $\alpha$ is the lowest level of significance at which the observed data will be significant; in other words, it is the p-value.
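Putting the pieces together, here is a self-contained sketch of the whole procedure, with a made-up roll tally and the same illustrative choices as above (1-norm, $\alpha = 0.05$): it simulates the null distribution of $Q$, reads off the critical value, and reports the Monte Carlo p-value for the observed counts.

    import numpy as np

    rng = np.random.default_rng(0)
    p = np.full(6, 1 / 6)                              # null: fair die
    observed = np.array([110, 95, 100, 88, 102, 105])  # hypothetical tally of 600 rolls
    N = observed.sum()

    # Observed statistic Q = ||X/N - p|| (1-norm, as in the sketch above).
    Q_obs = np.linalg.norm(observed / N - p, ord=1)

    # Null distribution of Q by simulation.
    Q_null = np.linalg.norm(rng.multinomial(N, p, size=100_000) / N - p,
                            ord=1, axis=1)

    # Level-0.05 test: reject fairness when Q_obs exceeds the critical value.
    alpha = 0.05
    q_crit = np.quantile(Q_null, 1 - alpha)

    # Monte Carlo p-value: the smallest alpha at which these counts would be rejected.
    p_value = np.mean(Q_null >= Q_obs)
    print(f"Q_obs = {Q_obs:.4f}, critical value = {q_crit:.4f}, p-value = {p_value:.3f}")

Rejecting whenever the p-value falls below $\alpha$ is the same thing as checking whether $Q$ lands outside the acceptance region.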
As I said, classical hypothesis testing is a very ugly theory. There are other approaches, such as minimax tests, which you can construct via Bayes priors, since the set of all Bayes procedures contains, but is usually not much larger than, the set of all admissible statistical procedures.