Monday, 29 June 2015

Lower bound on the convergence rate of a specific Markov chain

I have a Markov chain $\mathbf{A} = (A_0, A_1, \ldots)$ with state space $\{0, \ldots, n\}$ which converges towards a stationary distribution $\pi$. There are a lot of well-known results on upper-bounding the convergence rate. However, I'm interested in getting a lower bound.

In detail, the problem looks like this: the transition probabilities are given by

$p_{ij} = {n \choose i}\left(1-\left({1\over 2}\right)^j\right)^i\left(\left({1\over 2}\right)^j\right)^{n-i}$ for $j \neq 0$,

$p_{ij} = {n \choose i}\left(1-\left({1\over 2}\right)^n\right)^i\left(\left({1\over 2}\right)^n\right)^{n-i}$ for $j = 0$,

and the distribution of $A_0$ is $(1, 0, \ldots, 0)^T$, i.e., the chain starts in state $0$ with probability $1$.
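For concreteness, here is a small sketch of how the chain can be set up numerically. It assumes $p_{ij}$ denotes the transition probability from state $j$ to state $i$ (which matches the formula, since for fixed $j$ it is a binomial distribution in $i$); if the index convention is the other way around, transpose the matrix.

```python
import numpy as np
from math import comb

def transition_matrix(n):
    """Row j is the distribution of the next state given current state j.

    Reading of the question: from state j the next state is
    Binomial(n, 1 - (1/2)^j), except that j = 0 uses exponent n.
    """
    P = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        q = 0.5 ** (n if j == 0 else j)   # (1/2)^j, with j = 0 mapped to n
        for i in range(n + 1):
            P[j, i] = comb(n, i) * (1 - q) ** i * q ** (n - i)
    return P

P = transition_matrix(5)
print(np.allclose(P.sum(axis=1), 1.0))  # each row sums to 1 (binomial theorem)
```

Since every entry of $P$ is strictly positive, the chain is ergodic and the stationary distribution $\pi$ exists and is unique.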

Given this information, is it possible to derive a lower bound on the convergence rate? Since I'm particularly interested in state 0, I would like to come up with something like this:

$\lvert \mathbb{P}(A_k = 0) - \pi_0 \rvert \geq \ldots$ some function of $k$ and $n$.
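Before trying to prove such a bound, it may help to look at the true decay numerically: $\mathbb{P}(A_k = 0)$ can be computed exactly by powering the transition matrix, and $\pi_0$ approximated by running the chain far past its mixing time. This is only an exploratory sketch (it again assumes that $p_{ij}$ is the probability of a transition from state $j$ to state $i$, since for fixed $j$ the formula is a binomial distribution in $i$):

```python
import numpy as np
from math import comb

def transition_matrix(n):
    # Row j: next state ~ Binomial(n, 1 - (1/2)^j); j = 0 uses exponent n.
    P = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        q = 0.5 ** (n if j == 0 else j)
        for i in range(n + 1):
            P[j, i] = comb(n, i) * (1 - q) ** i * q ** (n - i)
    return P

n = 3
P = transition_matrix(n)

# Approximate pi by the distribution after many steps (chain is ergodic).
pi = np.linalg.matrix_power(P, 100)[0]

mu = np.zeros(n + 1)
mu[0] = 1.0                        # A_0 = 0 with probability 1
gaps = []
for k in range(1, 8):
    mu = mu @ P                    # distribution of A_k
    gaps.append(abs(mu[0] - pi[0]))
    print(k, gaps[-1])
```

The ratio of successive gaps settles towards the second-largest eigenvalue modulus of $P$, so the empirical slope on a log plot suggests what exponent a matching lower bound would have to have.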

Any hints on how to approach this would be appreciated. Please also speak up if you think that it is unlikely that such a closed-form expression exists and I should stop wasting my time on this problem. I'm a computer scientist, not a mathematician, so it's quite possible that I've overlooked something obvious.
