From talking to statisticians, the standard thing to do seems to be to assume the answer is yes. For any specified subsequence or thinning schedule, one can construct Markov chains that exhibit zero autocorrelation but a 'very large' amount of dependence despite the thinning; the idea, though, is that such chains should be pretty pathological, so you are probably 'pretty safe'. The examples that come to mind for me are chains like: $X_{t}$ is an iid fair 0-1 bit for $t$ not divisible by a million, and equal to the sum of the previous 999,999 values $X_{s}$ mod 2 for $t$ divisible by a million.
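To make this concrete, here is a minimal simulation sketch of that construction (Python/NumPy, my own toy code, with a block length of $N = 8$ standing in for the million so it runs quickly): every estimated autocorrelation comes out near zero, yet by construction every block of $N$ consecutive values has even parity, so the chain is about as dependent as it can be.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 8          # block length (stand-in for the "one million" in the text)
    T = 200_000    # total chain length

    # X_t is an iid fair 0-1 bit, except that every N-th value is the sum
    # of the previous N - 1 values mod 2 (i.e., their parity).
    x = np.empty(T, dtype=int)
    for t in range(T):
        if (t + 1) % N == 0:
            x[t] = x[t - (N - 1):t].sum() % 2   # parity of previous N-1 bits
        else:
            x[t] = rng.integers(0, 2)

    def autocorr(x, k):
        """Estimated lag-k autocorrelation."""
        c = x - x.mean()
        return (c[:-k] * c[k:]).mean() / c.var()

    # All estimated autocorrelations are ~0 ...
    print([round(autocorr(x, k), 3) for k in range(1, N + 1)])

    # ... yet every length-N block has even parity, so the last bit of each
    # block is a deterministic function of the N - 1 bits before it.
    blocks = x[: T - T % N].reshape(-1, N)
    print(np.all(blocks.sum(axis=1) % 2 == 0))   # True: total dependence

The pairwise covariances vanish because the parity of a set of fair coin flips is independent of any strict subset of them, even though it is completely determined by the full set.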
So, you won't get any sort of theoretical justification without telling us something about your chain, because there really are bad chains out there where what you want fails badly... but in the real world, you can probably wave your hands. Note that just increasing your thinning isn't enough to get rid of this: for any finite amount of thinning, there are bad chains that horribly violate independence despite having zero covariance.
There is a big world of convergence diagnostics out there. Gelman-Rubin, as pointed out, is a standard. Some other standards include the Geweke test, the Yu-Mykland cusum test, Raftery-Lewis, and Heidelberger-Welch. There is a lot of literature on each of these, but with the slight exception of some easy calculations in Yu-Mykland, it is all extremely handwavey.
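As a concrete illustration of the most standard of these, here is a minimal sketch (Python/NumPy; the function name and toy data are my own) of the original Gelman-Rubin potential scale reduction factor $\hat{R}$, ignoring the degrees-of-freedom correction from the original paper: run several chains from overdispersed starting points, compare between-chain and within-chain variance, and treat $\hat{R}$ near 1 as consistent with convergence (a common rule of thumb worries about $\hat{R} > 1.1$).

    import numpy as np

    def gelman_rubin(chains):
        """Basic Gelman-Rubin potential scale reduction factor.

        chains: array of shape (m, n) -- m independent chains of length n,
        ideally started from overdispersed initial points.
        """
        chains = np.asarray(chains, dtype=float)
        n = chains.shape[1]
        B_over_n = chains.mean(axis=1).var(ddof=1)   # variance of chain means
        W = chains.var(axis=1, ddof=1).mean()        # avg within-chain variance
        var_hat = (n - 1) / n * W + B_over_n         # pooled variance estimate
        return np.sqrt(var_hat / W)                  # ~1 once chains agree

    rng = np.random.default_rng(1)

    # Four chains drawing from the same distribution: R-hat is ~1.00.
    mixed = rng.normal(size=(4, 1000))
    print(gelman_rubin(mixed))

    # Four chains stuck near different locations (think separated modes):
    # between-chain variance dominates and R-hat lands far above 1.
    stuck = rng.normal(size=(4, 1000)) + np.array([[-3.0], [-1.0], [1.0], [3.0]])
    print(gelman_rubin(stuck))

Note that this only detects disagreement in whatever scalar you monitor; chains can agree on one functional and disagree on another, which is one reason all of these diagnostics are heuristic.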