If $A$ is your target covariance matrix and $LL^T = A$, and $x = (x_1, \ldots, x_n)$ is a vector of independent random variables with mean zero and variance 1, then $y = Lx$ has the required covariance. Here $L$ is a matrix and $L^T$ is its transpose; $L$ can just be the Cholesky factor of $A$. (Check: $\mathrm{cov}(y) = E[yy^T] = E[(Lx)(Lx)^T] = E[Lxx^TL^T] = L\,E[xx^T]\,L^T$ (by linearity of expectation) $= L\,\mathrm{cov}(x)\,L^T = LIL^T = LL^T = A$. Here $\mathrm{cov}(y) = E[yy^T]$ because $y$ has mean 0, and likewise for $\mathrm{cov}(x)$.)
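A minimal sketch of the construction in NumPy, using independent standard normals for $x$ and a small $2 \times 2$ matrix $A$ chosen purely for illustration (any symmetric positive-definite matrix would do):

```python
import numpy as np

# Hypothetical target covariance matrix A (symmetric positive definite).
A = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# Cholesky factor: A = L @ L.T, with L lower triangular.
L = np.linalg.cholesky(A)

rng = np.random.default_rng(0)
n_samples = 200_000

# x: rows are independent standard-normal variables (mean 0, variance 1).
x = rng.standard_normal((2, n_samples))

# y = L x: each column of y is a sample with covariance A.
y = L @ x

sample_cov = np.cov(y)
print(sample_cov)  # close to A for large n_samples
```

With 200,000 samples the empirical covariance of `y` matches $A$ to within sampling error.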
That's not too far from a "complete" solution, actually. If you start with a vector $y$ of random variables with mean zero and covariance matrix $A$, then if $A = LL^T$ and $x = L^{-1}y$, it follows that $\mathrm{cov}(x) = I$. That doesn't necessarily imply that the components of $x$ are independent; it only means they are uncorrelated. So the most general construction is to begin with a vector $x$ of uncorrelated random variables with mean zero and variance 1 and let $y = Lx$. (I only mean that every example can theoretically be obtained that way, not that it's necessarily the best or most computationally efficient way to do it.)
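The reverse direction (whitening) can be sketched the same way: given samples $y$ with covariance $A$, applying $L^{-1}$ yields components with identity covariance. The matrix $A$ below is a hypothetical example, and the triangular solve stands in for $L^{-1}y$ without forming the inverse explicitly:

```python
import numpy as np

# Hypothetical covariance matrix for the correlated vector y.
A = np.array([[2.0, 0.8],
              [0.8, 1.0]])
L = np.linalg.cholesky(A)  # A = L @ L.T

rng = np.random.default_rng(1)
# Draw correlated samples y with covariance A (via the forward construction).
y = L @ rng.standard_normal((2, 200_000))

# Whiten: x = L^{-1} y, computed as a triangular solve rather than inverting L.
x = np.linalg.solve(L, y)

print(np.cov(x))  # close to the 2x2 identity
```

Solving the triangular system is cheaper and more numerically stable than computing `np.linalg.inv(L)` and multiplying.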