Thursday, 16 November 2006

oc.optimization control - Convergence to a (unique?) fixed point?

Consider a given $N\times P$ matrix $X$ (full rank with columns ${\bf x}_p$, $p=1,\ldots,P$), a given vector ${\bf y}\in \mathbb{R}^N$ and a thresholding function $\eta_\lambda(|x|)=(|x|-\lambda)_+$ with $\lambda>0$.



Start with a vector ${\boldsymbol \alpha}\in \mathbb{R}^P$, define ${\bf r}_{-p}({\boldsymbol \alpha})={\bf y}-X{\boldsymbol \alpha}+\alpha_p{\bf x}_p$, and define a sequence of vectors that changes only one entry of the current iterate, say the $p$th one, to
$$
\alpha_p^{\rm new} = \frac{{\bf r}_{-p}({\boldsymbol \alpha}^{\rm old})^{\rm T}{\bf x}_p}{\|{\bf x}_p\|_2^2}\left\{\frac{\eta_\lambda(|{\bf r}_{-p}({\boldsymbol \alpha}^{\rm old})^{\rm T}{\bf x}_p|)}{|{\bf r}_{-p}({\boldsymbol \alpha}^{\rm old})^{\rm T}{\bf x}_p|}\right\}^\gamma.
$$
Repeat with a cycling rule, successively letting $p=1,2,\ldots,P,1,2,\ldots$
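A minimal sketch of this cyclic update in Python/NumPy (the function names and the choice of a fixed number of sweeps in place of a convergence test are illustrative, not from the post):

```python
import numpy as np

def eta(t, lam):
    """Thresholding function eta_lambda(t) = (t - lambda)_+ for t >= 0."""
    return np.maximum(t - lam, 0.0)

def cyclic_update(X, y, alpha0, lam, gamma, n_sweeps=500):
    """Cyclically apply the post's single-coordinate update rule.

    For each p:  c = r_{-p}^T x_p  with  r_{-p} = y - X alpha + alpha_p x_p,
    then  alpha_p <- (c / ||x_p||^2) * (eta_lam(|c|) / |c|)^gamma.
    gamma=0 gives plain least-squares coordinate descent (note 0.0**0 == 1.0
    in Python, so the shrinkage factor is always 1); gamma=1 gives the
    soft-thresholding (lasso) coordinate update.
    """
    alpha = np.asarray(alpha0, dtype=float).copy()
    P = X.shape[1]
    for _ in range(n_sweeps):
        for p in range(P):
            r = y - X @ alpha + alpha[p] * X[:, p]   # partial residual r_{-p}
            c = r @ X[:, p]
            if c == 0.0:
                alpha[p] = 0.0
                continue
            shrink = (eta(abs(c), lam) / abs(c)) ** gamma
            alpha[p] = (c / (X[:, p] @ X[:, p])) * shrink
    return alpha
```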



Note that the case $\gamma=0$ amounts to solving the least squares problem
$$
\min_{{\boldsymbol \alpha}\in \mathbb{R}^P} \|{\bf y}-X{\boldsymbol \alpha}\|_2^2,
$$
by a coordinate descent algorithm; also the case $\gamma=1$ amounts to solving the $\ell_1$-penalized least squares problem
$$
\min_{{\boldsymbol \alpha}\in \mathbb{R}^P} \frac{1}{2}\|{\bf y}-X{\boldsymbol \alpha}\|_2^2 + \lambda\|{\boldsymbol \alpha}\|_1,
$$
by a coordinate descent algorithm. Fact: both algorithms converge to a unique point, namely the minimizer of the corresponding optimization problem.
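To spell out the $\gamma=1$ case (a standard computation, added here for the reasoning): writing $c={\bf r}_{-p}({\boldsymbol \alpha}^{\rm old})^{\rm T}{\bf x}_p$, the update becomes
$$
\alpha_p^{\rm new}=\frac{c}{\|{\bf x}_p\|_2^2}\cdot\frac{\eta_\lambda(|c|)}{|c|}=\frac{{\rm sign}(c)\,(|c|-\lambda)_+}{\|{\bf x}_p\|_2^2},
$$
which is exactly the soft-thresholding operator, i.e. the minimizer of $\frac{1}{2}\|{\bf r}_{-p}-\alpha_p{\bf x}_p\|_2^2+\lambda|\alpha_p|$ over the single coordinate $\alpha_p$. This is why cycling these updates is coordinate descent on the $\ell_1$-penalized problem.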



Questions: (1) For a given starting vector ${\boldsymbol \alpha}$, does the sequence converge to a fixed point for all $\gamma$'s? (2) If so, is the fixed point unique (regardless of the starting vector)?
