Proximal gradient methods for learning


Proximal gradient (forward-backward splitting) methods for learning form an area of research in optimization and statistical learning theory that studies algorithms for a general class of convex regularization problems whose regularization penalty may not be differentiable. One such example is [math]\displaystyle{ \ell_1 }[/math] regularization (also known as lasso) of the form

[math]\displaystyle{ \min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n (y_i- \langle w,x_i\rangle)^2+ \lambda \|w\|_1, \quad \text{ where } x_i\in \mathbb{R}^d\text{ and } y_i\in\mathbb{R}. }[/math]

Proximal gradient methods offer a general framework for solving regularization problems from statistical learning theory with penalties that are tailored to a specific problem application.[1][2] Such customized penalties can help to induce certain structure in problem solutions, such as sparsity (in the case of lasso) or group structure (in the case of group lasso).

Relevant background

Proximal gradient methods are applicable in a wide variety of scenarios for solving convex optimization problems of the form

[math]\displaystyle{ \min_{x\in \mathcal{H}} F(x)+R(x), }[/math]

where [math]\displaystyle{ F }[/math] is convex and differentiable with Lipschitz continuous gradient, [math]\displaystyle{ R }[/math] is a convex, lower semicontinuous function which is possibly nondifferentiable, and [math]\displaystyle{ \mathcal{H} }[/math] is some set, typically a Hilbert space. In the convex, differentiable setting the usual optimality criterion, namely that [math]\displaystyle{ x }[/math] minimizes [math]\displaystyle{ F(x)+R(x) }[/math] if and only if [math]\displaystyle{ \nabla (F+R)(x) = 0 }[/math], is now replaced by

[math]\displaystyle{ 0\in \partial (F+R)(x), }[/math]

where [math]\displaystyle{ \partial \varphi }[/math] denotes the subdifferential of a real-valued, convex function [math]\displaystyle{ \varphi }[/math].

Given a convex function [math]\displaystyle{ \varphi:\mathcal{H} \to \mathbb{R} }[/math], an important operator to consider is its proximal operator [math]\displaystyle{ \operatorname{prox}_{\varphi}:\mathcal{H}\to\mathcal{H} }[/math] defined by

[math]\displaystyle{ \operatorname{prox}_{\varphi}(u) = \operatorname{arg}\min_{x\in\mathcal{H}} \varphi(x)+\frac{1}{2}\|u-x\|_2^2, }[/math]

which is well-defined because of the strict convexity of the squared [math]\displaystyle{ \ell_2 }[/math] norm. The proximal operator can be seen as a generalization of a projection.[1][3][4] The proximity operator is important because [math]\displaystyle{ x^* }[/math] is a minimizer to the problem [math]\displaystyle{ \min_{x\in\mathcal{H}} F(x)+R(x) }[/math] if and only if

[math]\displaystyle{ x^* = \operatorname{prox}_{\gamma R}\left(x^*-\gamma\nabla F(x^*)\right), }[/math] where [math]\displaystyle{ \gamma\gt 0 }[/math] is any positive real number.[1]
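
As a concrete illustration of the remark that the proximity operator generalizes projection: when [math]\displaystyle{ \varphi }[/math] is the indicator function of a closed convex set, the minimization defining [math]\displaystyle{ \operatorname{prox}_{\varphi} }[/math] reduces to the Euclidean projection onto that set. The following minimal sketch (in Python with NumPy; the function name is illustrative, not from any particular library) shows this for the nonnegative orthant, where the projection is simply componentwise clipping.

    import numpy as np

    def prox_indicator_nonneg(u):
        # prox of the indicator function of {x : x >= 0}:
        # the penalty is 0 on the set and +infinity off it, so the
        # minimization reduces to the Euclidean projection, i.e. clipping.
        return np.maximum(u, 0.0)

    print(prox_indicator_nonneg(np.array([-1.5, 0.3, 2.0])))   # [0.  0.3 2. ]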

Moreau decomposition

One important technique related to proximal gradient methods is the Moreau decomposition, which decomposes the identity operator as the sum of two proximity operators.[1] Namely, let [math]\displaystyle{ \varphi:\mathcal{X}\to\mathbb{R} }[/math] be a lower semicontinuous, convex function on a vector space [math]\displaystyle{ \mathcal{X} }[/math]. We define its Fenchel conjugate [math]\displaystyle{ \varphi^*:\mathcal{X}\to\mathbb{R} }[/math] to be the function

[math]\displaystyle{ \varphi^*(u) := \sup_{x\in\mathcal{X}} \langle x,u\rangle - \varphi(x). }[/math]

The general form of Moreau's decomposition states that for any [math]\displaystyle{ x\in\mathcal{X} }[/math] and any [math]\displaystyle{ \gamma\gt 0 }[/math],

[math]\displaystyle{ x = \operatorname{prox}_{\gamma \varphi}(x) + \gamma\operatorname{prox}_{\varphi^*/\gamma}(x/\gamma), }[/math]

which for [math]\displaystyle{ \gamma=1 }[/math] implies that [math]\displaystyle{ x = \operatorname{prox}_{\varphi}(x)+\operatorname{prox}_{\varphi^*}(x) }[/math].[1][3] The Moreau decomposition can be seen to be a generalization of the usual orthogonal decomposition of a vector space, analogous with the fact that proximity operators are generalizations of projections.[1]

In certain situations it may be easier to compute the proximity operator for the conjugate [math]\displaystyle{ \varphi^* }[/math] instead of the function [math]\displaystyle{ \varphi }[/math], and therefore the Moreau decomposition can be applied. This is the case for group lasso.
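
As a numerical sanity check of the decomposition (a sketch, not taken from the cited references), take [math]\displaystyle{ \varphi = \|\cdot\|_1 }[/math]: its Fenchel conjugate is the indicator function of the [math]\displaystyle{ \ell_\infty }[/math] unit ball, so [math]\displaystyle{ \operatorname{prox}_{\gamma \varphi} }[/math] is the soft thresholding operator derived below, while [math]\displaystyle{ \gamma\operatorname{prox}_{\varphi^*/\gamma}(x/\gamma) }[/math] is the projection of [math]\displaystyle{ x }[/math] onto [math]\displaystyle{ [-\gamma,\gamma]^d }[/math].

    import numpy as np

    gamma = 0.7
    x = np.random.randn(6)

    # prox of gamma * ||.||_1: entrywise soft thresholding by gamma
    primal_part = np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
    # gamma * prox_{phi*/gamma}(x/gamma): projection of x onto [-gamma, gamma]^d
    dual_part = np.clip(x, -gamma, gamma)

    print(np.allclose(primal_part + dual_part, x))   # True, as Moreau's decomposition predicts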

Lasso regularization

Consider the regularized empirical risk minimization problem with square loss and with the [math]\displaystyle{ \ell_1 }[/math] norm as the regularization penalty:

[math]\displaystyle{ \min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n (y_i- \langle w,x_i\rangle)^2+ \lambda \|w\|_1, }[/math]

where [math]\displaystyle{ x_i\in \mathbb{R}^d\text{ and } y_i\in\mathbb{R}. }[/math] The [math]\displaystyle{ \ell_1 }[/math] regularization problem is sometimes referred to as lasso (least absolute shrinkage and selection operator).[5] Such [math]\displaystyle{ \ell_1 }[/math] regularization problems are interesting because they induce sparse solutions, that is, solutions [math]\displaystyle{ w }[/math] to the minimization problem have relatively few nonzero components. Lasso can be seen to be a convex relaxation of the non-convex problem

[math]\displaystyle{ \min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n (y_i- \langle w,x_i\rangle)^2+ \lambda \|w\|_0, }[/math]

where [math]\displaystyle{ \|w\|_0 }[/math] denotes the [math]\displaystyle{ \ell_0 }[/math] "norm", which is the number of nonzero entries of the vector [math]\displaystyle{ w }[/math]. Sparse solutions are of particular interest in learning theory for interpretability of results: a sparse solution can identify a small number of important factors.[5]

Solving for the L1 proximity operator

For simplicity we restrict our attention to the problem where [math]\displaystyle{ \lambda=1 }[/math]. To solve the problem

[math]\displaystyle{ \min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n (y_i- \langle w,x_i\rangle)^2+ \|w\|_1, }[/math]

we consider our objective function in two parts: a convex, differentiable term [math]\displaystyle{ F(w) = \frac{1}{n}\sum_{i=1}^n (y_i- \langle w,x_i\rangle)^2 }[/math] and a convex function [math]\displaystyle{ R(w) = \|w\|_1 }[/math]. Note that [math]\displaystyle{ R }[/math] is not strictly convex.

Let us compute the proximity operator for [math]\displaystyle{ R(w) }[/math]. First we find an alternative characterization of the proximity operator [math]\displaystyle{ \operatorname{prox}_{R}(x) }[/math] as follows:

[math]\displaystyle{ \begin{align} u = \operatorname{prox}_R(x) \iff & 0\in \partial \left(R(u)+\frac{1}{2}\|u-x\|_2^2\right)\\ \iff & 0\in \partial R(u) + u-x\\ \iff & x-u\in \partial R(u). \end{align} }[/math]

For [math]\displaystyle{ R(w) = \|w\|_1 }[/math] it is easy to compute [math]\displaystyle{ \partial R(w) }[/math]: the [math]\displaystyle{ i }[/math]th entry of [math]\displaystyle{ \partial R(w) }[/math] is precisely

[math]\displaystyle{ \partial |w_i| = \begin{cases} 1,&w_i\gt 0\\ -1,&w_i\lt 0\\ \left[-1,1\right],&w_i = 0. \end{cases} }[/math]

Using the recharacterization of the proximity operator given above, for the choice of [math]\displaystyle{ R(w) = \|w\|_1 }[/math] and [math]\displaystyle{ \gamma\gt 0 }[/math] we have that [math]\displaystyle{ \operatorname{prox}_{\gamma R}(x) }[/math] is defined entrywise by

[math]\displaystyle{ \left(\operatorname{prox}_{\gamma R}(x)\right)_i = \begin{cases} x_i-\gamma,&x_i\gt \gamma\\ 0,&|x_i|\leq\gamma\\ x_i+\gamma,&x_i\lt -\gamma, \end{cases} }[/math]

which is known as the soft thresholding operator [math]\displaystyle{ S_{\gamma}(x)=\operatorname{prox}_{\gamma \|\cdot\|_1}(x) }[/math].[1][6]
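
In code, the soft thresholding operator is a one-line vectorized expression. The sketch below (Python with NumPy; the function name is illustrative) is equivalent to the entrywise case analysis above.

    import numpy as np

    def soft_threshold(x, gamma):
        # prox of gamma * ||.||_1: shrink every entry toward zero by gamma
        # and zero out entries whose magnitude is at most gamma.
        return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

    print(soft_threshold(np.array([2.0, 0.3, -1.2]), 0.5))   # [ 1.5  0.  -0.7]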

Fixed point iterative schemes

To finally solve the lasso problem we consider the fixed point equation shown earlier:

[math]\displaystyle{ x^* = \operatorname{prox}_{\gamma R}\left(x^*-\gamma\nabla F(x^*)\right). }[/math]

Since we have computed the form of the proximity operator explicitly, we can define a standard fixed point iteration procedure. Namely, fix some initial [math]\displaystyle{ w^0\in\mathbb{R}^d }[/math], and for [math]\displaystyle{ k=0,1,2,\ldots }[/math] define

[math]\displaystyle{ w^{k+1} = S_{\gamma}\left(w^k - \gamma \nabla F\left(w^k\right)\right). }[/math]

Note here the effective trade-off between the empirical error term [math]\displaystyle{ F(w) }[/math] and the regularization penalty [math]\displaystyle{ R(w) }[/math]. This fixed point method has decoupled the effect of the two different convex functions which comprise the objective function into a gradient descent step ([math]\displaystyle{ w^k - \gamma \nabla F\left(w^k\right) }[/math]) and a soft thresholding step (via [math]\displaystyle{ S_\gamma }[/math]).
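
Putting the two steps together yields the iterative soft thresholding algorithm for the lasso problem. The following is a minimal sketch rather than a tuned implementation: it assumes the data are collected in a design matrix X with rows [math]\displaystyle{ x_i }[/math], sets the constant step size from the Lipschitz constant of [math]\displaystyle{ \nabla F }[/math] (which is [math]\displaystyle{ 2\|X\|_2^2/n }[/math] for the averaged square loss), and uses the threshold [math]\displaystyle{ \gamma\lambda }[/math] so that a general regularization parameter [math]\displaystyle{ \lambda }[/math] can be passed in.

    import numpy as np

    def ista_lasso(X, y, lam=1.0, n_iter=500):
        # Minimize (1/n) * ||y - X w||^2 + lam * ||w||_1 by proximal gradient.
        n, d = X.shape
        L = 2.0 * np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of grad F
        gamma = 1.0 / L                              # constant step size
        w = np.zeros(d)
        for _ in range(n_iter):
            grad = (2.0 / n) * X.T @ (X @ w - y)     # gradient step on F
            z = w - gamma * grad
            w = np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)   # soft thresholding step
        return w

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 20))
    w_true = np.zeros(20)
    w_true[:3] = [1.0, -2.0, 3.0]
    y = X @ w_true + 0.01 * rng.standard_normal(50)
    print(np.round(ista_lasso(X, y, lam=0.1), 2))    # a sparse estimate close to w_true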

Convergence of this fixed point scheme is well-studied in the literature[1][6] and is guaranteed under appropriate choice of step size [math]\displaystyle{ \gamma }[/math] and loss function (such as the square loss taken here). Accelerated methods were introduced by Nesterov in 1983 which improve the rate of convergence under certain regularity assumptions on [math]\displaystyle{ F }[/math].[7] Such methods have since been studied extensively.[8] For more general learning problems where the proximity operator cannot be computed explicitly for some regularization term [math]\displaystyle{ R }[/math], such fixed point schemes can still be carried out using approximations to both the gradient and the proximity operator.[4][9]
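
As an illustration of how acceleration modifies the basic scheme, the sketch below adds a FISTA-style momentum extrapolation between proximal steps; it is a standard accelerated variant presented only as an example, reusing the setup of the ISTA sketch above.

    import numpy as np

    def fista_lasso(X, y, lam=1.0, n_iter=500):
        # Accelerated proximal gradient for the same lasso objective:
        # the proximal step is applied at an extrapolated point v.
        n, d = X.shape
        gamma = n / (2.0 * np.linalg.norm(X, 2) ** 2)    # 1 / Lipschitz constant
        w = np.zeros(d)
        v = w.copy()                                     # extrapolated point
        t = 1.0                                          # momentum parameter
        for _ in range(n_iter):
            grad = (2.0 / n) * X.T @ (X @ v - y)
            z = v - gamma * grad
            w_next = np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            v = w_next + ((t - 1.0) / t_next) * (w_next - w)   # momentum extrapolation
            w, t = w_next, t_next
        return w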

Practical considerations

There have been numerous developments within the past decade in convex optimization techniques which have influenced the application of proximal gradient methods in statistical learning theory. Here we survey a few important topics which can greatly improve practical algorithmic performance of these methods.[2][10]

Adaptive step size

In the fixed point iteration scheme

[math]\displaystyle{ w^{k+1} = \operatorname{prox}_{\gamma R}\left(w^k-\gamma \nabla F\left(w^k\right)\right), }[/math]

one can allow a variable step size [math]\displaystyle{ \gamma_k }[/math] instead of a constant [math]\displaystyle{ \gamma }[/math]. Numerous adaptive step size schemes have been proposed in the literature.[1][4][11][12] Applications of these schemes[2][13] suggest that they can offer substantial improvement in the number of iterations required for fixed point convergence.
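
One common way to choose [math]\displaystyle{ \gamma_k }[/math] adaptively is backtracking: shrink a trial step until the candidate point satisfies the quadratic upper bound on [math]\displaystyle{ F }[/math] that underlies the convergence analysis. The sketch below is illustrative and not one of the specific rules from the cited works; F, gradF, and prox are user-supplied callables, with prox taking the point and the step size.

    import numpy as np

    def backtracking_prox_step(w, F, gradF, prox, gamma0=1.0, beta=0.5):
        # Shrink the step size until the quadratic upper bound
        # F(p) <= F(w) + <grad, p - w> + ||p - w||^2 / (2 gamma) holds,
        # then return the accepted candidate and step size.
        gamma = gamma0
        g = gradF(w)
        while True:
            p = prox(w - gamma * g, gamma)
            diff = p - w
            if F(p) <= F(w) + g @ diff + (diff @ diff) / (2.0 * gamma):
                return p, gamma
            gamma *= beta

Each outer iteration then calls such a routine in place of the constant-step update [math]\displaystyle{ w^{k+1} = \operatorname{prox}_{\gamma R}\left(w^k-\gamma \nabla F\left(w^k\right)\right) }[/math].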

Elastic net (mixed norm regularization)

Elastic net regularization offers an alternative to pure [math]\displaystyle{ \ell_1 }[/math] regularization. The problem of lasso ([math]\displaystyle{ \ell_1 }[/math]) regularization involves the penalty term [math]\displaystyle{ R(w) = \|w\|_1 }[/math], which is not strictly convex. Hence, solutions to [math]\displaystyle{ \min_w F(w) + R(w), }[/math] where [math]\displaystyle{ F }[/math] is some empirical loss function, need not be unique. This is often avoided by the inclusion of an additional strictly convex term, such as an [math]\displaystyle{ \ell_2 }[/math] norm regularization penalty. For example, one can consider the problem

[math]\displaystyle{ \min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n (y_i- \langle w,x_i\rangle)^2+ \lambda \left((1-\mu)\|w\|_1+\mu \|w\|_2^2\right), }[/math]

where [math]\displaystyle{ x_i\in \mathbb{R}^d\text{ and } y_i\in\mathbb{R}. }[/math] For [math]\displaystyle{ 0\lt \mu\leq 1 }[/math] the penalty term [math]\displaystyle{ \lambda \left((1-\mu)\|w\|_1+\mu \|w\|_2^2\right) }[/math] is now strictly convex, and hence the minimization problem now admits a unique solution. It has been observed that for sufficiently small [math]\displaystyle{ \mu \gt 0 }[/math], the additional penalty term [math]\displaystyle{ \mu \|w\|_2^2 }[/math] acts as a preconditioner and can substantially improve convergence while not adversely affecting the sparsity of solutions.[2][14]
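
The proximity operator of this mixed penalty also has a closed form, so the same fixed point scheme applies unchanged: soft thresholding by [math]\displaystyle{ \gamma\lambda(1-\mu) }[/math] followed by a uniform rescaling by [math]\displaystyle{ 1/(1+2\gamma\lambda\mu) }[/math]. A minimal sketch (the function name is illustrative):

    import numpy as np

    def prox_elastic_net(x, gamma, lam, mu):
        # prox of gamma * lam * ((1 - mu) * ||w||_1 + mu * ||w||_2^2):
        # entrywise soft thresholding followed by a uniform shrinkage factor.
        t = gamma * lam * (1.0 - mu)
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0) / (1.0 + 2.0 * gamma * lam * mu)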

Exploiting group structure

Proximal gradient methods provide a general framework which is applicable to a wide variety of problems in statistical learning theory. Certain problems in learning can often involve data which has additional structure that is known a priori. In the past several years there have been new developments which incorporate information about group structure to provide methods which are tailored to different applications. Here we survey a few such methods.

Group lasso

Group lasso is a generalization of the lasso method when features are grouped into disjoint blocks.[15] Suppose the features are grouped into blocks [math]\displaystyle{ \{w_1,\ldots,w_G\} }[/math]. Here we take as a regularization penalty

[math]\displaystyle{ R(w) =\sum_{g=1}^G \|w_g\|_2, }[/math]

which is the sum of the [math]\displaystyle{ \ell_2 }[/math] norms of the feature vectors for the different groups. A similar proximity operator analysis as above can be used to compute the proximity operator for this penalty. Where the lasso penalty has a proximity operator which is soft thresholding on each individual component, the proximity operator for the group lasso is soft thresholding on each group. For the group [math]\displaystyle{ w_g }[/math], the proximity operator of [math]\displaystyle{ \lambda\gamma\left(\sum_{g=1}^G \|w_g\|_2\right) }[/math] is given by

[math]\displaystyle{ \widetilde{S}_{\lambda\gamma }(w_g) = \begin{cases} w_g-\lambda\gamma \frac{w_g}{\|w_g\|_2}, & \|w_g\|_2\gt \lambda\gamma \\ 0, & \|w_g\|_2\leq \lambda\gamma \end{cases} }[/math]

where [math]\displaystyle{ w_g }[/math] is the [math]\displaystyle{ g }[/math]th group.
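
A minimal sketch of this blockwise operator, applied to a vector whose coordinates are partitioned by a list of index groups (the partition argument groups is illustrative):

    import numpy as np

    def prox_group_lasso(w, groups, gamma, lam=1.0):
        # Block soft thresholding: shrink each group's l2 norm by lam * gamma,
        # zeroing out the whole group when its norm is at most lam * gamma.
        out = np.zeros_like(w)
        for idx in groups:
            wg = w[idx]
            norm = np.linalg.norm(wg)
            if norm > lam * gamma:
                out[idx] = (1.0 - lam * gamma / norm) * wg
        return out

    w = np.array([3.0, 4.0, 0.1, -0.2])
    groups = [[0, 1], [2, 3]]   # two disjoint blocks
    print(prox_group_lasso(w, groups, gamma=1.0))
    # first block is shrunk (norm 5 > 1), second block is zeroed (norm ~0.22 <= 1)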

In contrast to lasso, the derivation of the proximity operator for group lasso relies on the Moreau decomposition. Here the proximity operator of the conjugate of the group lasso penalty becomes a projection onto the ball of a dual norm.[2]

Other group structures

In contrast to the group lasso problem, where features are grouped into disjoint blocks, it may be the case that grouped features are overlapping or have a nested structure. Such generalizations of group lasso have been considered in a variety of contexts.[16][17][18][19] For overlapping groups one common approach is known as latent group lasso which introduces latent variables to account for overlap.[20][21] Nested group structures are studied in hierarchical structure prediction and with directed acyclic graphs.[18]

References

  1. Combettes, Patrick L.; Wajs, Valérie R. (2005). "Signal Recovery by Proximal Forward-Backward Splitting". Multiscale Model. Simul. 4 (4): 1168–1200. doi:10.1137/050626090.
  2. Mosci, S.; Rosasco, L.; Santoro, M.; Verri, A.; Villa, S. (2010). "Solving Structured Sparsity Regularization with Proximal Methods". Machine Learning and Knowledge Discovery in Databases. Lecture Notes in Computer Science. 6322. pp. 418–433. doi:10.1007/978-3-642-15883-4_27. ISBN 978-3-642-15882-7.
  3. Moreau, J.-J. (1962). "Fonctions convexes duales et points proximaux dans un espace hilbertien". Comptes Rendus de l'Académie des Sciences, Série A 255: 2897–2899.
  4. Bauschke, H. H.; Combettes, P. L. (2011). Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer.
  5. Tibshirani, R. (1996). "Regression shrinkage and selection via the lasso". J. R. Stat. Soc. Ser. B 58 (1): 267–288.
  6. Daubechies, I.; Defrise, M.; De Mol, C. (2004). "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint". Comm. Pure Appl. Math. 57 (11): 1413–1457. doi:10.1002/cpa.20042.
  7. Nesterov, Yurii (1983). "A method of solving a convex programming problem with convergence rate [math]\displaystyle{ O(1/k^2) }[/math]". Soviet Mathematics - Doklady 27 (2): 372–376.
  8. Nesterov, Yurii (2004). Introductory Lectures on Convex Optimization. Kluwer Academic Publishers.
  9. Villa, S.; Salzo, S.; Baldassarre, L.; Verri, A. (2013). "Accelerated and inexact forward-backward algorithms". SIAM J. Optim. 23 (3): 1607–1633. doi:10.1137/110844805.
  10. Bach, F.; Jenatton, R.; Mairal, J.; Obozinski, G. (2011). "Optimization with sparsity-inducing penalties". Foundations and Trends in Machine Learning 4 (1): 1–106. doi:10.1561/2200000015. Bibcode: 2011arXiv1108.0775B.
  11. Loris, I.; Bertero, M.; De Mol, C.; Zanella, R.; Zanni, L. (2009). "Accelerating gradient projection methods for [math]\displaystyle{ \ell_1 }[/math]-constrained signal recovery by steplength selection rules". Applied & Comp. Harmonic Analysis 27 (2): 247–254. doi:10.1016/j.acha.2009.02.003.
  12. Wright, S. J.; Nowak, R. D.; Figueiredo, M. A. T. (2009). "Sparse reconstruction by separable approximation". IEEE Trans. Signal Process. 57 (7): 2479–2493. doi:10.1109/TSP.2009.2016892. Bibcode: 2009ITSP...57.2479W.
  13. Loris, Ignace (2009). "On the performance of algorithms for the minimization of [math]\displaystyle{ \ell_1 }[/math]-penalized functionals". Inverse Problems 25 (3): 035008. doi:10.1088/0266-5611/25/3/035008. Bibcode: 2009InvPr..25c5008L.
  14. De Mol, C.; De Vito, E.; Rosasco, L. (2009). "Elastic-net regularization in learning theory". J. Complexity 25 (2): 201–230. doi:10.1016/j.jco.2009.01.002.
  15. Yuan, M.; Lin, Y. (2006). "Model selection and estimation in regression with grouped variables". J. R. Stat. Soc. B 68 (1): 49–67. doi:10.1111/j.1467-9868.2005.00532.x.
  16. Chen, X.; Lin, Q.; Kim, S.; Carbonell, J. G.; Xing, E. P. (2012). "Smoothing proximal gradient method for general structured sparse regression". Ann. Appl. Stat. 6 (2): 719–752. doi:10.1214/11-AOAS514.
  17. Mosci, S.; Villa, S.; Verri, A.; Rosasco, L. (2010). "A primal-dual algorithm for group sparse regularization with overlapping groups". NIPS 23: 2604–2612.
  18. Jenatton, R.; Audibert, J.-Y.; Bach, F. (2011). "Structured variable selection with sparsity-inducing norms". J. Mach. Learn. Res. 12: 2777–2824. Bibcode: 2009arXiv0904.3523J.
  19. Zhao, P.; Rocha, G.; Yu, B. (2009). "The composite absolute penalties family for grouped and hierarchical variable selection". Ann. Stat. 37 (6A): 3468–3497. doi:10.1214/07-AOS584. Bibcode: 2009arXiv0909.0411Z.
  20. Obozinski, Guillaume; Jacob, Laurent; Vert, Jean-Philippe (2011). "Group Lasso with Overlaps: The Latent Group Lasso approach". arXiv:1110.0413 [stat.ML].
  21. Villa, Silvia; Rosasco, Lorenzo; Mosci, Sofia; Verri, Alessandro (2012). "Proximal methods for the latent group lasso penalty". arXiv:1209.0368 [math.OC].




Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Proximal_gradient_methods_for_learning