Sequential linear-quadratic programming (SLQP) is an iterative method for nonlinear optimization problems in which the objective function and the constraints are twice continuously differentiable. Like sequential quadratic programming (SQP), SLQP proceeds by solving a sequence of optimization subproblems. The difference between the two approaches is that:
- in SQP, each subproblem is a quadratic program, with a quadratic model of the objective subject to a linearization of the constraints
- in SLQP, two subproblems are solved at each step: a linear program (LP) used to determine an active set, followed by an equality-constrained quadratic program (EQP) used to compute the total step
This decomposition makes SLQP well suited to large-scale optimization problems, for which efficient LP and EQP solvers are available, since these subproblems are easier to scale than full-fledged quadratic programs.
It may be considered related to, but distinct from, quasi-Newton methods.
Algorithm basics
Consider a nonlinear programming problem of the form:
- [math]\displaystyle{ \begin{array}{rl}
\min\limits_{x} & f(x) \\
\mbox{s.t.} & b(x) \ge 0 \\
& c(x) = 0.
\end{array} }[/math]
The Lagrangian for this problem is[1]
- [math]\displaystyle{ \mathcal{L}(x,\lambda,\sigma) = f(x) - \lambda^T b(x) - \sigma^T c(x), }[/math]
where [math]\displaystyle{ \lambda \ge 0 }[/math] and [math]\displaystyle{ \sigma }[/math] are Lagrange multipliers.
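As a numerical illustration, the Lagrangian and its gradient can be evaluated for a small problem. The toy problem below is a hypothetical example, not taken from the text: it has minimizer [math]\displaystyle{ x^* = (0.5, 0.5) }[/math] with multiplier [math]\displaystyle{ \lambda = 1 }[/math], at which the KKT stationarity condition [math]\displaystyle{ \nabla_x \mathcal{L} = 0 }[/math] holds:

```python
import numpy as np

# Hypothetical toy problem (for illustration only):
#   min  f(x) = x1^2 + x2^2   s.t.  b(x) = x1 + x2 - 1 >= 0
def f(x): return x[0]**2 + x[1]**2
def grad_f(x): return np.array([2*x[0], 2*x[1]])
def b(x): return np.array([x[0] + x[1] - 1.0])
def jac_b(x): return np.array([[1.0, 1.0]])  # one row per inequality

# Lagrangian L(x, lam) = f(x) - lam^T b(x)  (no equality constraints here)
def lagrangian(x, lam):
    return f(x) - lam @ b(x)

def grad_x_lagrangian(x, lam):
    return grad_f(x) - jac_b(x).T @ lam

# At the minimizer x* = (0.5, 0.5) with multiplier lam = 1,
# the gradient of the Lagrangian vanishes (KKT stationarity).
x_star, lam_star = np.array([0.5, 0.5]), np.array([1.0])
print(grad_x_lagrangian(x_star, lam_star))  # -> [0. 0.]
```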
LP phase
In the LP phase of SLQP, the following linear program is solved:
- [math]\displaystyle{ \begin{array}{rl}
\min\limits_{d} & f(x_k) + \nabla f(x_k)^Td\\
\mathrm{s.t.} & b(x_k) + \nabla b(x_k)^Td \ge 0 \\
& c(x_k) + \nabla c(x_k)^T d = 0. \end{array} }[/math]
Let [math]\displaystyle{ {\cal A}_k }[/math] denote the active set at the optimum [math]\displaystyle{ d^*_{\text{LP}} }[/math] of this problem, that is to say, the set of constraints that are equal to zero at [math]\displaystyle{ d^*_{\text{LP}} }[/math]. Denote by [math]\displaystyle{ b_{{\cal A}_k} }[/math] and [math]\displaystyle{ c_{{\cal A}_k} }[/math] the sub-vectors of [math]\displaystyle{ b }[/math] and [math]\displaystyle{ c }[/math] corresponding to elements of [math]\displaystyle{ {\cal A}_k }[/math].
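A minimal sketch of the LP phase, using `scipy.optimize.linprog` on the same hypothetical toy problem (minimize [math]\displaystyle{ x_1^2 + x_2^2 }[/math] subject to [math]\displaystyle{ x_1 + x_2 \ge 1 }[/math], with iterate [math]\displaystyle{ x_k = (2, 0.5) }[/math]). The box bound on [math]\displaystyle{ d }[/math] is an added assumption: the plain LP above can be unbounded, so implementations restrict the step:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy problem:
#   min f(x) = x1^2 + x2^2   s.t.  b(x) = x1 + x2 - 1 >= 0
# Current iterate and its derivatives:
x_k = np.array([2.0, 0.5])
grad_f = np.array([4.0, 1.0])     # gradient of f at x_k
b_val = np.array([1.5])           # b(x_k)
jac_b = np.array([[1.0, 1.0]])    # Jacobian of b at x_k

# LP phase: min grad_f^T d  s.t.  b(x_k) + jac_b @ d >= 0.
# linprog expects A_ub @ d <= b_ub, so negate the inequality.
# The bound ||d||_inf <= 1 keeps the LP bounded (a common
# trust-region-style safeguard, assumed here).
res = linprog(c=grad_f, A_ub=-jac_b, b_ub=b_val,
              bounds=[(-1, 1), (-1, 1)])
d_lp = res.x                      # unique optimum: [-1.0, -0.5]

# Active set: constraints with zero residual at d_lp.
residual = b_val + jac_b @ d_lp
active = np.where(np.isclose(residual, 0.0))[0]
print(d_lp, active)               # constraint 0 is active
```

At the LP optimum the linearized constraint holds with equality, so the active set for the EQP phase is [math]\displaystyle{ {\cal A}_k = \{b\} }[/math].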
EQP phase
In the EQP phase of SLQP, the search direction [math]\displaystyle{ d_k }[/math] of the step is obtained by solving the following equality-constrained quadratic program:
- [math]\displaystyle{ \begin{array}{rl}
\min\limits_{d} & f(x_k) + \nabla f(x_k)^Td + \tfrac{1}{2} d^T \nabla_{xx}^2 \mathcal{L}(x_k,\lambda_k,\sigma_k) d\\
\mathrm{s.t.} & b_{{\cal A}_k}(x_k) + \nabla b_{{\cal A}_k}(x_k)^Td = 0 \\
& c_{{\cal A}_k}(x_k) + \nabla c_{{\cal A}_k}(x_k)^T d = 0. \end{array} }[/math]
Note that the constant term [math]\displaystyle{ f(x_k) }[/math] in the objective functions above may be omitted, since it does not affect the minimizer.
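Continuing the same hypothetical toy problem, with active set [math]\displaystyle{ {\cal A}_k = \{b\} }[/math], the EQP can be solved directly from its first-order (KKT) conditions, a standard approach for equality-constrained quadratic programs; all numerical values below are assumptions from that toy example:

```python
import numpy as np

# Toy problem data at x_k = (2, 0.5), active set = {b}:
x_k = np.array([2.0, 0.5])
grad_f = np.array([4.0, 1.0])    # gradient of f at x_k
H = 2.0 * np.eye(2)              # Hessian of the Lagrangian (b is linear)
A = np.array([[1.0, 1.0]])       # Jacobian of the active constraints
rhs = np.array([1.5])            # b_A(x_k)

# EQP: min grad_f^T d + 0.5 d^T H d  s.t.  rhs + A @ d = 0.
# Stationarity and feasibility give the linear KKT system
#   [ H  -A^T ] [ d   ]   [ -grad_f ]
#   [ A   0   ] [ lam ] = [ -rhs    ]
kkt = np.block([[H, -A.T],
                [A, np.zeros((1, 1))]])
sol = np.linalg.solve(kkt, np.concatenate([-grad_f, -rhs]))
d_k, lam = sol[:2], sol[2]
print(d_k, lam)   # step d_k = [-1.5, 0.], multiplier lam = 1.0
```

In this example [math]\displaystyle{ x_k + d_k = (0.5, 0.5) }[/math] happens to solve the toy problem exactly, because its constraint is linear and its objective quadratic; in general the step is only a local model's minimizer and is combined with a globalization strategy.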
See also
- Newton's method
- Secant method
- Sequential linear programming
- Sequential quadratic programming
Notes
- ↑ Jorge Nocedal and Stephen J. Wright (2006). Numerical Optimization. Springer. ISBN 0-387-30303-0. http://www.ece.northwestern.edu/~nocedal/book/num-opt.html
Original source: https://en.wikipedia.org/wiki/Sequential linear-quadratic programming