A globally convergent primal-dual active-set framework for large-scale convex quadratic optimization

@article{Curtis2015AGC,
  title={A globally convergent primal-dual active-set framework for large-scale convex quadratic optimization},
  author={Frank E. Curtis and Zheng Han and Daniel P. Robinson},
  journal={Computational Optimization and Applications},
  year={2015},
  volume={60},
  pages={311-341}
}
We present a primal-dual active-set framework for solving large-scale convex quadratic optimization problems (QPs). In contrast to classical active-set methods, our framework allows for multiple simultaneous changes in the active-set estimate, which often leads to rapid identification of the optimal active set regardless of the initial estimate. The iterates of our framework are the active-set estimates themselves, where for each estimate a primal-dual solution is uniquely defined via a reduced…
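
To make the mechanism concrete, here is a minimal sketch of a primal-dual active-set iteration for the simplified bound-constrained case minimize 0.5 xᵀHx + cᵀx subject to x ≥ 0 with H symmetric positive definite. The function name pdas_bound_qp, the nonnegativity-only constraints, and the update rule z > x are illustrative assumptions, not the paper's general framework; the sketch only shows how an active-set estimate by itself defines a primal-dual pair through a reduced linear system, and how many components of the estimate may change in a single update.

# Minimal sketch (assumed simplified setting, not the paper's general framework):
# a primal-dual active-set iteration for  minimize 0.5*x'Hx + c'x  subject to x >= 0,
# with H symmetric positive definite.
import numpy as np

def pdas_bound_qp(H, c, max_iter=50):
    n = len(c)
    active = np.ones(n, dtype=bool)          # initial estimate: every bound active (x = 0)
    x = np.zeros(n)
    z = np.zeros(n)                          # multipliers for the bounds
    for _ in range(max_iter):
        inactive = ~active
        x[:] = 0.0
        z[:] = 0.0
        # The estimate defines a unique primal-dual pair: x_A = 0 and z_I = 0 by
        # construction, so solve the reduced system H_II x_I = -c_I ...
        if inactive.any():
            x[inactive] = np.linalg.solve(H[np.ix_(inactive, inactive)], -c[inactive])
        # ... and recover the multipliers on the active components, z_A = (H x + c)_A.
        z[active] = H[active] @ x + c[active]
        new_active = z > x                   # update the estimate; many indices may switch at once
        if np.array_equal(new_active, active):
            break                            # partition is stable, so the KKT conditions hold
        active = new_active
    return x, z

For example, with H = [[2, 0], [0, 2]] and c = [-1, 1] the iteration stops after a single change of the estimate, returning x ≈ (0.5, 0) with multipliers z ≈ (0, 1).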

Globally Convergent Primal-Dual Active-Set Methods with Inexact Subproblem Solves

Three primal-dual active-set (PDAS) methods are proposed for solving large-scale instances of an important class of convex quadratic optimization problems (QPs); the methods allow inexactness in the (reduced) linear system solves at all partitions except optimal ones.

Active-set methods for convex quadratic programming

Computational methods are proposed for solving a convex quadratic program (QP). Active-set methods are defined for a particular primal and dual formulation of a QP with general equality constraints.

A Feasible Active Set Method for Strictly Convex Quadratic Problems with Simple Bounds

A primal-dual active set method for quadratic problems with bound constraints is presented that extends the infeasible active set approach of Kunisch and Rendl and performs well in practice.

A dual gradient-projection method for large-scale strictly convex quadratic problems

The details of a solver for minimizing a strictly convex quadratic objective function subject to general linear constraints are presented and how the linear algebra may be arranged to take computational advantage of sparsity in the second-derivative matrix is shown.

A recursive semi-smooth Newton method for linear complementarity problems

A primal feasible active set method is presented for finding the unique solution of a Linear Complementarity Problem (LCP) with a P-matrix, which extends the globally convergent active set method.

An Infeasible Active Set Method with Combinatorial Line Search for Convex Quadratic Problems with Bound Constraints

This paper introduces a modified version of the infeasible active set method that maintains the combinatorial flavour of the original semismooth Newton method; global convergence is proved for the modified version, and it is shown to be competitive on a variety of difficult classes of test problems.

A Reduced-Space Algorithm for Minimizing ℓ1-Regularized Convex Functions

A convergence guarantee is proved for the new method for minimizing the sum of a differentiable convex function and an ℓ1-norm regularizer, and its efficiency is demonstrated on a large set of model prediction problems.

A limited-memory quasi-Newton algorithm for bound-constrained non-smooth optimization

An algorithm is proposed that uses the L-BFGS quasi-Newton approximation of the problem's curvature together with a variant of the weak Wolfe line search to overcome the inherent shortsightedness of the gradient for a non-smooth function.

References

A Feasible Active Set QP-Free Method for Nonlinear Programming

It is shown that the method converges globally to KKT points under the linear independence constraint qualification (LICQ), and the asymptotic rate of convergence is Q-superlinear under an additional strong second-order sufficient condition (SSOSC) without strict complementarity.

An Infeasible Active Set Method for Quadratic Problems with Simple Bounds

A primal-dual active set method for quadratic problems with bound constraints is presented; based on a guess of the active set, it computes a primal-dual pair that satisfies the first-order optimality condition and the complementarity condition.
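
For context, the conditions referred to here are the standard first-order (KKT) conditions; stated for an illustrative bound-constrained form (notation assumed here, not taken from the cited paper),

  \min_x \ \tfrac{1}{2} x^\top Q x + d^\top x \quad \text{subject to} \quad x \ge 0,

they require a multiplier \mu for the bounds with

  Q x + d - \mu = 0, \qquad x \ge 0, \qquad \mu \ge 0, \qquad x_i \mu_i = 0 \quad (i = 1, \dots, n).

A guess of the active set A fixes x_i = 0 for i \in A and \mu_i = 0 for i \notin A, leaving a square linear system for the remaining unknowns; the guess is then revised until the computed pair also satisfies the sign conditions.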

Application of the dual active set algorithm to quadratic network optimization

This study shows that combining the new algorithm with the nonlinear conjugate gradient method is particularly effective on difficult network problems from the literature.

SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization

An SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems is discussed, along with a reduced-Hessian semidefinite QP solver (SQOPT).

A Second Derivative SQP Method: Global Convergence

This paper presents a second derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or that need not be solved globally.

A family of second-order methods for convex ℓ1-regularized optimization

A new active set method is proposed that performs multiple changes in the active manifold estimate at every iteration, and employs a mechanism for correcting these estimates, when needed.

qpOASES: a parametric active-set algorithm for quadratic programming

The open-source C++ software package qpOASES is described; it implements a parametric active-set method in a reliable and efficient way and can be used to compute critical points of nonconvex QP problems.

A second-derivative SQP method with a ‘trust-region-free’ predictor step

The modified algorithm remains globally convergent and, provided that a nonmonotone strategy is incorporated, preserves local superlinear convergence; both properties are proved under common assumptions.

Nonconvergence of the plain Newton-min algorithm for linear complementarity problems with a P-matrix

It is shown that convergence of the plain Newton-min algorithm for solving the linear complementarity problem (LCP for short) no longer holds when M is a P-matrix of order ≥ 3, since the algorithm may then cycle.
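
For reference, the setting is as follows (standard definitions, with notation assumed here): the linear complementarity problem LCP(q, M) asks for x with

  x \ge 0, \qquad w := M x + q \ge 0, \qquad x^\top w = 0,

and the plain Newton-min method applies a semismooth Newton step to \min(x, M x + q) = 0: at each iterate it takes A = \{ i : x_i \le w_i \}, imposes x_i = 0 on A and (M x + q)_i = 0 on the complement, and solves the resulting linear system for the next iterate. The cited result shows that this sequence of partitions can revisit an earlier one, so the iteration may cycle, when M is a P-matrix of order at least 3.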