An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians

@article{Na2021AnAS,
  title={An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians},
  author={Sen Na and Mihai Anitescu and Mladen Kolar},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.05320}
}
We consider solving nonlinear optimization problems with stochastic objective and deterministic equality constraints. We assume for the objective that its evaluation, gradient, and Hessian are inaccessible, while one can compute their stochastic estimates by, for example, subsampling. We propose a stochastic algorithm based on sequential quadratic programming (SQP) that uses a differentiable exact augmented Lagrangian as the merit function. To motivate our algorithm design, we first revisit and… 
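
For reference, the differentiable exact augmented Lagrangians named in the abstract are of Di Pillo-Grippo type. A representative form for minimizing f(x) subject to c(x) = 0, with J(x) the Jacobian of c, is sketched below; the precise scaling of the penalty terms varies across papers, so read this as the general shape rather than this paper's exact definition:

```latex
% Di Pillo-Grippo-type differentiable exact augmented Lagrangian
% for  min_x f(x)  s.t.  c(x) = 0,  with J(x) the Jacobian of c.
\mathcal{L}_{\mu,\nu}(x,\lambda)
  = f(x) + \lambda^{\top} c(x)
  + \frac{\mu}{2}\,\lVert c(x)\rVert_2^{2}
  + \frac{\nu}{2}\,\lVert J(x)\,\nabla_x \mathcal{L}(x,\lambda)\rVert_2^{2}
```

The third term penalizes primal infeasibility and the fourth penalizes dual infeasibility; the latter is what makes the function exact and differentiable in (x, λ) jointly, so a single unconstrained merit function can monitor the progress of the SQP iterates.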

Citations

Inequality Constrained Stochastic Nonlinear Optimization via Active-Set Sequential Quadratic Programming

This work proposes an active-set stochastic sequential quadratic programming algorithm that uses a differentiable exact augmented Lagrangian as the merit function, adaptively selects the penalty parameters of the augmented Lagrangian, and performs a stochastic line search to decide the stepsize.

Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence

There exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage and still exhibits a superlinear convergence rate that nearly matches (up to a logarithmic factor) the rate of uniform Hessian averaging.
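
A minimal sketch of the mechanism, assuming noisy Hessian estimates arrive one per iteration; the weight schedule `w = k**(-power)` and the function names are illustrative, not the paper's specific scheme:

```python
import numpy as np

def weighted_hessian_average(H_bar, H_hat, k, power=0.5):
    """One update of a running weighted average of Hessian estimates.

    H_bar : current averaged Hessian, H_hat : fresh noisy estimate,
    k     : iteration counter (k >= 1). The weight w = k**(-power)
    keeps more of the history than uniform averaging early on;
    averaging drives the Hessian noise to zero over time.
    """
    w = k ** (-power)
    return (1.0 - w) * H_bar + w * H_hat

def averaged_newton_step(grad, H_bar, reg=1e-8):
    """Newton step using the averaged Hessian, regularized for safety."""
    n = H_bar.shape[0]
    return -np.linalg.solve(H_bar + reg * np.eye(n), grad)
```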

Inexact Sequential Quadratic Optimization for Minimizing a Stochastic Objective Function Subject to Deterministic Nonlinear Equality Constraints

An algorithm is proposed that allows inexact subproblem solutions to be employed, which is particularly useful in large-scale settings where the matrices defining the subproblems are too large to form and/or factorize.
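
To illustrate the mechanism (not the paper's specific inexactness conditions), one can solve the SQP subproblem's symmetric indefinite KKT system with an iteration-capped Krylov method instead of a factorization; the cap `max_kkt_iters` below is an illustrative knob:

```python
import numpy as np
from scipy.sparse.linalg import minres

def inexact_sqp_step(H, J, grad, c, max_kkt_iters=50):
    """Inexactly solve the SQP subproblem's KKT system
        [ H  J^T ] [ dx ]   [ -grad ]
        [ J   0  ] [ dy ] = [ -c    ]
    with MINRES capped at max_kkt_iters iterations (H symmetric), so
    the KKT matrix is never factorized; in a fully matrix-free code,
    K would be a LinearOperator built from Hessian-vector products.
    """
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([grad, c])
    sol, _ = minres(K, rhs, maxiter=max_kkt_iters)
    return sol[:n], sol[n:]  # primal step dx, multiplier step dy
```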

A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

The proposed sequential quadratic optimization algorithm both allows the use of stochastic objective gradient estimates and possesses convergence guarantees even in the setting in which the constraint Jacobians may be rank deficient.
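
One standard device behind such guarantees (shown here only as an illustration, not as the paper's full construction) is to define Lagrange multiplier estimates in a least-squares sense, which remain well defined even when the Jacobian loses rank:

```python
import numpy as np

def least_squares_multipliers(grad, J):
    """Least-squares multiplier estimate
        y = argmin_y || grad + J^T y ||_2 .
    numpy's lstsq returns the minimum-norm solution, so the estimate
    stays well defined even when the constraint Jacobian J (m x n)
    is rank deficient.
    """
    y, *_ = np.linalg.lstsq(J.T, -grad, rcond=None)
    return y
```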

A Fast Temporal Decomposition Procedure for Long-horizon Nonlinear Dynamic Programming

We propose a fast temporal decomposition procedure for solving long-horizon nonlinear dynamic programs. The core of the procedure is sequential quadratic programming (SQP), with a differentiable exact augmented Lagrangian as the merit function.

An Adaptive Sampling Sequential Quadratic Programming Method for Equality Constrained Stochastic Optimization

A practical adaptive inexact stochastic sequential quadratic programming (PAIS-SQP) method is described, and criteria are proposed for controlling the sample size and the accuracy of the SQP subproblem solutions based on variance estimates obtained as the optimization progresses.
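
A minimal sketch of variance-based sample-size control, using a simplified norm-type test; the constant theta and the growth factor are illustrative, not the PAIS-SQP criteria themselves:

```python
import numpy as np

def next_batch_size(grad_samples, batch_size, theta=0.9, growth=1.5):
    """Grow the sample size when the batch gradient looks too noisy.

    grad_samples : (b, n) array of per-sample gradients.
    Keep the batch size if the estimated variance of the batch mean
    is at most theta^2 * ||mean gradient||^2; otherwise enlarge it.
    """
    g_bar = grad_samples.mean(axis=0)
    b = grad_samples.shape[0]
    var_of_mean = np.sum(np.var(grad_samples, axis=0, ddof=1)) / b
    if var_of_mean <= theta ** 2 * float(g_bar @ g_bar):
        return batch_size                      # accuracy suffices
    return int(np.ceil(growth * batch_size))   # sample more next time
```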

Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems

The global almost sure convergence guarantee for TR-StoSQP is established, and its empirical performance on both a subset of problems in the CUTEst test set and constrained logistic regression problems using data from the LIBSVM collection is illustrated.

A Sequential Quadratic Programming Method with High Probability Complexity Bounds for Nonlinear Equality Constrained Stochastic Optimization

A step-search sequential quadratic programming method is proposed for solving nonlinear equality-constrained stochastic optimization problems, and a high-probability bound on the iteration complexity required to reach approximate first-order stationarity is derived.

Accelerating Stochastic Sequential Quadratic Programming for Equality Constrained Optimization using Predictive Variance Reduction

Under reasonable assumptions, it is proved that a measure of first-order stationarity evaluated at the iterates generated by the proposed algorithm converges to zero in expectation from arbitrary starting points, for both constant and adaptive step size strategies.
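
The predictive variance reduction referred to is SVRG-style; a minimal sketch of the gradient estimator (the snapshot schedule and the names are simplified assumptions):

```python
def svrg_gradient(grad_i, x, x_ref, full_grad_ref, i):
    """SVRG-style variance-reduced gradient estimate
        g = grad_i(x, i) - grad_i(x_ref, i) + full_grad_ref,
    where grad_i(x, i) is the gradient of the i-th component function
    and full_grad_ref is the full gradient stored at the snapshot
    x_ref. The correction shrinks the variance as x nears x_ref.
    """
    return grad_i(x, i) - grad_i(x_ref, i) + full_grad_ref
```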

Worst-Case Complexity of an SQP Method for Nonlinear Equality Constrained Stochastic Optimization

The overall complexity bound, which accounts for the adaptivity of the merit parameter sequence, shows that a result comparable to the unconstrained setting (with additional logarithmic factors) holds with high probability.

References


Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization

Under reasonable assumptions, convergence (resp., convergence in expectation) from remote starting points is proved for the proposed deterministic (resp., stochastic) algorithm.

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

The proposed framework extends classic quasi-Newton methods from deterministic to stochastic settings, and its almost sure convergence to stationary points is proved.
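
For reference, the computational core shared by most quasi-Newton methods, deterministic or stochastic, is the L-BFGS two-loop recursion sketched below; stochastic variants differ mainly in how the curvature pairs (s, y) are collected robustly under noise, which this sketch leaves to the caller:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H * grad, where H
    is the implicit inverse-Hessian approximation built from curvature
    pairs (s_j, y_j) = (x_{j+1} - x_j, g_{j+1} - g_j), newest last.
    """
    q = grad.copy()
    history = []
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / float(y @ s)
        a = rho * float(s @ q)
        history.append((a, rho, s, y))
        q -= a * y
    if s_list:  # initial scaling H_0 = (s'y / y'y) * I
        s, y = s_list[-1], y_list[-1]
        q *= float(s @ y) / float(y @ y)
    for a, rho, s, y in reversed(history):
        b = rho * float(y @ q)
        q += (a - b) * s
    return -q
```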

SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization

An SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and in the QP subproblems is discussed, along with a reduced-Hessian semidefinite QP solver (SQOPT).

Adaptive Sampling Strategies for Stochastic Optimization

It is shown that the inner product test improves upon the well-known norm test and can be used as the basis for an algorithm that is globally convergent on nonconvex functions and enjoys a global linear rate of convergence on strongly convex functions.
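
A minimal sketch of the inner product test itself, with the population condition replaced by its sample estimate (theta is the test constant; values here are illustrative):

```python
import numpy as np

def passes_inner_product_test(grad_samples, theta=0.9):
    """Inner product test: the sample variance of g_i . g_bar must be
    small relative to ||g_bar||^4,
        Var_i(g_i . g_bar) / b  <=  theta^2 * ||g_bar||^4,
    with b the batch size. Failure means the batch gradient is not a
    descent direction often enough, so the sample size should grow.
    """
    b = grad_samples.shape[0]
    g_bar = grad_samples.mean(axis=0)
    inner = grad_samples @ g_bar          # per-sample inner products
    g_norm_sq = float(g_bar @ g_bar)
    return np.var(inner, ddof=1) / b <= theta ** 2 * g_norm_sq ** 2
```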

Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates

This work proposes to use line-search techniques to automatically set the step size when training models that can interpolate the data, and proves that SGD with a stochastic variant of the classic Armijo line search attains the deterministic convergence rates for both convex and strongly convex functions.
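
A minimal sketch of such a stochastic Armijo backtracking step; the essential point is that the same minibatch supplies both the function values and the gradient, and the constants c and beta are illustrative:

```python
def stochastic_armijo_step(x, f_batch, g, eta0=1.0, c=0.1,
                           beta=0.5, max_backtracks=30):
    """Backtracking Armijo line search on a single minibatch.

    f_batch(x) : minibatch loss; g : its gradient at x (numpy array).
    Shrink eta until f_batch(x - eta*g) <= f_batch(x) - c*eta*||g||^2,
    evaluating the same minibatch on both sides, which is what makes
    the deterministic rates recoverable under interpolation.
    """
    fx = f_batch(x)
    g_norm_sq = float(g @ g)
    eta = eta0
    for _ in range(max_backtracks):
        if f_batch(x - eta * g) <= fx - c * eta * g_norm_sq:
            break
        eta *= beta
    return x - eta * g, eta
```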

Stochastic Cubic Regularization for Fast Nonconvex Optimization

The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions in only $\tilde{\mathcal{O}}(\epsilon^{-3.5})$ stochastic gradient and stochastic Hessian-vector product evaluations.
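
At each iteration the method minimizes a cubic-regularized model of the objective; a minimal sketch of that subproblem, solved here by plain gradient descent using only Hessian-vector products (the step size and iteration budget are illustrative):

```python
import numpy as np

def cubic_subproblem(grad, hvp, sigma, n, lr=0.01, iters=200):
    """Approximately minimize the cubic model
        m(s) = grad.s + 0.5 * s.Hs + (sigma / 3) * ||s||^3
    by gradient descent; the Hessian is touched only through
    Hessian-vector products hvp(s) ~ H @ s.
    """
    s = np.zeros(n)
    for _ in range(iters):
        m_grad = grad + hvp(s) + sigma * np.linalg.norm(s) * s
        s -= lr * m_grad
    return s
```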

Scalable Nonlinear Programming via Exact Differentiable Penalty Functions and Trust-Region Newton Methods

An approach for nonlinear programming based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques is presented; it provides the features required for scalability and is well suited to parametric optimization problems that must be solved in a latency-limited environment.

Exact penalty function algorithms for finite dimensional and control optimization problems

In this thesis, first- and second-order algorithms are proposed for solving equality-constrained finite-dimensional minimization problems and optimal control problems with terminal equality constraints, using the exact penalty function approach.

On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes

This paper theoretically analyzes, in the convex and nonconvex settings, a generalized version of the AdaGrad stepsizes, and shows sufficient conditions for these stepsizes to achieve almost sure asymptotic convergence of the gradients to zero, giving the first guarantee for generalized AdaGrad stepsizes in the nonconvex setting.
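
A minimal sketch of an AdaGrad-Norm-style stepsize of the kind analyzed there (a single scalar stepsize driven by accumulated gradient norms; the constants are illustrative):

```python
import numpy as np

def adagrad_norm_sgd(x, grad_fn, eta=1.0, b0=1e-8, steps=1000):
    """SGD with the AdaGrad-Norm stepsize
        x <- x - eta / sqrt(b0^2 + sum_j ||g_j||^2) * g,
    where grad_fn(x) returns a stochastic gradient at x. No knowledge
    of smoothness or noise constants is required.
    """
    acc = b0 ** 2
    for _ in range(steps):
        g = grad_fn(x)
        acc += float(g @ g)
        x = x - (eta / np.sqrt(acc)) * g
    return x
```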

Recursive quadratic programming algorithm that uses an exact augmented Lagrangian function

An algorithm for nonlinear programming problems with equality constraints is presented that is globally and superlinearly convergent; it incorporates an automatic adjustment rule for selecting the penalty parameter and avoids the need to evaluate second-order derivatives of the problem functions.
...