An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians
@article{Na2021AnAS,
  title   = {An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians},
  author  = {Sen Na and Mihai Anitescu and Mladen Kolar},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2102.05320}
}
We consider solving nonlinear optimization problems with stochastic objective and deterministic equality constraints. We assume for the objective that its evaluation, gradient, and Hessian are inaccessible, while one can compute their stochastic estimates by, for example, subsampling. We propose a stochastic algorithm based on sequential quadratic programming (SQP) that uses a differentiable exact augmented Lagrangian as the merit function. To motivate our algorithm design, we first revisit and…
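To make the setup concrete, here is a minimal sketch of one generic stochastic SQP iteration for min_x E[f(x; ξ)] subject to c(x) = 0: the gradient is replaced by a subsampled estimate and the step comes from the Newton-KKT system. This is an illustration under assumed interfaces, not the paper's adaptive algorithm; all function names and the fixed stepsize are hypothetical.

```python
# Sketch of one stochastic SQP iteration for
#   min_x E[f(x; xi)]  subject to  c(x) = 0.
# Illustrative only; not the paper's adaptive scheme.
import numpy as np

def stosqp_step(x, lam, grad_est, c, jac_c, B, alpha=0.5):
    """grad_est(x): subsampled estimate of grad f(x)
    c(x): equality-constraint values, shape (m,)
    jac_c(x): constraint Jacobian, shape (m, n)
    B: positive-definite Lagrangian Hessian approximation, shape (n, n)
    alpha: stepsize (fixed placeholder here)
    """
    g, J, cx = grad_est(x), jac_c(x), c(x)
    n, m = len(x), len(cx)
    # Newton-KKT system: [B J^T; J 0] [dx; dlam] = -[g + J^T lam; c(x)]
    K = np.block([[B, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([g + J.T @ lam, cx])
    sol = np.linalg.solve(K, rhs)
    dx, dlam = sol[:n], sol[n:]
    return x + alpha * dx, lam + alpha * dlam
```

In the paper, the stepsize is instead selected adaptively via a stochastic line search on the differentiable exact augmented Lagrangian merit function; the fixed alpha above is only a placeholder.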
16 Citations
Asymptotic Convergence Rate and Statistical Inference for Stochastic Sequential Quadratic Programming
- Mathematics
- 2022
We apply a stochastic sequential quadratic programming (StoSQP) algorithm to solve constrained nonlinear optimization problems, where the objective is stochastic and the constraints are equality…
Inequality Constrained Stochastic Nonlinear Optimization via Active-Set Sequential Quadratic Programming
- Computer Science
- Mathematical Programming
- 2023
An active-set stochastic sequential quadratic programming (StoSQP) algorithm is proposed that uses a differentiable exact augmented Lagrangian as the merit function and handles nonlinear inequality constraints without requiring the strict complementarity condition.
Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence
- Computer Science, Mathematics
- Mathematical Programming
- 2022
It is shown that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage and still exhibits a superlinear convergence rate nearly matching (up to a logarithmic factor) that of uniform Hessian averaging.
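Roughly, the idea is to step with a running (weighted) average of the sampled Hessians so that Hessian noise averages out while the iterates converge. A minimal sketch under assumed interfaces (grad_est and hess_est are hypothetical estimators; the default uniform weights are just one instance of the averaging scheme):

```python
# Sketch of a stochastic Newton iteration with weighted Hessian averaging:
# step with the running average H_bar of sampled Hessians, which damps
# sampling noise in the curvature. Weights are illustrative.
import numpy as np

def averaged_newton(x0, grad_est, hess_est, n_iters=50, weight=lambda k: 1.0):
    x = x0.copy()
    H_bar = np.zeros((x0.size, x0.size))
    w_sum = 0.0
    for k in range(n_iters):
        w = weight(k)                                  # uniform weights by default
        w_sum += w
        H_bar += (w / w_sum) * (hess_est(x) - H_bar)   # online weighted mean
        x = x - np.linalg.solve(H_bar, grad_est(x))    # Newton step, averaged Hessian
    return x
```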
A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems
- Mathematics
- 2023
In this paper, the robust sequential quadratic programming method of [1] for constrained optimization is generalized to problems with a stochastic objective function and deterministic equality and inequality…
Inexact Sequential Quadratic Optimization for Minimizing a Stochastic Objective Function Subject to Deterministic Nonlinear Equality Constraints
- Computer Science, Mathematics
- 2021
An algorithm is proposed that allows inexact subproblem solutions to be employed, which is particularly useful in large-scale settings where the matrices defining the subproblems are too large to form and/or factorize.
A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians
- Computer Science
- 2021
The proposed sequential quadratic optimization algorithm allows the use of stochastic objective gradient estimates and possesses convergence guarantees even when the constraint Jacobians may be rank deficient.
A Fast Temporal Decomposition Procedure for Long-horizon Nonlinear Dynamic Programming
- Computer Science
- 2021
We propose a fast temporal decomposition procedure for solving long-horizon nonlinear dynamic programs. The core of the procedure is sequential quadratic programming (SQP), with a differentiable…
An Adaptive Sampling Sequential Quadratic Programming Method for Equality Constrained Stochastic Optimization
- Computer Science
- 2022
A practical adaptive inexact stochastic sequential quadratic programming (PAIS-SQP) method is described, and criteria are proposed for controlling the sample size and the accuracy of the SQP subproblem solutions based on variance estimates obtained as the optimization progresses.
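As a sketch of this kind of variance-based sample-size control (the constants and interfaces are illustrative, not the PAIS-SQP rules): accept the current batch only if the estimated variance of the gradient estimator is small relative to its norm, and otherwise grow the batch.

```python
# Sketch of variance-based sample-size control for a sampled gradient.
# Constants and the growth heuristic are illustrative.
import numpy as np

def sample_size_ok(per_sample_grads, theta=0.9):
    """per_sample_grads: per-sample gradients, shape (batch_size, dim)."""
    g = per_sample_grads.mean(axis=0)                   # batch gradient estimate
    # estimated variance of the *mean* gradient: sample variance / batch size
    var = per_sample_grads.var(axis=0, ddof=1).sum() / len(per_sample_grads)
    return var <= theta**2 * (g @ g)

def suggested_batch_size(per_sample_grads, theta=0.9):
    g = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0, ddof=1).sum() / len(per_sample_grads)
    ratio = var / (theta**2 * (g @ g))                  # > 1 means batch too small
    return max(len(per_sample_grads), int(np.ceil(ratio * len(per_sample_grads))))
```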
Sequential Quadratic Optimization for Stochastic Optimization with Deterministic Nonlinear Inequality and Equality Constraints
- Computer Science
- 2023
A sequential quadratic optimization algorithm for minimizing an objective function defined by an expectation subject to nonlinear inequality and equality constraints is proposed, analyzed, and tested, and is proved to possess convergence guarantees in expectation.
Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems
- Computer Science, Mathematics
- 2022
A global almost sure convergence guarantee for TR-StoSQP is established, and its empirical performance is illustrated on a subset of problems in the CUTEst test set and on constrained logistic regression problems using data from the LIBSVM collection.
References
SHOWING 1-10 OF 141 REFERENCES
Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization
- Computer Science
- SIAM J. Optim.
- 2021
Under reasonable assumptions, convergence (resp., convergence in expectation) from remote starting points is proved for the proposed deterministic (resp., stochastic) algorithm.
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Computer Science, Mathematics
- SIAM J. Optim.
- 2017
The proposed framework extends classic quasi-Newton methods from deterministic to stochastic settings, and its almost sure convergence to stationary points is proved.
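For flavor, a sketch of one standard safeguard for quasi-Newton updates with noisy curvature pairs: Powell damping of the BFGS update. This is a well-known generic technique, not necessarily the damping used in this particular paper.

```python
# Sketch of a Powell-damped BFGS update of a Hessian approximation B,
# a common safeguard when the curvature pair (s, y) comes from stochastic
# gradients and s^T y may fail to be sufficiently positive.
import numpy as np

def powell_damped_bfgs(B, s, y, delta=0.2):
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < delta * sBs:                        # damp y to preserve positive definiteness
        theta = (1.0 - delta) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs      # damped pair satisfies s^T y = delta * s^T B s
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```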
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
- Computer Science
- SIAM J. Optim.
- 2002
An SQP algorithm is discussed that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems, along with a reduced-Hessian semidefinite QP solver (SQOPT).
Robust Stochastic Approximation Approach to Stochastic Programming
- Computer Science, Mathematics
- SIAM J. Optim.
- 2009
It is intended to demonstrate that a properly modified SA approach can be competitive with, and even significantly outperform, the SAA method for a certain class of convex stochastic problems.
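A minimal sketch of the robust SA recipe: a constant, comparatively long stepsize combined with iterate averaging (Polyak-Ruppert style). The stepsize value and iteration count are illustrative.

```python
# Sketch of robust stochastic approximation: run SA with a long,
# non-vanishing stepsize and return the average of the iterates.
import numpy as np

def robust_sa(x0, grad_est, n_iters=1000, eta=0.05):
    x = x0.copy()
    x_avg = x0.copy()
    for k in range(1, n_iters + 1):
        x = x - eta * grad_est(x)          # constant-stepsize SA step
        x_avg += (x - x_avg) / (k + 1)     # running average over iterates
    return x_avg                           # the averaged iterate is the output
```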
Adaptive Sampling Strategies for Stochastic Optimization
- Mathematics, Computer Science
- SIAM J. Optim.
- 2018
It is shown that the inner product test improves upon the well-known norm test and can be used as the basis for an algorithm that is globally convergent on nonconvex functions and enjoys a global linear rate of convergence on strongly convex functions.
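A sketch of the practical inner product test: the batch size is adequate when the per-sample gradients are, on average, sufficiently aligned with the batch gradient. The batch gradient stands in for the true gradient, and the constant theta is illustrative.

```python
# Sketch of the practical inner product test: pass when the variance of
# the per-sample inner products g_i^T g is small relative to ||g||^4,
# with the batch gradient g as a proxy for grad F.
import numpy as np

def inner_product_test(per_sample_grads, theta=0.9):
    g = per_sample_grads.mean(axis=0)            # batch gradient
    inner = per_sample_grads @ g                 # per-sample terms g_i^T g
    var = inner.var(ddof=1) / len(inner)         # variance of their mean
    return var <= theta**2 * (g @ g)**2          # pass: keep current batch size
```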
Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
- Computer Science
- NeurIPS
- 2019
This work proposes to use line-search techniques to automatically set the step size when training models that can interpolate the data, and proves that SGD with a stochastic variant of the classic Armijo line search attains the deterministic convergence rates for both convex and strongly convex functions.
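A minimal sketch of SGD with a stochastic Armijo line search, which backtracks on the same minibatch used for the gradient; function names and constants are illustrative.

```python
# Sketch of one SGD step with a stochastic Armijo line search: shrink the
# stepsize on the current minibatch until the Armijo condition holds.
import numpy as np

def sgd_armijo_step(x, f_batch, g_batch, eta0=1.0, c=0.1, beta=0.5, max_backtracks=30):
    """f_batch(x): minibatch loss; g_batch(x): minibatch gradient."""
    g = g_batch(x)
    fx = f_batch(x)
    eta = eta0
    for _ in range(max_backtracks):
        if f_batch(x - eta * g) <= fx - c * eta * (g @ g):
            break                                # Armijo condition met on this batch
        eta *= beta                              # backtrack: shrink the stepsize
    return x - eta * g
```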
Stochastic Cubic Regularization for Fast Nonconvex Optimization
- Computer Science, Mathematics
- NeurIPS
- 2018
The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions using only $\tilde{\mathcal{O}}(\epsilon^{-3.5})$ stochastic gradient and stochastic Hessian-vector product evaluations.
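For intuition, a sketch of the cubic-regularized subproblem minimized by plain gradient descent using only Hessian-vector products; the inner solver and its constants are illustrative, not the paper's exact routine.

```python
# Sketch: approximately minimize the cubic-regularized model
#   m(d) = g^T d + 0.5 d^T H d + (rho / 3) * ||d||^3
# using only Hessian-vector products hvp(d) = H d.
import numpy as np

def cubic_subproblem(g, hvp, rho, n_steps=200, lr=0.01):
    d = np.zeros_like(g)
    for _ in range(n_steps):
        grad_m = g + hvp(d) + rho * np.linalg.norm(d) * d   # gradient of m(d)
        d -= lr * grad_m
    return d
```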
Scalable Nonlinear Programming via Exact Differentiable Penalty Functions and Trust-Region Newton Methods
- Computer Science, Mathematics
- SIAM J. Optim.
- 2014
An approach for nonlinear programming is presented, based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques; it provides the features required for scalability and is well suited to parametric optimization problems that must be solved in a latency-limited environment.
Exact penalty function algorithms for finite dimensional and control optimization problems
- Mathematics, Computer Science
- 1978
In this thesis, first- and second-order algorithms are proposed for solving equality-constrained finite-dimensional minimization problems and optimal control problems with terminal equality constraints, using the exact penalty function approach.
On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes
- Computer Science, Mathematics
- AISTATS
- 2019
This paper theoretically analyzes, in the convex and nonconvex settings, a generalized version of the AdaGrad stepsizes and gives sufficient conditions for these stepsizes to achieve almost sure asymptotic convergence of the gradients to zero, providing the first guarantee for generalized AdaGrad stepsizes in the nonconvex setting.
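A sketch of the scalar ("norm") variant of the AdaGrad stepsize analyzed in this line of work: divide a base stepsize by the square root of the accumulated squared gradient norms. Constants are illustrative.

```python
# Sketch of SGD with an AdaGrad-style stepsize.
import numpy as np

def adagrad_norm(x0, grad_est, n_iters=1000, eta=0.1, eps=1e-8):
    x, acc = x0.copy(), 0.0
    for _ in range(n_iters):
        g = grad_est(x)                          # stochastic gradient
        acc += g @ g                             # accumulate squared gradient norms
        x = x - (eta / np.sqrt(acc + eps)) * g   # shrinking adaptive stepsize
    return x
```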