Optimization Methods for Large-Scale Machine Learning
- L. Bottou, Frank E. Curtis, J. Nocedal
- Computer Science, SIAM Review
- 15 June 2016
A major theme of this study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter; this contrast motivates a discussion of the next generation of optimization methods for large-scale machine learning.
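As a minimal sketch of the stochastic gradient method at the center of this survey (illustrative only: the data, step size, and helper names below are made up, not taken from the paper):

    import numpy as np

    def sgd(grad_sample, w0, n_samples, step=0.01, epochs=10, seed=0):
        # Stochastic gradient method: each iteration steps along the
        # negative gradient of a single randomly drawn sample.
        rng = np.random.default_rng(seed)
        w = w0.copy()
        for _ in range(epochs):
            for i in rng.permutation(n_samples):
                w -= step * grad_sample(w, i)
        return w

    # Usage on a synthetic least-squares problem (hypothetical data).
    rng = np.random.default_rng(1)
    X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)
    grad = lambda w, i: (X[i] @ w - y[i]) * X[i]
    w_hat = sgd(grad, np.zeros(5), n_samples=1000)

The point the survey emphasizes is that each such step is cheap, using a single sample rather than a full pass over the data.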
A Sequential Quadratic Programming Algorithm for Nonconvex, Nonsmooth Constrained Optimization
- Frank E. Curtis, M. Overton
- Mathematics, Computer Science, SIAM Journal on Optimization
- 15 May 2012
A line search algorithm is presented for situations in which the objective and constraint functions are locally Lipschitz and continuously differentiable on open dense subsets of $\mathbb{R}^{n}$.
A globally convergent primal-dual active-set framework for large-scale convex quadratic optimization
- Frank E. Curtis, Zheng Han, Daniel P. Robinson
- Mathematics, Computer Science, Computational Optimization and Applications
- 1 March 2015
This work presents a primal-dual active-set framework for solving large-scale convex quadratic optimization problems (QPs) and explains the relationship between this framework and semi-smooth Newton techniques, finding that this approach is globally convergent for strictly convex QPs.
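For background, a primal-dual active-set method works directly with the optimality conditions of the QP. For a strictly convex QP in a standard form (not necessarily the exact formulation treated in the paper), \[ \min_x \tfrac{1}{2} x^T H x + c^T x \quad \text{s.t.} \quad Ax \ge b, \] the KKT conditions are $Hx + c - A^T y = 0$, $Ax \ge b$, $y \ge 0$, and $y^T (Ax - b) = 0$; the method iteratively guesses the active set (the constraints with $A_i x = b_i$), solves the resulting equality-constrained system for a primal-dual pair, and revises the guess until all conditions hold.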
Infeasibility Detection and SQP Methods for Nonlinear Optimization
- R. Byrd, Frank E. Curtis, J. Nocedal
- Computer Science, SIAM Journal on Optimization
- 1 April 2010
A sequential quadratic programming method derived from an exact penalty approach that adjusts the penalty parameter automatically, when appropriate, to emphasize feasibility over optimality is presented.
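For reference, the $\ell_1$ exact penalty function for a problem with objective $f$ and equality constraints $c(x) = 0$ is \[ \phi(x; \pi) = f(x) + \pi \, \| c(x) \|_1 , \] whose minimizers coincide with solutions of the constrained problem once $\pi$ is sufficiently large; raising $\pi$ shifts the emphasis toward feasibility, which is what the automatic adjustment described above exploits (this is the standard form; see the paper for the precise update rule).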
Flexible penalty functions for nonlinear constrained optimization
- Frank E. Curtis, J. Nocedal
- Mathematics
- 1 October 2008
We propose a globalization strategy for nonlinear constrained optimization. The method employs a ‘flexible’ penalty function to promote convergence, where during each iteration the penalty parameter…
A penalty-interior-point algorithm for nonlinear constrained optimization
- Frank E. Curtis
- Computer Science, Mathematical Programming Computation
- 24 April 2012
The goal of this paper is to present a penalty-interior-point method that possesses the advantages of penalty and interior-point techniques, but does not suffer from their disadvantages.
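One standard way to blend the two techniques (a generic formulation; the paper's precise subproblem may differ) is, for inequality constraints $c(x) \ge 0$, to relax the constraints with slacks $r \ge 0$ and solve \[ \min_{x,\, r \ge 0} \; f(x) + \pi \, e^T r - \mu \sum_i \log\big( c_i(x) + r_i \big), \] where the barrier parameter $\mu$ supplies interior-point behavior and the penalty parameter $\pi$ controls how strongly the relaxation $r$ is driven to zero.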
A BFGS-SQP method for nonsmooth, nonconvex, constrained optimization and its evaluation using relative minimization profiles
- Frank E. Curtis, Tim Mitchell, M. Overton
- Computer Science, Optimization Methods and Software
- 2 January 2017
The proposed algorithm is a sequential quadratic optimization method that employs Broyden-Fletcher-Goldfarb-Shanno quasi-Newton Hessian approximations and an exact penalty function whose parameter is controlled using a steering strategy.
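The quasi-Newton update named here is the standard BFGS formula: with $s_k = x_{k+1} - x_k$ and $y_k$ the corresponding difference of gradients (of the objective, or of a penalty or Lagrangian function in the constrained setting), the Hessian approximation is updated as \[ H_{k+1} = H_k - \frac{H_k s_k s_k^T H_k}{s_k^T H_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}, \] which preserves positive definiteness whenever $y_k^T s_k > 0$.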
An Inexact SQP Method for Equality Constrained Optimization
- R. Byrd, Frank E. Curtis, J. Nocedal
- Mathematics, SIAM Journal on Optimization
- 1 February 2008
An algorithm for large-scale equality constrained optimization based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence is presented.
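For equality constrained problems $\min_x f(x)$ s.t. $c(x) = 0$, the exact SQP step $(d_k, \delta_k)$ solves the KKT system \[ \begin{bmatrix} W_k & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix} \begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = - \begin{bmatrix} \nabla f(x_k) + \nabla c(x_k) \lambda_k \\ c(x_k) \end{bmatrix}, \] with $W_k$ a Hessian (approximation) of the Lagrangian; an inexact SQP method instead accepts steps that leave a nonzero residual in this system, provided the residual satisfies conditions of the kind characterized in the paper.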
ADMM for multiaffine constrained optimization
- Wenbo Gao, D. Goldfarb, Frank E. Curtis
- Mathematics, Optimization Methods and Software
- 26 February 2018
It is shown that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain verifiable assumptions, converges to the set of constrained stationary points if the penalty parameter in the augmented Lagrangian is sufficiently large.
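For reference, standard two-block ADMM for $\min_{x,z} f(x) + g(z)$ s.t. $Ax + Bz = c$ iterates \[ x^{k+1} = \arg\min_x L_\rho(x, z^k, y^k), \quad z^{k+1} = \arg\min_z L_\rho(x^{k+1}, z, y^k), \quad y^{k+1} = y^k + \rho \, (A x^{k+1} + B z^{k+1} - c), \] with augmented Lagrangian $L_\rho(x, z, y) = f(x) + g(z) + y^T (Ax + Bz - c) + \tfrac{\rho}{2} \| Ax + Bz - c \|_2^2$; the setting of this paper replaces the linear coupling constraint by one that is multiaffine, i.e., affine in each block of variables when the other blocks are held fixed.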
An inexact Newton method for nonconvex equality constrained optimization
- R. Byrd, Frank E. Curtis, J. Nocedal
- Computer Science, Mathematics, Mathematical Programming
- 3 September 2009
A matrix-free line search algorithm is presented for large-scale equality constrained optimization; it allows for inexact step computations and enforces sufficient reduction in a model of an exact penalty function.
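"Matrix-free" means the step computation needs only operator-vector products, so an iterative solver can be applied without ever assembling a matrix; a minimal sketch of that idea using SciPy's conjugate gradient on an illustrative quadratic model (assumed setup, not the paper's algorithm):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    n = 100

    def hessvec(v):
        # Hessian-vector product for an illustrative SPD quadratic model;
        # in a matrix-free method this product would come from, e.g.,
        # automatic differentiation, never from an assembled matrix.
        return 2.0 * v + 0.5 * np.roll(v, 1) + 0.5 * np.roll(v, -1)

    H = LinearOperator((n, n), matvec=hessvec)
    g = np.ones(n)        # gradient of the model at the current iterate
    d, info = cg(H, -g)   # stopping CG early would yield an inexact step
    print(info)           # 0 indicates convergence to the default tolerance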
...