Corpus ID: 17660202

A derivative-free trust-region augmented Lagrangian algorithm

Charles Audet, Sébastien Le Digabel, and Mathilde Peyrega
We present a new derivative-free trust-region (DFTR) algorithm for general nonlinear constrained problems, based on an augmented Lagrangian method. No derivatives are used, either for the objective function or for the constraints. The augmented Lagrangian method, known as an effective tool for solving equality- and inequality-constrained optimization problems when derivatives are available, is exploited to minimize subproblems composed of quadratic models that approximate the original objective…
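To make the augmented Lagrangian idea concrete, here is a minimal sketch of the outer loop with a derivative-free inner solver. The inner minimizer below is a simple coordinate (pattern) search, not the paper's trust-region quadratic-model solver; the penalty parameter, multiplier update, and example problem are illustrative assumptions, not the authors' settings.

```python
def pattern_search(f, x, step=0.5, tol=1e-8, max_iter=2000):
    """Minimize f without derivatives by axis-aligned polling."""
    x = list(x)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5  # no poll point improved: refine the mesh
        it += 1
    return x

def augmented_lagrangian(f, c, x0, lam=0.0, mu=10.0, outer=12):
    """Solve min f(x) s.t. c(x) = 0 (one equality constraint, for clarity)."""
    x = list(x0)
    for _ in range(outer):
        # Augmented Lagrangian: f + lam*c + (mu/2)*c^2
        La = lambda z: f(z) + lam * c(z) + 0.5 * mu * c(z) ** 2
        x = pattern_search(La, x)
        lam += mu * c(x)  # first-order multiplier update
        # mu is kept fixed here; implementations often increase it
        # when the constraint violation does not decrease enough.
    return x

# Example: minimize x0^2 + x1^2 subject to x0 + x1 = 1 (solution (0.5, 0.5))
sol = augmented_lagrangian(lambda z: z[0] ** 2 + z[1] ** 2,
                           lambda z: z[0] + z[1] - 1.0,
                           [0.0, 0.0])
```

Because no gradients of f, c, or the augmented Lagrangian are ever evaluated, this structure stays entirely derivative-free; all the smoothing work is done by the multiplier and penalty terms.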


Efficient solution of quadratically constrained quadratic subproblems within the MADS algorithm ∗

This work explores different algorithms that exploit the structure of the quadratic models: the first applies an ℓ1 exact penalty function, the second uses an augmented Lagrangian, and the third combines the two, resulting in a new algorithm.



A trust-region derivative-free algorithm for constrained optimization

A trust-region algorithm for constrained optimization problems in which derivatives of the objective function are not available: the objective is approximated by a model obtained by quadratic interpolation, which is then minimized within the intersection of the feasible set and the trust region.
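The interpolate-then-minimize loop described above can be sketched in one dimension. This is an illustrative toy, assuming an unconstrained objective and a simple accept/shrink rule; the cited work builds full quadratic models in R^n and intersects the trust region with the feasible set.

```python
def quad_model(f, x, delta):
    """Interpolate f at x-Δ, x, x+Δ: model m(s) = a*s^2 + b*s + f0, s = y - x."""
    fm, f0, fp = f(x - delta), f(x), f(x + delta)
    a = (fp - 2 * f0 + fm) / (2 * delta ** 2)  # half the curvature estimate
    b = (fp - fm) / (2 * delta)                # central slope estimate
    return a, b, f0

def trust_region_step(a, b, delta):
    """Minimizer of a*s^2 + b*s on the trust region [-Δ, Δ]."""
    if a > 0:
        s = -b / (2 * a)
        return max(-delta, min(delta, s))  # clip interior minimizer
    return -delta if b > 0 else delta      # nonconvex model: go to a boundary

def dftr_minimize(f, x, delta=1.0, iters=30):
    for _ in range(iters):
        a, b, f0 = quad_model(f, x, delta)
        s = trust_region_step(a, b, delta)
        if f(x + s) < f0:   # successful step: accept the point
            x = x + s
        else:               # reject the step and shrink the trust region
            delta *= 0.5
    return x

# Example: minimize (t - 2)^2 starting from 0
x_star = dftr_minimize(lambda t: (t - 2.0) ** 2, 0.0)
```

Since interpolation only uses function values, the model gradient and curvature come for free from the sample points, which is the core trick shared by the model-based methods listed here.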

Numerical experience with a derivative-free trust-funnel method for nonlinear optimization problems with general nonlinear constraints

This work exploits techniques developed for derivative-free optimization (DFO) to obtain a method that can also solve problems where the derivatives are unavailable or available only at a prohibitive cost, and that compares favourably to other well-known model-based DFO algorithms.

On convergence analysis of a derivative-free trust region algorithm for constrained optimization with separable structure

A derivative-free trust-region algorithm for constrained minimization problems with separable structure, in which derivatives of the objective function are not available and cannot be directly approximated.

A sequential quadratic programming algorithm for equality-constrained optimization without derivatives

A new model-based trust-region derivative-free optimization algorithm is presented that handles nonlinear equality constraints by applying a sequential quadratic programming (SQP) approach; its implementation can be enhanced to outperform well-known DFO packages on smooth equality-constrained optimization problems.

A progressive barrier derivative-free trust-region algorithm for constrained optimization

Computational experiments on 40 smooth constrained problems indicate that the proposed trust-region method is competitive with both COBYLA and the NOMAD software.

A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation

An iterative algorithm for nonlinearly constrained optimization calculations when no derivatives are available: at each iteration a new vector of variables is calculated, which may replace one of the current simplex vertices, either to improve the shape of the simplex or because it is the best vector found so far.
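The linear-interpolation idea behind this simplex method can be sketched in R^2: fit a linear model through the three vertices, then move away from the model gradient to propose a replacement for the worst vertex. The step length and acceptance rule here are illustrative assumptions; Powell's method also models the constraints and actively manages simplex geometry.

```python
def linear_model_2d(verts, fvals):
    """Fit f(y) ≈ g·y + c through 3 vertices: solve g·(v_i - v_0) = f_i - f_0."""
    (x0, y0), (x1, y1), (x2, y2) = verts
    a11, a12 = x1 - x0, y1 - y0
    a21, a22 = x2 - x0, y2 - y0
    b1, b2 = fvals[1] - fvals[0], fvals[2] - fvals[0]
    det = a11 * a22 - a12 * a21   # nonzero iff the simplex is nondegenerate
    g1 = (b1 * a22 - b2 * a12) / det
    g2 = (a11 * b2 - a21 * b1) / det
    return g1, g2

def replace_worst(f, verts, step=0.5):
    """One illustrative iteration: step from the best vertex along -g."""
    fvals = [f(v) for v in verts]
    g1, g2 = linear_model_2d(verts, fvals)
    norm = (g1 * g1 + g2 * g2) ** 0.5 or 1.0
    worst = max(range(3), key=lambda i: fvals[i])
    best = min(range(3), key=lambda i: fvals[i])
    bx, by = verts[best]
    out = list(verts)
    out[worst] = (bx - step * g1 / norm, by - step * g2 / norm)
    return out

# The linear model recovers the gradient of an affine function exactly:
g = linear_model_2d([(0, 0), (1, 0), (0, 1)], [1, 4, 3])
```

For an affine objective the fitted g is exact, which is why geometry maintenance (keeping the simplex nondegenerate) matters so much in these methods: a flat simplex makes `det` vanish and the model gradient meaningless.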

A derivative-free trust-funnel method for equality-constrained nonlinear optimization

A new derivative-free method is proposed for solving equality-constrained nonlinear optimization problems; it is based on polynomial interpolation models and uses a self-correcting geometry procedure to ensure that the interpolation problem remains well defined.

A Direct Search Approach to Nonlinear Programming Problems Using an Augmented Lagrangian Method with Explicit Treatment of Linear Constraints

This paper shows that, by using a generating set search method for problems with linear constraints, the approach can solve the linearly constrained subproblems with sufficient accuracy to satisfy the analytic requirements of the general framework, even without explicit recourse to the gradient of the augmented Lagrangian function.

A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

This work gives a pattern search method for nonlinearly constrained optimization that is an adaptation of the bound-constrained augmented Lagrangian method first proposed by Conn, Gould, and Toint, and that is the first provably convergent direct-search method for general nonlinear programming.