A progressive barrier derivative-free trust-region algorithm for constrained optimization

  • Charles Audet, Andrew R. Conn, Sébastien Le Digabel, Mathilde Peyrega
    Computational Optimization and Applications

We study derivative-free constrained optimization problems and propose a trust-region method that builds linear or quadratic models around the best feasible and the best infeasible solutions found so far. These models are optimized within a trust region, and the progressive barrier methodology handles the constraints by progressively pushing the infeasible solutions toward the feasible domain. Computational experiments on 40 smooth constrained problems indicate that the proposed method…
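The progressive-barrier bookkeeping described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names `violation`, `update_incumbents`, and `h_max` are hypothetical, and the aggregate violation uses the common squared-slack form h(x) = Σ max(0, cᵢ(x))².

```python
def violation(constraints, x):
    """Aggregate constraint violation h(x) = sum of squared violations
    of the inequality constraints c_i(x) <= 0."""
    return sum(max(0.0, c(x)) ** 2 for c in constraints)

def update_incumbents(points, constraints, f, h_max):
    """Split trial points into a best feasible and a best infeasible
    incumbent. Points whose violation exceeds the barrier h_max are
    discarded; the barrier is then tightened toward the least
    infeasible survivor, progressively pushing iterates to feasibility."""
    feasible = [x for x in points if violation(constraints, x) == 0.0]
    infeasible = [x for x in points
                  if 0.0 < violation(constraints, x) <= h_max]
    best_feas = min(feasible, key=f) if feasible else None
    best_inf = (min(infeasible, key=lambda x: violation(constraints, x))
                if infeasible else None)
    new_h_max = violation(constraints, best_inf) if best_inf else 0.0
    return best_feas, best_inf, new_h_max
```

In a full method, the linear or quadratic models would be centered at `best_feas` and `best_inf` and minimized within the trust region; this fragment only shows how the shrinking barrier filters trial points.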

A derivative-free trust-region augmented Lagrangian algorithm

A new derivative-free trust-region (DFTR) algorithm to solve general nonlinear constrained problems that uses an augmented Lagrangian method, outperforms HOPSPACK, and is competitive with COBYLA.

A merit function approach for evolution strategies

  • Y. Diouane
    EURO J. Comput. Optim.
  • 2021

Handling of constraints in multiobjective blackbox optimization

It is proved that the integration of two new constraint-handling approaches into the blackbox constrained multiobjective optimization algorithm DMulti-MADS, an extension of the Mesh Adaptive Direct Search (MADS) algorithm for single-objective constrained optimization, makes it competitive with other state-of-the-art algorithms.

Modifier Adaptation Meets Bayesian Optimization and Derivative-Free Optimization

This paper investigates a new class of modifier-adaptation schemes to overcome plant-model mismatch in real-time optimization of uncertain processes through the integration of concepts from the areas of Bayesian optimization and derivative-free optimization.

Constraint scaling in the Mesh Adaptive Direct Search algorithm

The present work proposes a dynamic methodology to select weights for each constraint in the Mesh Adaptive Direct Search (MADS) algorithm with the progressive barrier.

Constrained blackbox optimization with the NOMAD solver on the COCO constrained test suite

The mesh adaptive direct search (MADS) derivative-free optimization algorithm using the progressive barrier strategy to handle quantifiable and relaxable constraints is described and tested on the new bbob-constrained suite of analytical constrained problems from the COCO platform, and compared with the CMA-ES heuristic.

A derivative-free approach to optimal control problems with piecewise constant Mayer cost function

Numerical simulations are performed on some standard control systems to show the efficiency of the hybrid method, where Nomad and Ipopt are used as, respectively, derivative-free optimization and smooth optimization solvers.

Model-Based Methods in Derivative-Free Nonsmooth Optimization

This chapter surveys some of the progress of model-based DFO for nonsmooth functions and discusses methods for constructing models of smooth functions and their accuracy.

A trust-region derivative-free algorithm for constrained optimization

A trust-region algorithm is presented for constrained optimization problems in which the derivatives of the objective function are not available; the objective is approximated by a model obtained by quadratic interpolation, which is then minimized within the intersection of the feasible set with the trust region.
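The core step of such interpolation-based trust-region methods can be illustrated in one dimension: fit a quadratic through sampled points, then minimize it over the trust interval. A minimal sketch, with hypothetical names; real methods work in n dimensions and manage the geometry of the sample set.

```python
def quadratic_model(samples):
    """Fit q(x) = a*x^2 + b*x + c through three (x, f(x)) pairs
    using Newton divided differences."""
    (x1, f1), (x2, f2), (x3, f3) = samples
    a = ((f3 - f1) / (x3 - x1) - (f2 - f1) / (x2 - x1)) / (x3 - x2)
    b = (f2 - f1) / (x2 - x1) - a * (x1 + x2)
    c = f1 - a * x1 ** 2 - b * x1
    return a, b, c

def trust_region_step(a, b, center, delta):
    """Minimize the model over the trust region [center-delta, center+delta]."""
    lo, hi = center - delta, center + delta
    if a > 0:
        x_star = -b / (2 * a)            # unconstrained model minimizer
        return min(max(x_star, lo), hi)  # clip to the trust region
    # Non-convex model: the minimum lies on the trust-region boundary.
    q = lambda x: a * x * x + b * x
    return lo if q(lo) <= q(hi) else hi
```

A constrained variant would further intersect `[lo, hi]` with the feasible set before minimizing, as the entry above describes.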

On convergence analysis of a derivative-free trust region algorithm for constrained optimization with separable structure

A derivative-free trust region algorithm is proposed for constrained minimization problems with separable structure, where derivatives of the objective function are not available and cannot be directly approximated.

A derivative-free trust-funnel method for equality-constrained nonlinear optimization

A new derivative-free method is proposed for solving equality-constrained nonlinear optimization problems based on the use of polynomial interpolation models and uses a self-correcting geometry procedure to ensure that the interpolation problem is well defined.

Numerical experience with a derivative-free trust-funnel method for nonlinear optimization problems with general nonlinear constraints

This work exploits techniques developed for derivative-free optimization (DFO) to obtain a method that can also be used to solve problems where the derivatives are unavailable or are available at a prohibitive cost and compares favourably to other well-known model-based algorithms for DFO.

Recent advances in trust region algorithms

Recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization and optimization without derivatives are reviewed.

A sequential quadratic programming algorithm for equality-constrained optimization without derivatives

A new model-based trust-region derivative-free optimization algorithm which can handle nonlinear equality constraints by applying a sequential quadratic programming (SQP) approach is presented and the implementation of such a method can be enhanced to outperform well-known DFO packages on smooth equality-constrained optimization problems.

Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points

This paper proves global convergence for first- and second-order stationary points of a class of derivative-free trust-region methods for unconstrained optimization based on the sequential minimization of quadratic models built from evaluating the objective function at sample sets.

Global convergence of a class of trust region algorithms for optimization with simple bounds

It is shown that, when the strict complementarily condition holds, the proposed algorithms reduce to an unconstrained calculation after finitely many iterations, allowing a fast asymptotic rate of convergence.

A Progressive Barrier for Derivative-Free Nonlinear Programming

The LTMads-PB is a useful practical extension of the earlier LTMads-EB algorithm, particularly in the common case for real problems where no feasible point is known, and it performs as well when feasible points are known.

A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation

An iterative algorithm is presented for nonlinearly constrained optimization calculations when there are no derivatives; at each iteration a new vector of variables is calculated, which may replace one of the current simplex vertices, either to improve the shape of the simplex or because it is the best vector found so far.
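The linear interpolation step underlying this method (the basis of COBYLA-type algorithms) can be sketched in two dimensions: the objective is modeled by the unique affine function agreeing with it at the n+1 simplex vertices. This is an illustrative fragment only; the function name and the hand-solved 2x2 system are for exposition.

```python
def linear_model_2d(vertices, fvals):
    """Interpolate f by f(v0) + g . (x - v0) through the 3 vertices of a
    2-D simplex, a toy instance of the linear interpolation described above.
    Solves the 2x2 system (vi - v0) . g = fi - f0 by Cramer's rule."""
    (v0, v1, v2), (f0, f1, f2) = vertices, fvals
    a11, a12 = v1[0] - v0[0], v1[1] - v0[1]
    a21, a22 = v2[0] - v0[0], v2[1] - v0[1]
    det = a11 * a22 - a12 * a21  # nonzero iff the simplex is non-degenerate
    g1 = ((f1 - f0) * a22 - (f2 - f0) * a12) / det
    g2 = (a11 * (f2 - f0) - a21 * (f1 - f0)) / det
    return f0, (g1, g2)  # model value at v0 and model gradient
```

When the simplex degenerates (`det` near zero), the model gradient becomes unreliable, which is why the algorithm sometimes replaces a vertex purely to improve the simplex shape.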