A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation
@inproceedings{Powell1994ADS,
  title  = {A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation},
  author = {M. J. D. Powell},
  year   = {1994}
}
An iterative algorithm is proposed for nonlinearly constrained optimization calculations when there are no derivatives. Each iteration forms linear approximations to the objective and constraint functions by interpolation at the vertices of a simplex, and a trust region bound restricts each change to the variables. Thus a new vector of variables is calculated, which may replace one of the current vertices, either to improve the shape of the simplex or because it is the best vector that has been…
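The core modelling step described above — fitting a linear function that interpolates the objective at the n+1 vertices of a simplex — can be sketched in a few lines. This is an illustrative sketch only, not Powell's implementation; the helper names (`solve_linear`, `linear_model`) are invented for this example:

```python
def solve_linear(A, b):
    """Solve the square system A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def linear_model(f, vertices):
    """Fit m(x) = c + g.x so that m interpolates f at the n+1 simplex vertices."""
    A = [[1.0] + list(v) for v in vertices]        # one interpolation condition per vertex
    coeffs = solve_linear(A, [f(v) for v in vertices])
    c, g = coeffs[0], coeffs[1:]
    return lambda x: c + sum(gi * xi for gi, xi in zip(g, x))

# Example: linear model of f(x, y) = x^2 + y^2 on the simplex {(0,0), (1,0), (0,1)}
f = lambda v: v[0] ** 2 + v[1] ** 2
m = linear_model(f, [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(m((0.0, 0.0)), m((1.0, 0.0)))  # matches f at the vertices: 0.0 1.0
```

In the full algorithm, the same construction is applied to each constraint function, and the next trial point minimizes the model objective subject to the model constraints within the trust region. (Powell's method is available in practice as the COBYLA solver in libraries such as SciPy.)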
1,015 Citations
A derivative-free exact penalty algorithm: basic ideas, convergence theory and computational studies
- Computer Science · Computational and Applied Mathematics
- 2022
A trust-region method for constrained optimization problems where the gradients of the objective and constraints are unavailable; it uses an exact penalty function and converges directly to a constrained optimum once a sufficiently high penalty parameter is selected.
Direct search algorithms for optimization calculations
- Computer Science · Acta Numerica
- 1998
Line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing are addressed.
On the Lagrange functions of quadratic models that are defined by interpolation
- Mathematics
- 2001
Quadratic models are of fundamental importance to the efficiency of many optimization algorithms when second derivatives of the objective function influence the required values of the variables. They…
Inexact Restoration method for nonlinear optimization without derivatives
- Computer Science · J. Comput. Appl. Math.
- 2015
On trust region methods for unconstrained minimization without derivatives
- Mathematics · Math. Program.
- 2003
Algorithms for unconstrained minimization without derivatives that form linear or quadratic models by interpolation to values of the objective function are considered, because numerical experiments show that they are often more efficient than full quadratic models for general objective functions.
Global convergence of a derivative-free inexact restoration filter algorithm for nonlinear programming
- Computer Science
- 2017
This work presents an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives, and proves that the full steps are efficient in the sense that, near a feasible nonstationary point, the decrease in the objective function is relatively large, ensuring the global convergence of the algorithm.
On the convergence of trust region algorithms for unconstrained minimization without derivatives
- Mathematics · Comput. Optim. Appl.
- 2012
It is proved that, if $F$ is bounded below, if $\nabla^2 F$ is also bounded, and if the number of iterations is infinite, then the sequence of gradients $\nabla F(\underline{x}_k)$, $k = 1, 2, 3, \ldots$, converges to zero, where $\underline{x}_k$ is the centre of the trust region of the $k$-th iteration.
Numerical experience with a derivative-free trust-funnel method for nonlinear optimization problems with general nonlinear constraints
- Computer Science · Optim. Methods Softw.
- 2016
This work exploits techniques developed for derivative-free optimization (DFO) to obtain a method that can also be used to solve problems where the derivatives are unavailable or are available only at a prohibitive cost, and the method compares favourably to other well-known model-based algorithms for DFO.
A sequential quadratic programming algorithm for equality-constrained optimization without derivatives
- Computer Science · Optim. Lett.
- 2016
A new model-based trust-region derivative-free optimization algorithm which can handle nonlinear equality constraints by applying a sequential quadratic programming (SQP) approach is presented, and it is shown that the implementation of such a method can be enhanced to outperform well-known DFO packages on smooth equality-constrained optimization problems.
Wedge trust region methods for derivative free optimization
- Mathematics · Math. Program.
- 2002
A new method for derivative-free optimization is presented. It is designed for solving problems in which the objective function is smooth and the number of variables is moderate, but the…
References
An extension of the simplex method to constrained nonlinear optimization
- Computer Science
- 1989
The simplex algorithm of Nelder and Mead is extended to handle nonlinear optimization problems with constraints, and a delayed reflection is introduced to prevent the simplex from collapsing into a subspace near the constraints.
Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation
- Biology
- 1962
A technique for empirical optimisation is presented in which a sequence of experimental designs each in the form of a regular or irregular simplex is used, each simplex having all vertices but one in…
A Simplex Method for Function Minimization
- Mathematics · Comput. J.
- 1965
A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of…
Practical Methods of Optimization
- Computer Science
- 1988
The aim of this book is to provide a discussion of constrained optimization and its applications to linear programming and other optimization problems.
A Nonlinear Programming Technique for the Optimization of Continuous Processing Systems
- Computer Science
- 1961
A numerical example, a model construction example, and a description of a particular existing computer system are included in order to clarify the mode of operation of the method.
More test examples for nonlinear programming codes
- Computer Science
- 1981
The purpose of this note is to point out how an interested mathematical programmer could obtain computer programs of more than 120 constrained nonlinear programming problems which have been used in the past to test and compare optimization codes.
Test examples for nonlinear programming codes
- Computer Science
- 1980
The purpose of this note is to point out how an interested mathematical programmer could obtain computer programs of more than 120 constrained nonlinear programming problems which have been used in the past to test and compare optimization codes.
An Automatic Method for Finding the Greatest or Least Value of a Function
- Computer Science · Comput. J.
- 1960