# A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation

```bibtex
@inproceedings{Powell1994ADS,
  title={A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation},
  author={M. J. D. Powell},
  year={1994}
}
```
An iterative algorithm is proposed for nonlinearly constrained optimization calculations when there are no derivatives. Each iteration forms linear approximations to the objective and constraint functions by interpolation at the vertices of a simplex, and a trust region bound restricts each change to the variables. Thus a new vector of variables is calculated, which may replace one of the current vertices, either to improve the shape of the simplex or because it is the best vector that has been…
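The algorithm this abstract describes is implemented in SciPy under the name COBYLA. A minimal usage sketch (the objective, constraint, and starting point below are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize a smooth quadratic subject to one
# inequality constraint, using Powell's COBYLA method, which models both
# functions by linear interpolation at the vertices of a simplex.
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# SciPy treats 'ineq' constraints as fun(x) >= 0, i.e. x0 + x1 <= 2 here.
constraints = ({'type': 'ineq', 'fun': lambda x: 2.0 - x[0] - x[1]},)

res = minimize(objective, x0=np.zeros(2), method='COBYLA',
               constraints=constraints)
# The constrained minimizer is the projection of (1, 2) onto the line
# x0 + x1 = 2, namely (0.5, 1.5).
```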
## Citations

1,015 Citations
- *Computational and Applied Mathematics* (Computer Science), 2022. A trust-region method for the solution of constrained optimization problems where the gradients of the objective and constraints are not available, by the use of an exact penalty, which converges directly to a constrained optimum once a sufficiently high penalty is selected.
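The exact-penalty idea mentioned in that entry can be illustrated with a one-dimensional sketch (the function, constraint, and penalty parameter below are invented for illustration, not from the cited paper): once the penalty parameter exceeds the optimal Lagrange multiplier, the unconstrained minimizer of the penalized function coincides with the constrained minimizer.

```python
import numpy as np

# Sketch of an exact (l1) penalty: minimize f(x) = x^2 subject to x >= 1.
# The constrained minimizer is x* = 1 with Lagrange multiplier 2, so any
# penalty parameter mu > 2 makes x* the unconstrained minimizer of
#   P(x) = f(x) + mu * max(0, 1 - x).
f = lambda x: x ** 2
penalized = lambda x, mu: f(x) + mu * max(0.0, 1.0 - x)

xs = np.linspace(-2.0, 3.0, 5001)               # grid search for clarity
best = xs[np.argmin([penalized(x, mu=5.0) for x in xs])]
# best is approximately 1.0, the constrained minimizer
```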
- Line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing are addressed.
- Quadratic models are of fundamental importance to the efficiency of many optimization algorithms when second derivatives of the objective function influence the required values of the variables. They…
- Algorithms for unconstrained minimization without derivatives that form linear or quadratic models by interpolation to values of the objective function are considered, because numerical experiments show that they are often more efficient than full quadratic models for general objective functions.
- Computer Science, 2017. This work presents an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives, and proves that the full steps are efficient in the sense that near a feasible nonstationary point, the decrease in the objective function is relatively large, ensuring the global convergence results of the algorithm.
- It is proved that, if $F$ is bounded below, if $\nabla^2 F$ is also bounded, and if the number of iterations is infinite, then the sequence of gradients $\nabla F(\underline{x}_{\,k})$ converges to zero, where $\underline{x}_{\,k}$ is the centre of the trust region of the k-th iteration.
- *Optim. Methods Softw.* (Computer Science), 2016. This work exploits techniques developed for derivative-free optimization (DFO) to obtain a method that can also be used to solve problems where the derivatives are unavailable or are available at a prohibitive cost, and compares favourably to other well-known model-based algorithms for DFO.
- A new model-based trust-region derivative-free optimization algorithm which can handle nonlinear equality constraints by applying a sequential quadratic programming (SQP) approach is presented, and the implementation of such a method can be enhanced to outperform well-known DFO packages on smooth equality-constrained optimization problems.
- *Math. Program.* (Mathematics), 2002. A new method for derivative-free optimization is presented. It is designed for solving problems in which the objective function is smooth and the number of variables is moderate, but the…

## References


- The simplex algorithm of Nelder and Mead is extended to handle nonlinear optimization problems with constraints, and a delayed reflection is introduced to prevent the simplex from collapsing into a subspace near the constraints.
- Biology, 1962. A technique for empirical optimisation is presented in which a sequence of experimental designs, each in the form of a regular or irregular simplex, is used, each simplex having all vertices but one in…
- *Comput. J.* (Mathematics), 1965. A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of…
- The aim of this book is to provide a discussion of constrained optimization and its applications to linear programming and other optimization problems.
- Computer Science, 1961. A numerical example, a model construction example, and a description of a particular existing computer system are included in order to clarify the mode of operation of the method.
- Computer Science, 1980. The purpose of this note is to point out how an interested mathematical programmer could obtain computer programs of more than 120 constrained nonlinear programming problems which have been used in the past to test and compare optimization codes.
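The Nelder and Mead reference above compares function values at the (n + 1) vertices of a simplex and replaces the worst one. The basic reflection step can be sketched as follows (a minimal illustration of that one step only; the full method also uses expansion, contraction, and shrink moves):

```python
import numpy as np

def reflect_worst(simplex, f):
    """Reflect the worst of the n + 1 vertices through the centroid of the
    remaining n vertices, replacing it only if the reflected point improves
    on it."""
    simplex = np.asarray(simplex, dtype=float).copy()
    vals = np.array([f(v) for v in simplex])
    worst = int(np.argmax(vals))
    centroid = (simplex.sum(axis=0) - simplex[worst]) / (len(simplex) - 1)
    reflected = 2.0 * centroid - simplex[worst]
    if f(reflected) < vals[worst]:      # accept only if it improves
        simplex[worst] = reflected
    return simplex

# Example: minimize x^2 + y^2 starting from a simplex whose worst vertex
# is (2, 2); reflection through the centroid (0.5, 0.5) moves it to (-1, -1).
sq = lambda v: float(v[0] ** 2 + v[1] ** 2)
new_simplex = reflect_worst([[2.0, 2.0], [1.0, 0.0], [0.0, 1.0]], sq)
```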