# A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation

```bibtex
@inproceedings{Powell1994ADS,
  title={A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation},
  author={M. J. D. Powell},
  year={1994}
}
```
An iterative algorithm is proposed for nonlinearly constrained optimization calculations when there are no derivatives. Each iteration forms linear approximations to the objective and constraint functions by interpolation at the vertices of a simplex, and a trust region bound restricts each change to the variables. Thus a new vector of variables is calculated, which may replace one of the current vertices, either to improve the shape of the simplex or because it is the best vector that has been…
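The algorithm described in this abstract is Powell's COBYLA (Constrained Optimization BY Linear Approximations), which is available through SciPy. A minimal sketch of calling it on a toy constrained problem (the objective, constraint, and starting point here are illustrative, not from the paper):

```python
from scipy.optimize import minimize

# Minimize f(x, y) = (x - 1)^2 + (y - 2)^2 subject to x + y <= 2,
# using SciPy's COBYLA implementation of Powell's method.
objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2

# COBYLA expects inequality constraints expressed as g(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda v: 2.0 - v[0] - v[1]}]

result = minimize(objective, x0=[0.0, 0.0], method="COBYLA",
                  constraints=constraints,
                  options={"rhobeg": 0.5})  # rhobeg: initial trust region radius
print(result.x)  # optimum near (0.5, 1.5), where the constraint is active
```

The `rhobeg` option corresponds to the initial trust region bound mentioned in the abstract; the solver shrinks it as the linear interpolation models become trustworthy only over smaller neighbourhoods.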
## 1,015 Citations
• *Computational and Applied Mathematics*, 2022 (Computer Science): A trust-region method for constrained optimization problems where the gradients of the objective and constraints are not available, using an exact penalty; once a sufficiently high penalty parameter is selected, the method converges directly to a constrained optimum.
• Line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing are addressed.
• Quadratic models are of fundamental importance to the efficiency of many optimization algorithms when second derivatives of the objective function influence the required values of the variables…
• Algorithms for unconstrained minimization without derivatives that form linear or quadratic models by interpolation to values of the objective function are considered, because numerical experiments show that they are often more efficient than full quadratic models for general objective functions.
• 2017 (Computer Science): This work presents an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives, and proves that the full steps are efficient in the sense that, near a feasible nonstationary point, the decrease in the objective function is relatively large, which ensures the global convergence of the algorithm.
• It is proved that, if $F$ is bounded below, if $\nabla^2 F$ is also bounded, and if the number of iterations is infinite, then the sequence of gradients $\nabla F(\underline{x}_k)$ converges to zero, where $\underline{x}_k$ is the centre of the trust region of the $k$-th iteration.
• *Optim. Methods Softw.*, 2016 (Computer Science): This work exploits techniques developed for derivative-free optimization (DFO) to obtain a method that can also be used to solve problems where the derivatives are unavailable or are available only at a prohibitive cost, and compares favourably to other well-known model-based algorithms for DFO.
• A new model-based trust-region derivative-free optimization algorithm which can handle nonlinear equality constraints by applying a sequential quadratic programming (SQP) approach is presented; its implementation can be enhanced to outperform well-known DFO packages on smooth equality-constrained optimization problems.
• *Math. Program.*, 2002 (Mathematics): A new method for derivative-free optimization is presented. It is designed for solving problems in which the objective function is smooth and the number of variables is moderate, but the…