A Simplex Method for Function Minimization

@article{Nelder1965ASM,
  title={A Simplex Method for Function Minimization},
  author={John A. Nelder and Roger Mead},
  journal={Comput. J.},
  year={1965},
  volume={7},
  pages={308-313}
}
A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.
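
For concreteness, the iteration described in the abstract can be sketched in Python. This is a minimal reconstruction, not the authors' original program: the coefficient names and defaults (alpha, gamma, rho, sigma), the initial-simplex rule, and the stopping test are common modern conventions rather than details taken from the paper, and a single inside contraction stands in for the paper's fuller contraction logic.

```python
import numpy as np

def nelder_mead(f, x0, max_iter=500, tol=1e-8,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f: R^n -> R from x0 by the simplex method (sketch)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    # Initial simplex: x0 plus n vertices perturbed along each axis.
    simplex = [x0]
    for i in range(n):
        v = x0.copy()
        v[i] += 0.05 * v[i] if v[i] != 0 else 0.05
        simplex.append(v)
    fvals = [f(v) for v in simplex]

    for _ in range(max_iter):
        # Order vertices from best (lowest f) to worst (highest f).
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:             # simplex has collapsed
            break
        centroid = np.mean(simplex[:-1], axis=0)   # excludes worst vertex
        # Reflect the worst vertex through the centroid.
        xr = centroid + alpha * (centroid - simplex[-1])
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[0]:
            # Expansion: the reflected point is the new best; go further.
            xe = centroid + gamma * (xr - centroid)
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        else:
            # Contraction toward the centroid (inside contraction only).
            xc = centroid + rho * (simplex[-1] - centroid)
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:
                # Shrink every vertex toward the current best.
                for i in range(1, n + 1):
                    simplex[i] = simplex[0] + sigma * (simplex[i] - simplex[0])
                    fvals[i] = f(simplex[i])
    return simplex[0], fvals[0]
```

On the two-variable Rosenbrock function, `nelder_mead(lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2, [-1.2, 1.0])` typically lands near the minimum at (1, 1) using only function values, which is the paper's central point.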

Citations

An improved simplex method for function minimization

  • Yuguang Huang, W. McColl
  • Computer Science
    1996 IEEE International Conference on Systems, Man and Cybernetics. Information Intelligence and Systems (Cat. No.96CH35929)
  • 1996
An improved method based on Nelder and Mead's simplex method (1965) is described for unconstrained function minimization, which better reflects the descent-search nature of the simplex method.

A New Method of Constrained Optimization and a Comparison With Other Methods

A new method for finding the maximum of a general non-linear function of several variables within a constrained region is described, and shown to be efficient compared with existing methods when the …

The Nelder-Mead Simplex Procedure for Function Minimization

The Nelder-Mead simplex method for function minimization is a “direct” method requiring no derivatives. The objective function is evaluated at the vertices of a simplex, and movement is away from the …
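
SciPy exposes this procedure as `method='Nelder-Mead'` in `scipy.optimize.minimize`, which is the easiest way to experiment with it today; the starting point and tolerances below are arbitrary choices for illustration:

```python
from scipy.optimize import minimize, rosen

# rosen is SciPy's built-in n-dimensional Rosenbrock test function.
result = minimize(rosen, x0=[-1.2, 1.0, 0.8], method='Nelder-Mead',
                  options={'xatol': 1e-8, 'fatol': 1e-8})
print(result.x)     # approximate minimizer, near [1, 1, 1]
print(result.nfev)  # function evaluations used; no gradients required
```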

Global Optimization for Imprecise Problems

A new method for the computation of the global minimum of a continuously differentiable real-valued function f of n variables is presented, and can be successfully applied to problems with imprecise function and gradient values.

Location of saddle points and minimum energy paths by a constrained simplex optimization procedure

Two methods are proposed, one for the location of saddle points and one for the calculation of steepest-descent paths on multidimensional surfaces. Both methods are based on a constrained simplex optimization procedure …

Refined simplex fitting method

The simplex fitting method makes use of a geometrical figure that finds the minimum variance value in successive steps. It was developed from the idea of finding the minimum value of a function. …

Numerical optimization and surface estimation with imprecise function evaluations

The present work attempts to classify both problems and algorithmic tools, in an effort to prescribe suitable techniques for the variety of situations in which functions of several parameters must be minimized when the function cannot be computed precisely.

Simplex Optimization and Its Applicability for Solving Analytical Problems

Formulation of the simplex matrix referred to n-D space is presented in terms of the scalar product of vectors, known from elementary algebra. The principles of a simplex optimization procedure …

A Derivative Free Minimization Method For Noisy Functions

An unconstrained minimization method based on Powell's derivative-free method is presented; it can be successfully applied to problems with imprecise function values.
...

References

A Rapidly Convergent Descent Method for Minimization

A number of theorems are proved to show that the method always converges, and converges rapidly; it has been used to solve a system of one hundred non-linear simultaneous equations.

An Iterative Method for Finding Stationary Values of a Function of Several Variables

An iterative method is presented which is not unlike the conjugate gradient method of Hestenes and Stiefel (1952); it finds stationary values of a general function and has second-order convergence.

Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation

A technique for empirical optimisation is presented in which a sequence of experimental designs, each in the form of a regular or irregular simplex, is used, each simplex having all vertices but one in common with the preceding simplex …
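
The regular starting simplex used in such sequential designs can be constructed explicitly. A sketch of the standard construction, commonly attributed to Spendley, Hext, and Himsworth; the function name and `edge` parameter here are mine:

```python
import numpy as np

def regular_simplex(x0, edge=1.0):
    """Vertices of a regular simplex with one vertex at x0 and the
    given edge length (illustrative helper, not from the paper)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    s = np.sqrt(n + 1.0)
    # Offsets chosen so every pair of vertices is exactly `edge` apart.
    p = edge / (n * np.sqrt(2.0)) * (s + n - 1.0)
    q = edge / (n * np.sqrt(2.0)) * (s - 1.0)
    vertices = [x0]
    for i in range(n):
        v = x0 + q            # shift every coordinate by q ...
        v[i] = x0[i] + p      # ... except coordinate i, shifted by p
        vertices.append(v)
    return np.array(vertices)
```

Every vertex is equidistant from every other, so the initial figure favours no coordinate direction; the sequential procedure then replaces one vertex at a time, the mechanism that Nelder and Mead generalized by allowing the simplex to deform.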