# On search directions for minimization algorithms

@article{Powell1973OnSD, title={On search directions for minimization algorithms}, author={M. J. D. Powell}, journal={Mathematical Programming}, year={1973}, volume={4}, pages={193-201} }

Some examples are given of differentiable functions of three variables, having the property that if they are treated by the minimization algorithm that searches along the coordinate directions in sequence, then the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
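The method these counterexamples target — searching along each coordinate direction in sequence, with an exact line minimization on each — can be sketched as follows. This is a minimal illustration, not Powell's construction: the golden-section line search, the quadratic test function, and the sweep count are assumptions for demonstration. On a strictly convex quadratic like the one below the cyclic method converges; Powell's point is that on his three-variable examples the same loop instead tends to a closed path on which the gradient stays bounded away from zero.

```python
def line_min(phi, a=-10.0, b=10.0, tol=1e-8):
    """Golden-section search for a minimizer of phi on [a, b]."""
    gr = (5 ** 0.5 - 1) / 2
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

def cyclic_coordinate_descent(f, x0, sweeps=50):
    """Minimize f by exact searches along the coordinate directions in turn."""
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            xi = x[:]  # freeze the current point for this 1-D slice
            def phi(t, i=i, xi=xi):
                y = xi[:]
                y[i] = t
                return f(y)
            x[i] = line_min(phi)  # replace coordinate i by its 1-D minimizer
    return x

# Illustrative convex quadratic (positive-definite Hessian), so the
# cyclic method converges here, unlike on Powell's examples.
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 3) ** 2 + 0.5 * x[0] * x[1]
xmin = cyclic_coordinate_descent(f, [0.0, 0.0])
```

At the computed point the partial derivatives are essentially zero, which is exactly the property Powell's examples deny: there the iterates loop while the gradient remains bounded away from zero.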

## 294 Citations

On the convergence of sequential minimization algorithms

- Mathematics
- 1973

This note discusses the conditions for convergence of algorithms for finding the minimum of a function of several variables which are based on solving a sequence of one-variable minimization…

On the convergence of a class of algorithms using linearly independent search directions

- Mathematics, Computer Science
- Math. Program.
- 1980

A more general class of algorithms for unconstrained minimization is considered, and their convergence under the assumption that the objective function has a unique minimum along any line is established.

A Variation on a Random Coordinate Minimization Method for Constrained Polynomial Optimization

- Computer Science, Mathematics
- IEEE Control Systems Letters
- 2018

The proposed algorithm is a variation on random coordinate descent in which transverse steps are sometimes taken; it appears promising for tackling nonlinear control problems where standard sum-of-squares methods may fail due to problem size.

The 2-Coordinate Descent Method for Solving Double-Sided Simplex Constrained Minimization Problems

- Mathematics, Computer Science
- J. Optim. Theory Appl.
- 2014

This paper considers the problem of minimizing a continuously differentiable function with a Lipschitz continuous gradient subject to a single linear equality constraint and additional bound…

On the Convergence of a New Conjugate Gradient Algorithm

- Mathematics
- 1978

This paper studies the convergence of a conjugate gradient algorithm proposed in a recent paper by Shanno. It is shown that under loose step length criteria similar to but slightly different from…

Riemannian Optimization Algorithms and Their Applications to Numerical Linear Algebra

- Mathematics
- 2013

Optimization is the minimization or maximization of a given real-valued function, with or without constraints on its independent variables. For optimization problems with continuous…

Coordinate descent algorithms

- Computer Science, Mathematics
- Math. Program.
- 2015

A problem structure that arises frequently in machine learning applications is identified, and it is shown that efficient implementations of accelerated coordinate descent algorithms are possible for problems of this type.

On the convergence of the coordinate descent method for convex differentiable minimization

- Mathematics
- 1992

The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method.…

Direct search algorithms for optimization calculations

- Computer Science
- 1998

Line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing are addressed.

Block Coordinate Descent for smooth nonconvex constrained minimization

- Mathematics
- 2021

At each iteration of a Block Coordinate Descent method one minimizes an approximation of the objective function with respect to a generally small set of variables subject to constraints in which…
