On search directions for minimization algorithms

  • M. J. D. Powell
  • Published 1 December 1973
  • Mathematics, Computer Science
  • Mathematical Programming
Some examples are given of differentiable functions of three variables, having the property that if they are treated by the minimization algorithm that searches along the coordinate directions in sequence, then the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions. 
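The algorithm Powell's examples target is cyclic coordinate search: exact one-variable minimization along each coordinate direction in turn. A minimal sketch of that method (using a simple golden-section line search on a bounded interval, not Powell's exact counterexample function — the function, interval radius, and tolerances below are illustrative assumptions) is:

```python
def golden_section(f, a, b, tol=1e-8):
    """Minimize a one-variable function f on [a, b] by golden-section search."""
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while d - c > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def cyclic_coordinate_descent(f, x, n_cycles=100, radius=10.0):
    """Search along the coordinate directions in sequence, exactly
    minimizing f along each line before moving to the next."""
    x = list(x)
    for _ in range(n_cycles):
        for i in range(len(x)):
            x[i] = golden_section(
                lambda t: f(*(x[:i] + [t] + x[i + 1:])),
                x[i] - radius, x[i] + radius)
    return x

# On a smooth function with well-behaved level sets the iterates converge,
# e.g. a separable quadratic with minimizer (1, -3):
xmin = cyclic_coordinate_descent(
    lambda u, v: (u - 1) ** 2 + 2 * (v + 3) ** 2, [0.0, 0.0])
```

Powell's three-variable examples show that this scheme, despite converging on such easy cases, can cycle around a closed loop on which the gradient never vanishes.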
On the convergence of sequential minimization algorithms
This note discusses the conditions for convergence of algorithms for finding the minimum of a function of several variables which are based on solving a sequence of one-variable minimization problems.
On the convergence of a class of algorithms using linearly independent search directions
A more general class of algorithms for unconstrained minimization is considered, and their convergence is established under the assumption that the objective function has a unique minimum along any line.
A Variation on a Random Coordinate Minimization Method for Constrained Polynomial Optimization
The proposed algorithm is a variation on random coordinate descent in which transverse steps are sometimes taken; it appears promising for tackling nonlinear control problems in which the standard sum-of-squares methods may fail due to problem size.
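The entry above builds on random coordinate descent. A minimal sketch of the plain randomized method (a gradient step along one uniformly chosen coordinate per iteration, with coordinate-wise step sizes — not the transverse-step variant; the test function and its coordinate Lipschitz constants are illustrative assumptions) is:

```python
import random

def random_coordinate_descent(grad, x, lipschitz, n_iters=2000, seed=0):
    """Each iteration picks one coordinate i uniformly at random and takes a
    gradient step along it with step size 1 / lipschitz[i], where lipschitz[i]
    is the Lipschitz constant of the i-th partial derivative."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(n_iters):
        i = rng.randrange(len(x))
        x[i] -= grad(x)[i] / lipschitz[i]
    return x

# Example: f(x) = x0^2 + 4*x1^2, with gradient (2*x0, 8*x1) and
# coordinate Lipschitz constants (2, 8); the minimizer is the origin.
x = random_coordinate_descent(lambda x: (2 * x[0], 8 * x[1]),
                              [3.0, -2.0], [2.0, 8.0])
```

For this separable quadratic the 1/L step exactly minimizes along the chosen coordinate, so each coordinate is solved the first time it is drawn.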
The 2-Coordinate Descent Method for Solving Double-Sided Simplex Constrained Minimization Problems
  • A. Beck
  • Mathematics, Computer Science
  • J. Optim. Theory Appl.
  • 2014
This paper considers the problem of minimizing a continuously differentiable function with a Lipschitz continuous gradient subject to a single linear equality constraint and additional bound constraints.
On the Convergence of a New Conjugate Gradient Algorithm
This paper studies the convergence of a conjugate gradient algorithm proposed in a recent paper by Shanno. It is shown that under loose step length criteria similar to but slightly different from
Riemannian Optimization Algorithms and Their Applications to Numerical Linear Algebra
Preface: Optimization is the minimization or maximization of a given real-valued function, with or without some constraints on its independent variables. For optimization problems with continuous
Coordinate descent algorithms
A problem structure that arises frequently in machine learning applications is described, and it is shown that efficient implementations of accelerated coordinate descent algorithms are possible for problems of this type.
On the convergence of the coordinate descent method for convex differentiable minimization
The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method.
Direct search algorithms for optimization calculations
Line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing are addressed.
Block Coordinate Descent for smooth nonconvex constrained minimization
At each iteration of a Block Coordinate Descent method one minimizes an approximation of the objective function with respect to a generally small set of variables subject to constraints in which