Multiplier and gradient methods

@article{Hestenes1969MultiplierAG,
  title={Multiplier and gradient methods},
  author={Magnus R. Hestenes},
  journal={Journal of Optimization Theory and Applications},
  year={1969},
  volume={4},
  pages={303-320}
}
  • M. Hestenes
  • Published 1 November 1969
  • Computer Science
  • Journal of Optimization Theory and Applications
The main purpose of this paper is to suggest a method for finding the minimum of a function f(x) subject to the constraint g(x)=0. The method consists of replacing f by F = f + λg + (1/2)cg², where c is a suitably large constant, and computing the appropriate value of the Lagrange multiplier. Only the simplest algorithm is presented. The remaining part of the paper is devoted to a survey of known methods for finding unconstrained minima, with special emphasis on the various gradient techniques that are…
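For concreteness, here is a minimal sketch of the scheme the abstract describes: minimize the augmented function F for a fixed multiplier, then update the multiplier from the constraint residual. The quadratic test problem, the penalty constant c, and the use of scipy.optimize.minimize for the inner unconstrained step are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the method of multipliers described in the abstract:
# replace f by F = f + lam*g + 0.5*c*g**2, minimize F in x, then update
# the multiplier with lam <- lam + c*g(x).
# The test functions f, g and all parameter values below are illustrative
# choices, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def f(x):                       # objective to minimize
    return x[0]**2 + 2.0 * x[1]**2

def g(x):                       # equality constraint g(x) = 0
    return x[0] + x[1] - 1.0

def method_of_multipliers(x0, c=10.0, lam=0.0, iters=20, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Augmented objective for the current multiplier estimate.
        F = lambda x: f(x) + lam * g(x) + 0.5 * c * g(x)**2
        x = minimize(F, x, method="BFGS").x   # unconstrained inner step
        lam += c * g(x)                       # multiplier update
        if abs(g(x)) < tol:
            break
    return x, lam

x_opt, lam_opt = method_of_multipliers([0.0, 0.0])
print(x_opt, lam_opt)   # expected roughly x = (2/3, 1/3), lam = -4/3
```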
On the method of multipliers for convex programming
TLDR
It is shown that, for convex programming problems, the method of multipliers for constrained minimization converges globally for a wide range of possible stepsizes; this is proved both when the unconstrained minimization is exact and when it is approximate.
A gradient projection-multiplier method for nonlinear programming
This paper describes a gradient projection-multiplier method for solving the general nonlinear programming problem. The algorithm poses a sequence of unconstrained optimization problems which are…
A geometric method in nonlinear programming
TLDR
It is shown that the conjugate-gradient algorithm can take advantage of the sparse structure of the problem in the computation of a vector field, which constitutes the main computational task in the methods.
On the method of multipliers for mathematical programming problems
In this paper, the numerical solution of the basic problem of mathematical programming is considered. This is the problem of minimizing a function f(x) subject to a constraint ϕ(x)=0. Here, f is a…
Algorithms for nonlinear constraints that use lagrangian functions
TLDR
A view of the progress and understanding that has been achieved during the last eight years about Lagrangian functions and its relevance to practical algorithms is given.
Implementing proximal point methods for linear programming
We describe the application of proximal point methods to the linear programming problem. Two basic methods are discussed. The first, which has been investigated by Mangasarian and others, is
Approximation procedures based on the method of multipliers
TLDR
A method is given for solving certain optimization problems with constraints, nondifferentiabilities, and other ill-conditioning terms in the cost functional by approximating them with well-behaved optimization problems, based on methods of multipliers.
Local convergence of the diagonalized method of multipliers
In this study, we consider a modification of the method of multipliers of Hestenes and Powell in which the iteration is diagonalized, that is, only a fixed finite number of iterations of Newton's method…
Method of dual matrices for function minimization
In this paper, the method of dual matrices for the minimization of functions is introduced. The method, which is developed on the model of a quadratic function, is characterized by two matrices at…

References

Properties of the conjugate-gradient and Davidon methods
Two quadratically convergent gradient methods for minimizing an unconstrained function of several variables are examined. The heart of the Fletcher and Powell reformulation of Davidon's method is a…
Methods of conjugate gradients for solving linear systems
TLDR
An iterative algorithm is given for solving a system Ax = k of n linear equations in n unknowns, and it is shown that this method is a special case of a very general method that also includes Gaussian elimination.
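For reference, a short sketch of the conjugate-gradient iteration for a symmetric positive-definite system Ax = k is given below; the 3×3 test system and the NumPy implementation details are illustrative assumptions, not drawn from the paper.

```python
# Sketch of the conjugate-gradient iteration for Ax = k with A symmetric
# positive definite. The 3x3 test system is an illustrative assumption.
import numpy as np

def conjugate_gradient(A, k, x0=None, tol=1e-10, max_iter=None):
    n = len(k)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = k - A @ x          # residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p    # conjugate direction update
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
k = np.array([1.0, 2.0, 3.0])
print(conjugate_gradient(A, k))   # agrees with np.linalg.solve(A, k)
```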
A Rapidly Convergent Descent Method for Minimization
TLDR
A number of theorems are proved to show that the method always converges, and converges rapidly; it has been used to solve a system of one hundred non-linear simultaneous equations.
A GENERAL PROBLEM IN THE CALCULUS OF VARIATIONS WITH APPLICATIONS TO PATHS OF LEAST TIME
A method based on the optimality criterion approach is presented to design a minimum weight structure with constraints on system stability. The stability constraints are stated with the…
On a new computing technique in optimal control and its application to minimal-time flight profile optimization
A new constructive approach to optimization problems for dynamic systems that avoids having to solve dynamic equations and has both computational and theoretical advantages is presented. On the…
Iterative computational methods