The convergence of variable metric matrices in unconstrained optimization
  • Renpu Ge, M. J. D. Powell
  • Published 1 October 1983
  • Mathematics, Computer Science
  • Mathematical Programming
It is proved that, if the DFP or BFGS algorithm with step-lengths of one is applied to a function F(x) that has a Lipschitz continuous second derivative, and if the calculated vectors of variables converge to a point at which ∇F is zero and ∇²F is positive definite, then the sequence of variable metric matrices also converges. The limit of this sequence is identified in the case when F(x) is a strictly convex quadratic function.
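A minimal numerical sketch (not from the paper) of the situation the theorem describes: BFGS with unit step lengths applied to a small strictly convex quadratic, with the Hessian scaled so that the unit-step iteration contracts. Both the iterates and the variable metric matrices are then observed to settle down.

```python
import numpy as np

# Sketch (illustrative only): BFGS with step length one on the strictly
# convex quadratic f(x) = 0.5 x^T A x.  A is chosen so the unit-step
# iteration contracts; the theorem then predicts that the matrices B_k
# converge along with the iterates x_k.
A = np.array([[1.0, 0.3],
              [0.3, 0.7]])           # symmetric positive definite Hessian
x = np.array([1.0, 1.0])
B = np.eye(2)                        # initial variable metric matrix

matrix_steps = []                    # ||B_{k+1} - B_k|| per iteration
for _ in range(60):
    g = A @ x                        # gradient of the quadratic
    if np.linalg.norm(g) < 1e-12:
        break
    s = -np.linalg.solve(B, g)       # step length one: x_{k+1} = x_k + s
    y = A @ s                        # gradient difference (exact for a quadratic)
    x = x + s
    Bs = B @ s
    B_next = (B - np.outer(Bs, Bs) / (s @ Bs)
                + np.outer(y, y) / (y @ s))    # BFGS update
    matrix_steps.append(np.linalg.norm(B_next - B))
    B = B_next

print(np.linalg.norm(x), matrix_steps[-1])
```

Since y = A s exactly on a quadratic, yᵀs > 0 always holds here and the updates keep B positive definite; the shrinking differences between successive matrices illustrate the convergence of the matrix sequence.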
Convergence of quasi-Newton matrices generated by the symmetric rank one update
Conditions under which these approximations can be proved to converge globally to the true Hessian matrix are given, in the case where the Symmetric Rank One update formula is used.
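A small sketch (illustrative, not the paper's proof) of the finite-termination property that underlies results of this kind: on a quadratic, where the gradient difference satisfies y = A s exactly, the Symmetric Rank One update preserves every previous secant equation, so n linearly independent steps reproduce the true Hessian.

```python
import numpy as np

# Sketch: SR1 updates on a quadratic with Hessian A recover A exactly
# after n linearly independent steps, because y = A s holds exactly and
# the update keeps all earlier secant equations satisfied.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite Hessian
B = np.eye(n)                    # initial Hessian approximation

for k in range(n):
    s = np.eye(n)[:, k]          # n linearly independent steps
    y = A @ s                    # exact gradient difference
    r = y - B @ s
    if abs(r @ s) > 1e-12:       # SR1 update is skipped near breakdown
        B = B + np.outer(r, r) / (r @ s)

print(np.linalg.norm(B - A))     # ~ 0: true Hessian recovered
```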
The global convergence of partitioned BFGS on problems with convex decompositions and Lipschitzian gradients
  • A. Griewank
  • Mathematics, Computer Science
    Math. Program.
  • 1991
The main purpose of this paper is to extend Powell's (1976) global convergence result to the partitioned BFGS method introduced by Griewank and Toint (1982), using a damping of the BFGS update that becomes inactive if the problem turns out to be regular near x*.
The convergence of matrices generated by rank-2 methods from the restricted β-class of Broyden
It is shown that the matrices B_k generated by any method from the restricted β-class of Broyden converge, if the method is applied to the unconstrained minimization of a function f ∈ C²(ℝⁿ) with …
Rates of convergence for secant methods on nonlinear problems in Hilbert space
The numerical performance of iterative methods applied to discretized operator equations may depend strongly on their theoretical rate of convergence on the underlying problem g(x) = 0 in Hilbert space …
Solving reachability problems by a scalable constrained optimization method
This paper investigates the problem of finding an evolution of a dynamical system that originates and terminates in given sets of states, and presents a scalable approach for solving it.
A Theoretical and Experimental Study of the Symmetric Rank-One Update
A new analysis is presented that shows that the SR1 method with a line search is $(n+1)$-step q-superlinearly convergent without the assumption of linearly independent iterates.
Sequential quadratic programming with indefinite Hessian approximations for nonlinear optimum experimental design for parameter estimation in differential–algebraic equations
In this thesis we develop algorithms for the numerical solution of problems from nonlinear optimum experimental design (OED) for parameter estimation in differential–algebraic equations. These OED
Convergence properties of the Broyden-like method for mixed linear-nonlinear systems of equations
  • F. Mannel
  • Computer Science, Mathematics
    Numer. Algorithms
  • 2021
This is the first time that convergence of the Broyden-like matrices is proven for n > 1, albeit only for a special case, and the proof uses the fact that the iterates belong to an affine subspace.
Greedy and Random Broyden's Methods with Explicit Superlinear Convergence Rates in Nonlinear Equations
This work proposes the greedy and random Broyden’s method for solving nonlinear equations, and establishes explicit (local) superlinear convergence rates of both methods if the initial point and approximate Jacobian are close enough to a solution and corresponding Jacobian.
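For orientation, a sketch of the classical "good" Broyden rank-one update that these greedy and random variants build on (the variants choose update directions differently), started, as the local theory assumes, close to a root with the exact Jacobian as the initial approximation:

```python
import numpy as np

# Sketch: classical Broyden update for a small nonlinear system F(x) = 0.
# Started near a root with the exact Jacobian, the iteration converges
# locally, consistent with the kind of assumptions quoted above.
def F(x):
    return np.array([x[0] + x[1] - 3.0,
                     x[0]**2 + x[1]**2 - 9.0])

x = np.array([2.8, 0.3])                       # near the root (3, 0)
B = np.array([[1.0, 1.0],
              [2 * x[0], 2 * x[1]]])           # exact Jacobian at x
for _ in range(25):
    Fx = F(x)
    if np.linalg.norm(Fx) < 1e-12:
        break
    s = -np.linalg.solve(B, Fx)                # quasi-Newton step
    x_new = x + s
    y = F(x_new) - Fx
    B = B + np.outer(y - B @ s, s) / (s @ s)   # Broyden rank-one update
    x = x_new

print(x, np.linalg.norm(F(x)))                 # root near (3, 0)
```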
Extra-Updates Criterion for the Limited Memory BFGS Algorithm for Large Scale Nonlinear Optimization
  • M. Al-Baali
  • Computer Science, Mathematics
    J. Complex.
  • 2002
The presented numerical results illustrate the usefulness of this criterion and show that extra updates improve the performance of the L-BFGS method substantially.


On the Convergence of the Variable Metric Algorithm
The variable metric algorithm is a frequently used method for calculating the least value of a function of several variables. However, it has been proved only that the method is successful if the …
The given theory helps to explain the excellent numerical results that are obtained by a recent algorithm (Powell, 1977) by regarding the positive definite matrix that is revised on each iteration as an approximation to the second derivative matrix of the Lagrangian function.
The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations
This paper presents a more detailed analysis than has previously appeared of a class of minimization algorithms that includes the DFP (Davidon-Fletcher-Powell) method as a special case, and investigates how the successive errors depend, again for quadratic functions, upon the initial choice of iteration matrix.
Quasi-Newton Methods, Motivation and Theory
This paper is an attempt to motivate and justify quasi-Newton methods as useful modifications of Newton's method for general and gradient nonlinear systems of equations. References are given to …
The algebraic eigenvalue problem
Theoretical background; Perturbation theory; Error analysis; Solution of linear algebraic equations; Hermitian matrices; Reduction of a general matrix to condensed form; Eigenvalues of matrices of …
A New Approach to Variable Metric Algorithms
On the Local and Superlinear Convergence of Quasi-Newton Methods