Corpus ID: 229924381

Explicit continuation methods and preconditioning techniques for unconstrained optimization problems

@article{Luo2020ExplicitCM,
  title={Explicit continuation methods and preconditioning techniques for unconstrained optimization problems},
  author={Xin-long Luo and Hang Xiao and Jia-hui Lv and Sen Zhang},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.14808}
}
This paper considers an explicit continuation method with a trust-region time-stepping scheme for the unconstrained optimization problem. To improve computational efficiency and robustness, the new method switches its preconditioner between the L-BFGS update and the inverse of the Hessian matrix. In the well-conditioned phase, the new method uses L-BFGS preconditioning in order to improve computational efficiency…
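Concretely, the iteration can be pictured as an explicit Euler step along the preconditioned gradient flow, x_{k+1} = x_k − Δt_k P_k ∇f(x_k), where the time step Δt_k grows or shrinks in a trust-region-like fashion and the preconditioner P_k switches between an L-BFGS inverse-Hessian approximation and the exact inverse Hessian. The Python sketch below is a minimal illustration of that idea, not the paper's exact algorithm: the condition-number threshold, the step-control constants, and all helper names are assumptions for illustration.

import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    # Standard L-BFGS two-loop recursion: approximates H^{-1} @ grad
    # from stored step pairs (s, y) without forming the Hessian.
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append((alpha, rho, s, y))
    if s_list:  # scale by gamma = s'y / y'y, the common initial guess
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for alpha, rho, s, y in reversed(alphas):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q

def continuation_minimize(f, grad, hess, x0, dt=1.0,
                          max_iter=200, tol=1e-8, memory=5):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    s_list, y_list = [], []
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Illustrative switching test (the threshold is an assumption):
        # use the exact inverse Hessian when conditioning is bad,
        # otherwise the cheaper L-BFGS preconditioner.
        H = hess(x)
        if np.linalg.cond(H) > 1e6:
            d = np.linalg.solve(H, g)                  # inverse-Hessian phase
        else:
            d = lbfgs_two_loop(g, s_list, y_list)      # L-BFGS phase
        x_new = x - dt * d                             # explicit Euler step
        # Trust-region-style time-step control (constants are assumptions).
        if f(x_new) >= f(x):
            dt *= 0.5                                  # reject, shrink step
            continue
        dt *= 2.0                                      # accept, grow step
        g_new = grad(x_new)
        s_list.append(x_new - x); y_list.append(g_new - g)
        if len(s_list) > memory:
            s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

# Usage on an ill-scaled quadratic with a known Hessian:
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
hess = lambda x: A
print(continuation_minimize(f, grad, hess, np.array([3.0, -4.0])))
# converges toward the minimizer at the origin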

References

Showing 1-10 of 46 references
Continuation methods with the trusty time-stepping scheme for linearly constrained optimization with noisy data
TLDR
A continuation method with the trusty time-stepping scheme is proposed for the linearly equality-constrained optimization problem at every sampling time; it saves much more computational time than the sequential quadratic programming (SQP) method.
Trust Region Algorithms and Timestep Selection
  • D. Higham
  • Mathematics, Computer Science
    SIAM J. Numer. Anal.
  • 1999
TLDR
This work contributes to the theory of gradient stability by presenting an algorithm that reproduces the correct global dynamics and gives very rapid local convergence to a stable steady state.
Combining Trust-Region Techniques and Rosenbrock Methods to Compute Stationary Points
Rosenbrock methods are popular for solving a stiff initial-value problem of ordinary differential equations. One advantage is that there is no need to solve a nonlinear equation at every iteration…
Construction of high order diagonally implicit multistage integration methods for ordinary differential equations
The identification of high order diagonally implicit multistage integration methods with appropriate stability properties requires the solution of high dimensional nonlinear equation…
A class of methods for unconstrained minimization based on stable numerical integration techniques
Recently a number of algorithms have been derived which minimize a nonlinear function by iteratively constructing a monotonically improving sequence of approximate minimizers along…
Recent advances in trust region algorithms
TLDR
Recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization and optimization without derivatives are reviewed.
Widely convergent method for finding multiple solutions of simultaneous nonlinear equations
A new method has been developed for solving a system of nonlinear equations g(x) = 0. This method is based on solving the related system of differential equations dg/dt ± g(x) = 0, wherein the sign is… (a minimal sketch of this ODE approach appears after the reference list).
A geometric method in nonlinear programming
A differential geometric approach to the constrained function maximization problem is presented. The continuous analogue of the Newton-Raphson method due to Branin for solving a system of nonlinear…
Global Convergence of a Class of Quasi-Newton Methods on Convex Problems
We study the global convergence properties of the restricted Broyden class of quasi-Newton methods, when applied to a convex objective function. We assume that the line search satisfies a standard…
Convergence properties of a class of minimization algorithms
Many iterative algorithms for minimizing a function F(x) = F(x1, x2, …, xn) require first derivatives of F(x) to be calculated, but they maintain an approximation to the second derivative matrix…
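As a side note on the ODE viewpoint shared by several of the references above, the "Widely convergent method…" entry solves g(x) = 0 by following dg/dt = −g(x), i.e. the continuous Newton (Branin) flow dx/dt = −J(x)⁻¹ g(x). The sketch below integrates this flow with explicit Euler; the step size, stopping rule, and example system are illustrative assumptions. Euler with Δt = 1 recovers Newton's method, and the reference's ± sign switching, not shown here, is what enables locating multiple roots.

import numpy as np

def continuous_newton(g, jac, x0, dt=0.2, sign=-1.0,
                      max_steps=500, tol=1e-10):
    # Integrate dx/dt = sign * J(x)^{-1} g(x) with explicit Euler;
    # sign = -1 follows dg/dt = -g, driving g(x) toward zero.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        gx = g(x)
        if np.linalg.norm(gx) < tol:
            break
        x = x + dt * sign * np.linalg.solve(jac(x), gx)
    return x

# Example: the two roots of g(x) = x^2 - 2, reached from
# different starting points (hypothetical test problem).
g = lambda x: np.array([x[0] ** 2 - 2.0])
jac = lambda x: np.array([[2.0 * x[0]]])
print(continuous_newton(g, jac, [1.0]))   # ≈ +sqrt(2)
print(continuous_newton(g, jac, [-1.0]))  # ≈ -sqrt(2)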