# A more direct and better variant of New Q-Newton's method Backtracking for m equations in m variables

```bibtex
@article{Truong2021AMD,
  title={A more direct and better variant of New Q-Newton's method Backtracking for m equations in m variables},
  author={Tuyen Trung Truong},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.07403}
}
```

In some recent (joint) papers, the authors developed a new family of modifications of Newton's method, into which Backtracking line search can be incorporated, for optimization. The new method, called New Q-Newton's method (and its Backtracking version), has good theoretical guarantees concerning convergence to critical points, avoidance of saddle points, and rate of convergence. This method can be used to solve a system of equations g1 = … = gN = 0 by applying it to the function f = g_1…
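The abstract's idea of turning a system g1 = … = gN = 0 into a minimization of a sum-of-squares function f can be sketched as follows. This is a hypothetical illustration, not the paper's exact algorithm: it uses a Gauss–Newton approximation of the Hessian regularized by a multiple of the gradient norm (the paper's perturbation rule is more refined), combined with Armijo backtracking line search, and all function names are invented for the example.

```python
import numpy as np

def solve_system(g, jac, x0, tol=1e-10, max_iter=100):
    """Solve g(x) = 0 by minimizing f(x) = ||g(x)||^2 (illustrative sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gx = g(x)
        J = jac(x)
        grad = 2.0 * J.T @ gx                      # gradient of f = ||g||^2
        if np.linalg.norm(grad) < tol:
            break
        # Gauss-Newton Hessian of f, regularized by ||grad|| * I so the
        # linear system stays solvable (an assumption made for this sketch).
        H = 2.0 * J.T @ J + np.linalg.norm(grad) * np.eye(x.size)
        d = np.linalg.solve(H, -grad)
        # Armijo backtracking: halve the step until sufficient decrease holds
        t, f0, slope = 1.0, gx @ gx, grad @ d
        while t > 1e-12:
            gt = g(x + t * d)
            if gt @ gt <= f0 + 1e-4 * t * slope:
                break
            t *= 0.5
        x = x + t * d
    return x

# Toy system: x^2 + y^2 = 1 and x = y, with a root near (0.7071, 0.7071)
g = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(solve_system(g, jac, np.array([1.0, 0.3])))
```

Because the regularization term vanishes as the gradient does, the step approaches a pure Gauss–Newton step near a solution, which is what makes fast local convergence plausible in this simplified setting.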

## References

Showing 1–10 of 23 references

Iterative methods for optimization

- Mathematics, Computer Science; Frontiers in Applied Mathematics
- 1999

Iterative Methods for Optimization does more than cover traditional gradient-based optimization: it is the first book to treat sampling methods, including the Hooke & Jeeves, implicit filtering, MDS, and Nelder & Mead schemes, in a unified way.

On the Quadratic Convergence of the Levenberg-Marquardt Method without Nonsingularity Assumption

- Mathematics, Computer Science; Computing
- 2004

If ||F(x)|| provides a local error bound for the system of nonlinear equations F(x) = 0, it is shown that the sequence {x_k} generated by the new method converges to a solution quadratically, which is stronger than dist(x_k, X*) → 0 given by Yamashita and Fukushima.
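The Levenberg–Marquardt iteration discussed in this and the following references can be sketched briefly. The update x_{k+1} = x_k − (JᵀJ + μ_k I)⁻¹ Jᵀ F(x_k) is the classical scheme; the parameter choice μ_k = ||F(x_k)||² below is the one used in the quadratic-convergence analyses of this line of work, though the exact rule in each cited paper may differ, and the example system is invented.

```python
import numpy as np

def levenberg_marquardt(F, jac, x0, tol=1e-12, max_iter=50):
    """Classical Levenberg-Marquardt iteration for F(x) = 0 (sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = jac(x)
        mu = Fx @ Fx                     # damping mu_k = ||F(x_k)||^2
        # Regularized normal equations: (J^T J + mu I) step = J^T F
        step = np.linalg.solve(J.T @ J + mu * np.eye(x.size), J.T @ Fx)
        x = x - step
    return x

# Toy system: x^2 - y = 0 and y - 1 = 0, with a root at (1, 1)
F = lambda v: np.array([v[0]**2 - v[1], v[1] - 1.0])
jac = lambda v: np.array([[2.0 * v[0], -1.0], [0.0, 1.0]])
print(levenberg_marquardt(F, jac, np.array([2.0, 2.0])))
```

Since μ_k shrinks with ||F(x_k)||, the damping disappears near a solution, which is the mechanism behind the quadratic rate under the local error bound condition.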

Local convergence of the Levenberg–Marquardt method under Hölder metric subregularity

- Mathematics, Computer Science; Adv. Comput. Math.
- 2019

An adaptive formula for the Levenberg–Marquardt parameter is proposed, and the local convergence of the method is analyzed under Hölder metric subregularity of the function defining the equation and Hölder continuity of its gradient mapping.

Modified inexact Levenberg–Marquardt methods for solving nonlinear least squares problems

- Computer Science, Mathematics; Comput. Optim. Appl.
- 2019

Preliminary numerical experiments show that the proposed modified inexact Levenberg–Marquardt method and its global version outperform the classical inexact LMM on nonlinear least squares problems (NLSP), especially in the underdetermined case.

On the Rate of Convergence of the Levenberg-Marquardt Method

- Mathematics
- 2001

We consider a rate of convergence of the Levenberg-Marquardt method (LMM) for solving a system of nonlinear equations F(x) = 0, where F is a mapping from R^n into R^m. It is well-known that LMM has a…

Communication: Newton homotopies for sampling stationary points of potential energy landscapes.

- Mathematics, Physics; The Journal of Chemical Physics
- 2014

This work proposes an efficient implementation of Newton homotopies, which can sample a large number of the stationary points of complicated many-body potentials, and demonstrates how the procedure works by applying it to the nearest-neighbor ϕ^4 model and atomic clusters.

On a problem posed by Steve Smale

- Mathematics
- 2009

The 17th of the problems proposed by Steve Smale for the 21st century asks for the existence of a deterministic algorithm computing an approximate solution of a system of n complex polynomials in n…

Every algebraic set in n-space is the intersection of n hypersurfaces

- Mathematics
- 1973


Smale’s 17th problem: Average polynomial time to compute affine and projective solutions

- Mathematics
- 2008

Smale’s 17th Problem asks: “Can a zero of n complex polynomial equations in n unknowns be found approximately, on the average, in polynomial time with a uniform algorithm?”. We give a positive answer…

Some convergent results for Backtracking Gradient Descent method on Banach spaces

- Physics, Computer Science; ArXiv
- 2020

The main result concerns the following condition: whenever $\{x_n\}$ weakly converges to $x$ and $\lim_{n\rightarrow\infty}||\nabla f(x_n)||=0$, then $\nabla f(x)=0$.
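The Backtracking Gradient Descent scheme analyzed in this reference can be illustrated in a finite-dimensional toy setting (the paper itself works on Banach spaces; this sketch, with invented function names and a quadratic test function, only shows the mechanics of Armijo's backtracking rule).

```python
import numpy as np

def backtracking_gd(f, grad, x0, delta=0.5, alpha=1e-4, t0=1.0,
                    tol=1e-8, max_iter=10000):
    """Gradient descent with Armijo backtracking line search (sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break                        # approximate critical point reached
        t = t0
        # Armijo condition: f(x - t g) <= f(x) - alpha * t * ||g||^2
        while f(x - t * g) > f(x) - alpha * t * (g @ g):
            t *= delta                   # shrink the step until decrease holds
        x = x - t * g
    return x

# Toy objective: f(x) = ||x||^2, whose only critical point is the origin
f = lambda v: v @ v
grad = lambda v: 2.0 * v
print(backtracking_gd(f, grad, np.array([3.0, -4.0])))
```

The condition quoted above is what guarantees, in the infinite-dimensional setting, that weak limit points of such a sequence are genuine critical points of f.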