Corpus ID: 244908775

Gradient Regularization of Newton Method with Bregman Distances

@inproceedings{Doikov2021GradientRO,
  title={Gradient Regularization of Newton Method with Bregman Distances},
  author={Nikita Doikov and Yurii Nesterov},
  year={2021}
}
In this paper, we propose a first second-order scheme based on arbitrary non-Euclidean norms, incorporated by Bregman distances. They are introduced directly in the Newton iterate with regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite optimization problem, we establish the global convergence rate of the order O(k^{-2}) both in terms of the functional residual and in the norm of subgradients. Our main…
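A minimal sketch of one such step in the Euclidean special case (assuming a twice-differentiable objective with gradient and Hessian oracles and a scaling constant `L_half`; the function names are hypothetical, and the paper's Bregman/non-Euclidean machinery is not reproduced here):

```python
import numpy as np

def grad_reg_newton_step(x, grad, hess, L_half=1.0):
    """One gradient-regularized Newton step (Euclidean sketch).

    The regularization parameter lam is proportional to the square root of
    the current gradient norm; with the Euclidean Bregman distance
    0.5*||.||^2 this amounts to adding lam * I to the Hessian.
    """
    g = grad(x)
    H = hess(x)
    lam = L_half * np.sqrt(np.linalg.norm(g))
    return x + np.linalg.solve(H + lam * np.eye(x.size), -g)

# Toy usage: f(x) = 0.5*||x||^2 + logsumexp(x)
def grad_f(x):
    p = np.exp(x - x.max()); p /= p.sum()
    return x + p

def hess_f(x):
    p = np.exp(x - x.max()); p /= p.sum()
    return np.eye(x.size) + np.diag(p) - np.outer(p, p)

x = np.ones(5)
for _ in range(20):
    x = grad_reg_newton_step(x, grad_f, hess_f)
```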
SC-Reg: Training Overparameterized Neural Networks under Self-Concordant Regularization
TLDR
The generalized Gauss-Newton with Self-Concordant Regularization (SCoRe-GGN) algorithm updates the network parameters each time it receives a new input batch and exploits the structure of the second-order information in the Hessian matrix, thereby reducing the training computational overhead.
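For orientation only, a generic damped generalized Gauss-Newton step for a squared loss is sketched below; this is not the SCoRe-GGN algorithm itself, and the `reg` term merely stands in for the curvature a self-concordant regularizer would contribute (all names hypothetical):

```python
import numpy as np

def ggn_step(theta, residual_fn, jacobian_fn, reg=1e-2):
    """One damped generalized Gauss-Newton step for loss 0.5*||r(theta)||^2.

    The GGN matrix J^T J reuses the structure of the second-order
    information without forming the full Hessian; `reg` is an assumed
    stand-in for a self-concordant regularization term.
    """
    r = residual_fn(theta)                  # residuals on the current batch
    J = jacobian_fn(theta)                  # Jacobian of residuals w.r.t. theta
    g = J.T @ r                             # gradient of the squared loss
    G = J.T @ J + reg * np.eye(theta.size)  # damped GGN matrix
    return theta - np.linalg.solve(G, g)

# Toy usage: one step on a linear least-squares "batch"
A, b = np.random.randn(8, 3), np.random.randn(8)
theta = ggn_step(np.zeros(3), lambda t: A @ t - b, lambda t: A)
```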

References

Contracting Proximal Methods for Smooth Convex Optimization
TLDR
This paper proposes new accelerated methods for smooth convex optimization, called Contracting Proximal Methods, and provides a global convergence analysis for a general scheme admitting inexactness in solving the auxiliary subproblem.
Cubic regularization of Newton method and its global performance
TLDR
This paper provides a theoretical analysis of the cubic regularization of the Newton method as applied to the unconstrained minimization problem and proves general local convergence results for this scheme (the regularized model is written out after the reference list).
Lectures on Convex Optimization
New second-order and tensor methods in Convex Optimization
  PhD thesis, Université catholique de Louvain, 2021
Regularized Newton Method with Global O(1/k^2) Convergence
TLDR
A Newton-type method that converges fast from any initialization and for arbitrary convex objectives with Lipschitz Hessians is presented, and it is proved that locally the method converges superlinearly when the objective is strongly convex.
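For the cubic regularization reference above, the model minimized at each step in the standard Nesterov–Polyak formulation is restated here for convenience (M denotes an upper bound on the Lipschitz constant of the Hessian; the notation is not taken verbatim from the reference):

```latex
T_M(x) \in \operatorname*{argmin}_{y} \Big\{ \langle \nabla f(x),\, y - x \rangle
  + \tfrac{1}{2} \langle \nabla^2 f(x)(y - x),\, y - x \rangle
  + \tfrac{M}{6} \, \| y - x \|^3 \Big\}
```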