Local convergence of tensor methods

@article{Doikov2019LocalCO,
  title={Local convergence of tensor methods},
  author={Nikita Doikov and Yurii Nesterov},
  journal={Mathematical Programming},
  year={2019},
  pages={1-22}
}

In this paper, we study local convergence of high-order Tensor Methods for solving convex optimization problems with composite objective. We justify local superlinear convergence under the assumption that the smooth component of the objective is uniformly convex and has a Lipschitz-continuous high-order derivative. Convergence is established both in function value and in the norm of the minimal subgradient. Global complexity bounds for the Composite Tensor Method in the convex and uniformly convex cases are also discussed.
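For orientation, here is a minimal sketch of the step taken by such Composite Tensor Methods (standard notation of high-order methods is assumed here: f is the smooth component, ψ the composite term, p the order of the method, and H a regularization parameter tied to the Lipschitz constant of the p-th derivative; the exact constants and choice of norm follow the paper):

\[
x_{k+1} \in \operatorname*{arg\,min}_{y} \Big\{ f(x_k) + \sum_{i=1}^{p} \tfrac{1}{i!}\, D^i f(x_k)[y - x_k]^i + \tfrac{H}{(p+1)!}\, \|y - x_k\|^{p+1} + \psi(y) \Big\}.
\]

Uniform convexity of degree q ≥ 2 with parameter σ_q > 0 is the lower bound

\[
f(y) \;\ge\; f(x) + \langle \nabla f(x), y - x \rangle + \tfrac{\sigma_q}{q}\, \|y - x\|^{q} \quad \text{for all } x, y,
\]

and it is this extra lower bound that drives the local superlinear rate.

Citations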
Contracting Proximal Methods for Smooth Convex Optimization
This paper proposes new accelerated methods for smooth convex optimization, called Contracting Proximal Methods, and provides a global convergence analysis for a general scheme that admits inexactness in solving the auxiliary subproblem.
Optimal Combination of Tensor Optimization Methods
A general framework is proposed that yields near-optimal oracle complexity for each function in the sum separately; in particular, the oracle for a function with a smaller Lipschitz constant is called fewer times.
Mathematical Optimization Theory and Operations Research: 19th International Conference, MOTOR 2020, Novosibirsk, Russia, July 6–10, 2020, Revised Selected Papers
In the Max k-Edge-Colored Clustering problem (abbreviated as MAX-k-EC) we are given an undirected graph and k colors. Each edge of the graph has a color and a nonnegative weight. The goal is to color…
Inexact Tensor Methods with Dynamic Accuracies
Two dynamic strategies are proposed for choosing the inner accuracy of inexact Tensor Methods for solving convex optimization problems with composite objective, and computational results for several methods and accuracy policies are presented on a variety of machine learning problems.
A Control-Theoretic Perspective on Optimal High-Order Optimization
A control-theoretic perspective on optimal tensor algorithms for minimizing a convex function in a finite-dimensional Euclidean space is provided.
Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods
For solving strongly convex optimization problems, we propose and study the global convergence of variants of the accelerated hybrid proximal extragradient (A-HPE) and large-step A-HPE algorithms of…
Convex optimization based on global lower second-order models
This paper proves a global rate of convergence $\mathcal{O}(1/k^2)$ in the functional residual, where $k$ is the iteration counter, for minimizing convex functions with Lipschitz-continuous Hessian, significantly improving the previously known bound $\mathcal{O}(1/k)$ for this type of algorithms.
General higher-order majorization-minimization algorithms for (non)convex optimization.
Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function, so that the objective function decreases along the iterations. Such a…
Near-Optimal Hyperfast Second-Order Method for Convex Optimization
In this paper, we present a new Hyperfast Second-Order Method with convergence rate $O(N^{-5})$ up to a logarithmic factor for convex functions with Lipschitz-continuous third derivative. This method…

References

Showing 1–10 of 30 references
Cubic regularization of Newton method and its global performance
This paper provides a theoretical analysis of a cubic regularization of Newton's method applied to the unconstrained minimization problem and proves general local convergence results for this scheme.
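As a reference point, the cubic regularization step analyzed there can be sketched in its standard form (with M an upper estimate of the Lipschitz constant of the Hessian; notation assumed):

\[
x_{k+1} \in \operatorname*{arg\,min}_{y} \Big\{ \langle \nabla f(x_k), y - x_k \rangle + \tfrac12 \langle \nabla^2 f(x_k)(y - x_k), y - x_k \rangle + \tfrac{M}{6}\, \|y - x_k\|^{3} \Big\}.
\]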
Minimizing Uniformly Convex Functions by Cubic Regularization of Newton Method
The intuitively plausible result is justified that the global iteration complexity of Newton's method is always better than that of the gradient method on the class of strongly convex functions with uniformly bounded second derivative.
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
This work shows that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates.
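A hedged sketch of the inexact iteration in question, for minimizing g + h with g smooth and h admitting a proximal operator (here α is a step size, e_k the error in the gradient, and ε_k the accuracy to which the proximal subproblem is solved; the admissible decrease rates for these errors are the ones quantified in the paper):

\[
x_{k+1} \approx_{\varepsilon_k} \operatorname{prox}_{\alpha h}\!\big(x_k - \alpha\,(\nabla g(x_k) + e_k)\big),
\qquad
\operatorname{prox}_{\alpha h}(z) = \operatorname*{arg\,min}_{x} \Big\{ \tfrac{1}{2\alpha}\,\|x - z\|^{2} + h(x) \Big\}.
\]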
A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums
A new incremental method with a superlinear convergence rate, the Newton-type incremental method (NIM), is proposed; the idea is to introduce a model of the objective with the same sum-of-functions structure and to update a single component of the model per iteration.
Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
This paper studies regularized second-order methods for unconstrained minimization of a twice-differentiable (convex or nonconvex) objective function and introduces two new line-search acceptance criteria, which can be seen as generalizations of the Armijo condition.
Implementable tensor methods in unconstrained convex optimization
  • Y. Nesterov
  • Math. Program., 2021
New tensor methods for unconstrained convex optimization are developed, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial; an efficient technique for solving this auxiliary problem, based on the recently developed relative smoothness condition, is also proposed.
Accelerating the cubic regularization of Newton’s method on convex problems
  • Y. Nesterov
  • Math. Program., 2008
An accelerated version of the cubic regularization of Newton's method that converges for the same problem class with rate $\mathcal{O}(1/k^{3})$, keeping the complexity of each iteration unchanged; it is argued that, for second-order schemes, the class of non-degenerate problems differs from the standard class.
Proximal Newton-Type Methods for Minimizing Composite Functions
Newton-type methods for minimizing smooth functions are generalized to handle a sum of two convex functions: a smooth function and a nonsmooth function with a simple proximal mapping, which yields new convergence results for some of these methods.
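A minimal sketch of such a proximal Newton-type step for minimizing g + h, with g smooth and h nonsmooth but prox-friendly (H_k stands for the Hessian of g at x_k, or an approximation of it; notation assumed):

\[
x_{k+1} \in \operatorname*{arg\,min}_{y} \Big\{ \langle \nabla g(x_k), y - x_k \rangle + \tfrac12 \langle H_k (y - x_k), y - x_k \rangle + h(y) \Big\},
\]

i.e., a proximal mapping of h scaled by H_k.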
A Unified Framework for Some Inexact Proximal Point Algorithms
We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the…
Inexact and accelerated proximal point algorithms
It is shown that the $1/k^2$ convergence rate of the exact accelerated algorithm can be recovered by constraining the errors to be of a certain type, and the strategy proposed in [14] for generating estimate sequences in the sense of Nesterov is generalized.