Corpus ID: 211252381

Inexact Tensor Methods with Dynamic Accuracies

@inproceedings{Doikov2020InexactTM,
  title={Inexact Tensor Methods with Dynamic Accuracies},
  author={Nikita Doikov and Yurii Nesterov},
  booktitle={ICML},
  year={2020}
}
In this paper, we study inexact high-order Tensor Methods for solving convex optimization problems with composite objective. At every step of such methods, we use an approximate solution of the auxiliary problem, defined by a bound on the residual in function value. We propose two dynamic strategies for choosing the inner accuracy: the first one decreases as $1/k^{p + 1}$, where $p \geq 1$ is the order of the method and $k$ is the iteration counter, and the second approach uses for the…
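The first strategy is easy to illustrate. Below is a minimal sketch (not the authors' code): `subproblem_solver` is a hypothetical oracle returning a point whose residual in the auxiliary model's value is at most the given tolerance, and `c` is an assumed scaling constant for the schedule $\delta_k = c/k^{p+1}$.

```python
import numpy as np

def inexact_tensor_method(subproblem_solver, x0, p=3, c=1.0, n_iters=50):
    """Schematic loop of an inexact p-th order tensor method with the first
    dynamic strategy: the auxiliary problem at step k is solved up to
    accuracy delta_k = c / k^(p+1) in function value."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        delta_k = c / k ** (p + 1)         # inner accuracy shrinks with k
        x = subproblem_solver(x, delta_k)  # approximate tensor step
    return x

# Toy usage with p = 1 (the gradient-method case): the model
# f(x) + <g, y - x> + (M/2)||y - x||^2 is minimized exactly by a gradient
# step; a perturbation of norm sqrt(2*delta/M) keeps the residual in the
# model's value at most delta, emulating an inexact inner solver.
if __name__ == "__main__":
    M = 2.0
    grad = lambda x: 2.0 * x               # f(x) = ||x||^2
    rng = np.random.default_rng(0)

    def solver(x, delta):
        step = x - grad(x) / M             # exact model minimizer
        noise = rng.standard_normal(x.shape)
        noise *= np.sqrt(2.0 * delta / M) / max(np.linalg.norm(noise), 1e-12)
        return step + noise                # model residual <= delta

    print(np.linalg.norm(inexact_tensor_method(solver, np.ones(5), p=1)))
```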

Citations

Affine-invariant contracting-point methods for Convex Optimization
TLDR
A general framework of Contracting-Point methods, which solve at each iteration an auxiliary subproblem restricting the smooth part of the objective function onto a contraction of the initial domain; this provides a systematic way of developing optimization methods of different orders, endowed with global complexity bounds.
Hyperfast Second-Order Local Solvers for Efficient Statistically Preconditioned Distributed Optimization
Statistical preconditioning can be used to design fast methods for distributed large-scale empirical risk minimization problems, for strongly convex and smooth loss functions, allowing fewer…
Tensor methods for finding approximate stationary points of convex functions
In this paper we consider the problem of finding $\epsilon$-approximate stationary points of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We…
Optimization Methods for Fully Composite Problems
In this paper, we propose a new Fully Composite Formulation of convex optimization problems. It includes, as a particular case, the problems with functional constraints, max-type minimization…

References

SHOWING 1-10 OF 53 REFERENCES
Implementable tensor methods in unconstrained convex optimization
  • Y. Nesterov
  • Computer Science, Mathematics
  • Math. Program.
  • 2021
TLDR
Develops new tensor methods for unconstrained convex optimization, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial (sketched below), together with an efficient technique for solving this auxiliary problem based on the recently developed relative smoothness condition.
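For orientation, the auxiliary problem in question is the minimization of a regularized Taylor model; in one common normalization (constants vary slightly across papers):

$$\Omega_{x,M}(y) \;=\; f(x) \;+\; \sum_{i=1}^{p} \frac{1}{i!}\, D^i f(x)[y-x]^i \;+\; \frac{M}{(p+1)!}\,\|y-x\|^{p+1}.$$

This is a multivariate polynomial plus a power of the norm; for convex $f$, choosing $M \geq p L_p$ (with $L_p$ the Lipschitz constant of the $p$th derivative) makes $\Omega_{x,M}$ convex, while $M \geq L_p$ already makes it an upper model of $f$ by the Taylor remainder bound.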
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
TLDR
The results partially address the problem, raised by Nesterov (2018), of incorporating adaptive strategies into high-order accelerated methods, although the strategies cannot assure the convexity of the auxiliary problem; such adaptive strategies are already popular in high-order non-convex optimization.
A Stochastic Tensor Method for Non-convex Optimization
TLDR
A stochastic optimization method that uses a fourth-order regularized model to find local minima of smooth and potentially non-convex objective functions; its implementation relies on tensor-vector products only.
On inexact solution of auxiliary problems in tensor methods for convex optimization
TLDR
This paper studies the auxiliary problems that appear in p-order tensor methods for unconstrained minimization of convex functions with ν-Hölder continuous pth derivatives, and proves a bound on the number of iterations the referred methods take to find either a suitable approximate stationary point of the tensor model or an ε-approximate stationary point of the original objective function.
Gradient methods for minimizing composite functions
  • Y. Nesterov
  • Mathematics, Computer Science
  • Math. Program.
  • 2013
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is…
Inexact basic tensor methods
In this paper, we study local convergence of high-order Tensor Methods for solving convex optimization problems with composite objective. We justify local superlinear convergence under the assumption…
Tensor Methods for Minimizing Functions with Hölder Continuous Higher-Order Derivatives
In this paper we study $p$-order methods for unconstrained minimization of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We propose tensor schemes…
Accelerating the cubic regularization of Newton’s method on convex problems
  • Y. Nesterov
  • Mathematics, Computer Science
  • Math. Program.
  • 2008
TLDR
An accelerated version of the cubic regularization of Newton's method that converges for the same problem class with order $O(1/k^{3})$, keeping the complexity of each iteration unchanged, together with an argument that for second-order schemes the class of non-degenerate problems is different from the standard one.
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
TLDR
This work shows that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates; a schematic sketch follows.
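To make the error-schedule idea concrete, here is a minimal sketch (not from the cited paper) of a proximal-gradient loop whose prox step is computed only approximately; the schedule `eps_k = c / k**3` is an assumed example of errors decreasing fast enough, and `approx_prox` is a hypothetical oracle.

```python
import numpy as np

def inexact_prox_grad(grad_f, approx_prox, x0, L, c=1.0, n_iters=100):
    """Inexact proximal-gradient loop: the prox of the nonsmooth term is
    computed to accuracy eps_k, which decreases as the iterations proceed."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        eps_k = c / k ** 3                 # decreasing prox tolerance
        x = approx_prox(x - grad_f(x) / L, eps_k)
    return x

# Toy usage: h(x) = lam * ||x||_1, whose exact prox is soft-thresholding;
# inexactness is emulated by a perturbation of norm at most eps.
if __name__ == "__main__":
    lam, L = 0.1, 2.0
    grad_f = lambda x: 2.0 * (x - 1.0)     # f(x) = ||x - 1||^2
    rng = np.random.default_rng(1)

    def approx_prox(z, eps):
        exact = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        noise = rng.standard_normal(z.shape)
        return exact + eps * noise / max(np.linalg.norm(noise), 1e-12)

    print(inexact_prox_grad(grad_f, approx_prox, np.zeros(4), L))
```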
Sub-sampled Cubic Regularization for Non-convex Optimization
TLDR
This work provides a sampling scheme that gives sufficiently accurate gradient and Hessian approximations to retain the strong global and local convergence guarantees of cubically regularized methods, and is the first work to give global convergence guarantees for a sub-sampled variant of cubic regularization on non-convex functions.