An Optimal High-Order Tensor Method for Convex Optimization

@inproceedings{Jiang2019AnOH,
  title={An Optimal High-Order Tensor Method for Convex Optimization},
  author={B. Jiang and Haoyue Wang and Shuzhong Zhang},
  booktitle={COLT},
  year={2019}
}
This paper is concerned with finding an optimal algorithm for minimizing a composite convex objective function. The basic setting is that the objective is the sum of two convex functions: the first function is smooth with up to the d-th order derivative information available, and the second function is possibly non-smooth, but its proximal tensor mappings can be computed approximately in an efficient manner. The problem is to find -- in that setting -- the best possible (optimal) iteration…
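To make the setting concrete, a minimal sketch in generic notation (the symbols $f$, $\psi$, $d$, $M$ below are illustrative, not necessarily the paper's own):
\[
\min_{x \in \mathbb{R}^n} \; F(x) := f(x) + \psi(x),
\]
where $f$ is convex with Lipschitz-continuous $d$-th order derivative and $\psi$ is convex and possibly nonsmooth. A $d$-th order proximal tensor step then minimizes the degree-$d$ Taylor model of $f$ at the current iterate $x_k$, plus a regularizer and the nonsmooth part:
\[
x_{k+1} \approx \arg\min_{y} \Big\{ \textstyle\sum_{i=0}^{d} \frac{1}{i!}\nabla^i f(x_k)[y-x_k]^i + \frac{M}{(d+1)!}\|y-x_k\|^{d+1} + \psi(y) \Big\}.
\]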
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
TLDR
The results partially address the problem of incorporating adaptive strategies into high-order accelerated methods raised by Nesterov in (Nesterov-2018), although the strategies cannot guarantee convexity of the auxiliary problem, and such adaptive strategies are already popular in high-order nonconvex optimization.
Optimal Combination of Tensor Optimization Methods
TLDR
A general framework is proposed that obtains near-optimal oracle complexity for each function in the sum separately, meaning, in particular, that the oracle for the function with the smaller Lipschitz constant is called fewer times.
Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization
TLDR
A new tensor method is proposed, which closes the gap between the lower and upper iteration complexity bounds for convex optimization problems with the objective function having a Lipschitz-continuous $p$-th order derivative, and it is shown that in practice it is faster than the best known accelerated tensor method.
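For context, the complexity gap referred to here, in generic notation: for convex objectives with Lipschitz-continuous $p$-th order derivative, $p$-th order methods face a lower bound of $\Omega\big(\epsilon^{-2/(3p+1)}\big)$ iterations, while Nesterov-style accelerated tensor methods only guarantee $O\big(\epsilon^{-1/(p+1)}\big)$; the optimal methods in this line of work attain
\[
O\big(\epsilon^{-2/(3p+1)}\big)
\]
iterations to reach an $\epsilon$-approximate minimizer, matching the lower bound up to logarithmic factors.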
On Adaptive Cubic Regularized Newton's Methods for Convex Optimization via Random Sampling
In this paper, we consider an unconstrained optimization model where the objective is a sum of a large number of possibly nonconvex functions, though overall the objective is assumed to be smooth and …
Adaptively Accelerating Cubic Regularized Newton's Methods for Convex Optimization via Random Sampling
In this paper, we consider an unconstrained optimization model where the objective is a sum of a large number of possibly nonconvex functions, though overall the objective is assumed to be smooth and …
High-order methods beyond the classical complexity bounds, I: inexact high-order proximal-point methods
In this paper, we introduce a Bi-level OPTimization (BiOPT) framework for minimizing the sum of two convex functions, where both can be nonsmooth. The BiOPT framework involves two levels of …
Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods
For solving strongly convex optimization problems, we propose and study the global convergence of variants of the A-HPE and large-step A-HPE algorithms of Monteiro and Svaiter [18]. We prove linear …
Higher-order methods for convex-concave min-max optimization and monotone variational inequalities
TLDR
The results improve upon the iteration complexity of the first-order Mirror Prox method of Nemirovski and the second-order method of Monteiro and Svaiter, and yield improved convergence rates for constrained convex-concave min-max problems and monotone variational inequalities with higher-order smoothness.
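As a rough guide to the rates involved (a sketch; exact statements and logarithmic factors vary by paper): for monotone variational inequalities with suitably smooth $p$-th order derivatives, the iteration counts referenced here scale as
\[
O\big(\epsilon^{-1}\big) \ \ (p=1,\ \text{Mirror Prox}), \qquad O\big(\epsilon^{-2/3}\big) \ \ (p=2,\ \text{Monteiro--Svaiter}), \qquad O\big(\epsilon^{-2/(p+1)}\big) \ \ (\text{general } p),
\]
so the higher-order rate interpolates between the first- and second-order cases.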
Highly smooth minimization of non-smooth problems
TLDR
The work goes beyond the previous $O(\epsilon^{-1})$ barrier in terms of $\epsilon$ dependence, and in the case of $\ell_\infty$ regression and $\ell_1$-SVM, establishes overall improvements for some parameter settings in the moderate-accuracy regime.
Towards Unified Acceleration of High-Order Algorithms under Hölder Continuity and Uniform Convexity
TLDR
A concise unified acceleration framework (UAF) is proposed, which reconciles two different high-order acceleration approaches, one by Nesterov and Baes and one by Monteiro and Svaiter, and applies directly in the general composite convex setting.

References

SHOWING 1-10 OF 36 REFERENCES
Implementable tensor methods in unconstrained convex optimization
  • Y. Nesterov
  • Math. Program.
  • 2021
TLDR
New tensor methods for unconstrained convex optimization are developed, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial, together with an efficient technique for solving that auxiliary problem based on the recently developed relative smoothness condition.
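A sketch of the auxiliary problem this summary refers to, in generic notation (not necessarily the paper's exact formulation): the tensor step minimizes the regularized $p$-th order Taylor model
\[
T_M(x) = \arg\min_{y} \Big\{ f(x) + \textstyle\sum_{i=1}^{p} \frac{1}{i!}\nabla^i f(x)[y-x]^i + \frac{M}{(p+1)!}\|y-x\|^{p+1} \Big\},
\]
and the reported sufficient condition making the step implementable is that the model is itself convex once $M \ge p L_p$, with $L_p$ the Lipschitz constant of $\nabla^p f$.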
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
TLDR
The results partially address the problem of incorporating adaptive strategies into high-order accelerated methods raised by Nesterov in (Nesterov-2018), although the strategies cannot guarantee convexity of the auxiliary problem, and such adaptive strategies are already popular in high-order nonconvex optimization.
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
TLDR
An accelerated proximal gradient method is presented for problems where the smooth part of the objective function is also strongly convex; the method incorporates an efficient line-search procedure and achieves the optimal iteration complexity for such composite optimization problems.
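A minimal sketch of the composite step such a method builds on (generic notation, assuming an objective $f+\psi$ with $f$ being $L$-smooth and $\mu$-strongly convex):
\[
x_{k+1} = \operatorname{prox}_{\lambda\psi}\big(y_k - \lambda \nabla f(y_k)\big), \qquad \operatorname{prox}_{\lambda\psi}(z) := \arg\min_{u}\Big\{ \psi(u) + \tfrac{1}{2\lambda}\|u-z\|^2 \Big\},
\]
applied at an extrapolated point $y_k$, with the stepsize $\lambda$ chosen by line search; the optimal iteration complexity in this strongly convex composite setting scales as $O\big(\sqrt{L/\mu}\,\log(1/\epsilon)\big)$.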
An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods
TLDR
This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) framework, as well as a special version of it in which a large stepsize condition is imposed.
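For orientation, one common form of the HPE relative-error condition (a sketch, not necessarily the exact criterion of this paper): at iteration $k$, find $(y_k, v_k, \varepsilon_k, \lambda_k)$ with
\[
v_k \in \partial_{\varepsilon_k} f(y_k), \qquad \|\lambda_k v_k + y_k - x_{k-1}\|^2 + 2\lambda_k \varepsilon_k \le \sigma^2 \|y_k - x_{k-1}\|^2,
\]
and set $x_k = x_{k-1} - \lambda_k v_k$; the large-step variant additionally imposes a lower bound of the form $\lambda_k \|y_k - \tilde{x}_{k-1}\| \ge \theta$ at the extrapolated point $\tilde{x}_{k-1}$, which is what yields the improved $O(\epsilon^{-2/7})$ rate for second-order implementations.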
Relatively Smooth Convex Optimization by First-Order Methods, and Applications
TLDR
A notion of “relative smoothness” and relative strong convexity is introduced, determined relative to a user-specified “reference function” $h(\cdot)$ (which should be computationally tractable for algorithms), and it is shown that many differentiable convex functions are relatively smooth with respect to a correspondingly fairly simple reference function.
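A sketch of the definition being summarized (standard notation; $D_h$ is the Bregman divergence of the reference function $h$): $f$ is $L$-smooth relative to $h$ if for all $x, y$
\[
f(y) \le f(x) + \langle \nabla f(x), y - x\rangle + L\, D_h(y, x), \qquad D_h(y, x) := h(y) - h(x) - \langle \nabla h(x), y - x\rangle,
\]
and $\mu$-strongly convex relative to $h$ if the analogous lower bound holds with $\mu\, D_h(y, x)$.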
An optimal method for stochastic composite optimization
TLDR
The accelerated stochastic approximation (AC-SA) algorithm based on Nesterov's optimal method for smooth convex programming is introduced, and it is shown that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for stochastic composite optimization.
Iteration-Complexity of a Newton Proximal Extragradient Method for Monotone Variational Inequalities and Inclusion Problems
TLDR
Both pointwise and ergodic iteration-complexity results are derived for the aforementioned first-order method using corresponding results obtained here for a subfamily of the HPE framework.
Performance of first-order methods for smooth convex minimization: a novel approach
TLDR
A novel approach is presented for analyzing the worst-case performance of first-order black-box optimization methods; it focuses on smooth unconstrained convex minimization over Euclidean space and derives a new and tight analytical bound on performance.
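A rough illustration of the performance-estimation idea (generic notation; a sketch rather than the paper's exact formulation): the worst-case gap of a fixed-step first-order method run for $N$ iterations is itself posed as an optimization problem,
\[
\max_{f,\,x_0} \; f(x_N) - f(x_\star) \quad \text{s.t.}\quad f \text{ convex and } L\text{-smooth}, \ \ \|x_0 - x_\star\| \le R, \ \ x_1,\dots,x_N \text{ produced by the method from } x_0,
\]
which is then relaxed to a tractable semidefinite program whose value upper-bounds the true worst case.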
Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
TLDR
This work establishes a novel complexity bound for second-order criticality under the same problem assumptions as for first-order criticality, namely, that the $p$-th derivative tensor is Lipschitz continuous and that $f(x)$ is bounded from below.
Iteration-complexity of a Rockafellar's proximal method of multipliers for convex programming based on second-order approximations
This paper studies the iteration-complexity of a new primal-dual algorithm based on Rockafellar's proximal method of multipliers (PMM) for solving smooth convex programming problems with …