Corpus ID: 208291444

Adaptive Catalyst for smooth convex optimization

@inproceedings{Ivanova2019AdaptiveCF,
  title={Adaptive Catalyst for smooth convex optimization},
  author={Anastasiya Ivanova and Dmitry Pasechnyuk and Dmitry Grishchenko and Egor Shulgin and Alexander V. Gasnikov},
  year={2019}
}
In 2015, the universal framework Catalyst appeared, which allows one to accelerate almost arbitrary non-accelerated deterministic and randomized algorithms for smooth convex optimization problems Lin et al. (2015). This technique has found many applications in machine learning due to its ability to handle sum-type objective functions. A significant part of the Catalyst approach is the accelerated proximal outer gradient method, which is used as an envelope for the non-accelerated inner algorithm…
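For orientation, below is a minimal, hedged sketch of the kind of accelerated proximal envelope the abstract refers to: an outer loop that (i) approximately minimizes the regularized subproblem f(x) + (kappa/2)*||x - y_k||^2 with a non-accelerated inner method (plain gradient descent here) and (ii) applies a Nesterov-type extrapolation. The function name catalyst_envelope, the fixed inner-iteration budget, and the toy quadratic are illustrative assumptions, not the paper's exact procedure or stopping rule.

```python
import numpy as np

def catalyst_envelope(grad_f, x0, L, kappa=1.0, n_outer=50, n_inner=100):
    """Sketch of a Catalyst-style accelerated proximal envelope (convex case).

    Each outer step approximately minimizes the regularized subproblem
        h_k(x) = f(x) + (kappa / 2) * ||x - y_k||^2
    with plain gradient descent standing in for an arbitrary non-accelerated
    inner method, then applies Nesterov-type extrapolation.  The fixed inner
    budget replaces the accuracy-based stopping rule used in the literature.
    """
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    y = x.copy()
    alpha_prev = 1.0                  # extrapolation parameter, convex case (q = 0)
    step = 1.0 / (L + kappa)          # gradient step for the (L + kappa)-smooth subproblem

    for _ in range(n_outer):
        # Inner loop: approximate proximal step, warm-started at the current iterate.
        z = x.copy()
        for _ in range(n_inner):
            grad_h = grad_f(z) + kappa * (z - y)
            z = z - step * grad_h
        x_prev, x = x, z

        # Extrapolation: alpha_k solves alpha^2 = (1 - alpha) * alpha_prev^2.
        a2 = alpha_prev ** 2
        alpha = 0.5 * (-a2 + np.sqrt(a2 ** 2 + 4.0 * a2))
        beta = alpha_prev * (1.0 - alpha_prev) / (alpha_prev ** 2 + alpha)
        y = x + beta * (x - x_prev)
        alpha_prev = alpha

    return x

# Toy usage: minimize the badly conditioned quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 100.0])
x_min = catalyst_envelope(lambda x: A @ x, x0=np.array([10.0, 10.0]), L=100.0)
```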

Citations

Accelerated meta-algorithm for convex optimization
The proposed meta-algorithm is more general than those in the literature and allows one to obtain better convergence rates and practical performance in several settings, as well as nearly optimal methods for minimizing smooth functions with Lipschitz derivatives of arbitrary order.
Oracle Complexity Separation in Convex Optimization
This work proposes a generic framework to combine optimal algorithms for different types of oracles in order to achieve separate optimal oracle complexity for each block, i.e., for each block the corresponding oracle is called the optimal number of times for a given accuracy.
Lower bounds for conditional gradient type methods for minimizing smooth strongly convex functions
In this paper, we consider conditional gradient methods. These are methods that use a linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the…
Near-Optimal Hyperfast Second-Order Method for Convex Optimization
In this paper, we present a new Hyperfast Second-Order Method with convergence rate $O(N^{-5})$ up to a logarithmic factor for convex functions with Lipschitz third derivative. This method…
Accelerated gradient sliding and variance reduction.
We consider a sum-type strongly convex optimization problem (first term) with a smooth convex composite that is not proximal-friendly (second term). We show that the complexity of this problem can be split into…
On the Computational Efficiency of Catalyst Accelerated Coordinate Descent
A proximally accelerated coordinate descent method is proposed that achieves efficient per-iteration algorithmic complexity, takes advantage of data sparsity, and demonstrates faster convergence in comparison with standard methods.
Accelerated Proximal Envelopes: Application to the Coordinate Descent Method
The paper is devoted to a particular case of applying universal accelerated proximal envelopes to obtain computationally efficient accelerated variants of methods used for solving…
Solving smooth min-min and min-max problems by mixed oracle algorithms
In this paper we consider two types of problems that have some similarity in their structure, namely, min-min problems and min-max saddle-point problems. Our approach is based on considering the…
Accelerated Gradient Sliding for Minimizing a Sum of Functions
We propose a new way of justifying the accelerated gradient sliding of G. Lan, which allows one to extend the sliding technique to a combination of an accelerated gradient method with an…
Contracting Proximal Methods for Smooth Convex Optimization
This paper proposes new accelerated methods for smooth convex optimization, called Contracting Proximal Methods, and provides a global convergence analysis for a general scheme admitting inexactness in solving the auxiliary subproblem.

References

Showing 1-10 of 36 references
Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
This paper gives practical guidelines for using Catalyst and presents a comprehensive theoretical analysis of its global complexity, showing that Catalyst applies to a large class of algorithms, including gradient descent, block coordinate descent, and incremental algorithms such as SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants.
Catalyst Acceleration for Gradient-Based Non-Convex Optimization
We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms originally designed for minimizing convex functions. When the objective is convex, the proposed…
A Universal Catalyst for First-Order Optimization
This work introduces a generic scheme for accelerating first-order optimization methods in the sense of Nesterov, which builds upon a new analysis of the accelerated proximal point algorithm, and shows that acceleration is useful in practice, especially for ill-conditioned problems where the authors measure significant improvements.
Accelerated Alternating Minimization
This work introduces an accelerated alternating minimization method with a $1/k^2$ convergence rate, where $k$ is the iteration counter, applies it to the entropy-regularized optimal transport problem, and shows experimentally that it outperforms Sinkhorn's algorithm.
Accelerating Rescaled Gradient Descent: Fast Optimization of Smooth Functions
A new first-order algorithm, called rescaled gradient descent (RGD), is introduced, and it is shown that RGD achieves a faster convergence rate than gradient descent provided the function is strongly smooth, a natural generalization of the standard smoothness assumption on the objective function.
Reachability of Optimal Convergence Rate Estimates for High-Order Numerical Convex Optimization Methods
The Monteiro–Svaiter accelerated hybrid proximal extragradient method (2013), with one step of Newton's method used at every iteration for the approximate solution of an auxiliary problem, is…
An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods
This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) framework, as well as a special version of it in which a large stepsize condition is imposed.
An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
A non-accelerated derivative-free algorithm is proposed with a complexity bound similar to that of the stochastic-gradient-based algorithm, i.e., the bound does not have any dimension-dependent factor except a logarithmic one.
An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization
This paper proposes a non-accelerated and an accelerated directional derivative method with a complexity bound similar to that of the gradient-based algorithm, i.e., without any dimension-dependent factor.
Stochastic Variance Reduction Methods for Saddle-Point Problems
Convex-concave saddle-point problems where the objective functions may be split into many components are considered, and recent stochastic variance reduction methods are extended to provide the first large-scale linearly convergent algorithms.