# Adaptive Catalyst for smooth convex optimization

@inproceedings{Ivanova2019AdaptiveCF,
  title={Adaptive Catalyst for smooth convex optimization},
  author={Anastasiya Ivanova and Dmitry Pasechnyuk and Dmitry Grishchenko and Egor Shulgin and Alexander V. Gasnikov},
  year={2019}
}

In 2015, Lin et al. introduced Catalyst, a universal framework that allows one to accelerate almost arbitrary non-accelerated deterministic and randomized algorithms for smooth convex optimization problems (Lin et al., 2015). This technique has found many applications in machine learning due to its ability to handle sum-type objective functions. A significant part of the Catalyst approach is an accelerated proximal outer gradient method, which is used as an envelope for a non-accelerated inner algorithm…
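The envelope idea described above can be sketched minimally: an outer accelerated proximal-point loop that repeatedly calls a non-accelerated inner solver on a regularized subproblem. Everything below (the function names, `kappa`, the step sizes, and the extrapolation schedule) is an illustrative assumption, not the paper's actual algorithm.

```python
import numpy as np

def inner_gd(grad, x0, lr, steps):
    """Plain (non-accelerated) gradient descent, used as the inner solver."""
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def catalyst(grad_f, x0, kappa=1.0, outer_iters=50, inner_steps=100, lr=0.05):
    """Toy Catalyst-style envelope: each outer step approximately minimizes
    the regularized subproblem f(x) + (kappa / 2) * ||x - y||^2 with the
    inner solver, then applies a Nesterov-type extrapolation step."""
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(outer_iters):
        # Gradient of the kappa-regularized subproblem around the prox center y.
        sub_grad = lambda x, y=y: grad_f(x) + kappa * (x - y)
        x = inner_gd(sub_grad, y, lr, inner_steps)
        beta = k / (k + 3)  # simple illustrative extrapolation schedule
        y = x + beta * (x - x_prev)
        x_prev = x
    return x

# Usage: minimize an ill-conditioned quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 2.0])
x = catalyst(lambda x: A @ x - b, np.zeros(2))
```

The point of the construction is that the inner solver only ever sees the well-conditioned subproblem (its strong convexity is boosted by `kappa`), while the outer extrapolation recovers an accelerated rate.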

#### 11 Citations

Accelerated meta-algorithm for convex optimization

- Mathematics, Computer Science
- 2020

The proposed meta-algorithm is more general than those in the literature; it yields better convergence rates and practical performance in several settings, as well as nearly optimal methods for minimizing smooth functions with Lipschitz derivatives of arbitrary order.

Oracle Complexity Separation in Convex Optimization

- Mathematics, Computer Science
- 2020

This work proposes a generic framework to combine optimal algorithms for different types of oracles in order to achieve separate optimal oracle complexity for each block, i.e. for each block the corresponding oracle is called the optimal number of times for a given accuracy.

Lower bounds for conditional gradient type methods for minimizing smooth strongly convex functions

- Mathematics
- 2020

In this paper, we consider conditional gradient methods. These are methods that use a linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the…

Near-Optimal Hyperfast Second-Order Method for Convex Optimization

- Mathematics
- 2020

In this paper, we present a new Hyperfast Second-Order Method with convergence rate $O(N^{-5})$ up to a logarithmic factor for convex functions with Lipschitz third derivative. This method…

Accelerated gradient sliding and variance reduction.

- Mathematics
- 2019

We consider a sum-type strongly convex optimization problem (the first term) with a smooth convex, non-proximal-friendly composite (the second term). We show that the complexity of this problem can be split into…

On the Computational Efficiency of Catalyst Accelerated Coordinate Descent

- Computer Science, Mathematics
- MOTOR
- 2021

A proximally accelerated coordinate descent method is proposed that achieves efficient per-iteration algorithmic complexity, allows taking advantage of data sparseness, and demonstrates faster convergence in comparison with standard methods.
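For context, the kind of non-accelerated inner method such Catalyst-style envelopes wrap can be sketched as plain random coordinate descent. The names, step size, and iteration count below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def coordinate_descent(grad, x0, lr=0.1, iters=2000, seed=0):
    """Random coordinate descent: at each step pick one coordinate
    uniformly at random and take a partial-gradient step on it.
    Step size and iteration count are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(x.size)
        x[i] -= lr * grad(x)[i]  # update only coordinate i
    return x

# Usage: minimize f(x) = 0.5 x^T A x - b^T x with a diagonal A.
A = np.diag([1.0, 4.0])
b = np.array([1.0, 2.0])
x = coordinate_descent(lambda x: A @ x - b, np.zeros(2))
```

In practice the per-coordinate partial derivative is computed directly (cheaply, exploiting sparsity) rather than by evaluating the full gradient as this sketch does.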

Accelerated Proximal Envelopes: Application to the Coordinate Descent Method

- Mathematics
- 2021

The article is devoted to a particular case of applying universal accelerated proximal envelopes to obtain computationally efficient accelerated variants of methods used for solving…

Solving smooth min-min and min-max problems by mixed oracle algorithms

- Mathematics
- 2021

In this paper we consider two types of problems which have some similarity in their structure, namely, min-min problems and min-max saddle-point problems. Our approach is based on considering the…

Accelerated Gradient Sliding for Minimizing a Sum of Functions

- Mathematics
- 2020

We propose a new way of justifying the accelerated gradient sliding of G. Lan, which allows one to extend the sliding technique to a combination of an accelerated gradient method with an…

Contracting Proximal Methods for Smooth Convex Optimization

- Computer Science, Mathematics
- SIAM J. Optim.
- 2020

This paper proposes new accelerated methods for smooth convex optimization, called Contracting Proximal Methods, and provides a global convergence analysis for a general scheme admitting inexactness in solving the auxiliary subproblem.

#### References

Showing 1–10 of 36 references

Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice

- Computer Science, Mathematics
- J. Mach. Learn. Res.
- 2017

This paper gives practical guidelines to use Catalyst and presents a comprehensive theoretical analysis of its global complexity, showing that Catalyst applies to a large class of algorithms, including gradient descent, block coordinate descent, incremental algorithms such as SAG, SAGA, SDCA, SVRG, Finito/MISO and their proximal variants.

Catalyst Acceleration for Gradient-Based Non-Convex Optimization

- Mathematics
- 2017

We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms originally designed for minimizing convex functions. When the objective is convex, the proposed…

A Universal Catalyst for First-Order Optimization

- Computer Science, Mathematics
- NIPS
- 2015

This work introduces a generic scheme for accelerating first-order optimization methods in the sense of Nesterov, which builds upon a new analysis of the accelerated proximal point algorithm, and shows that acceleration is useful in practice, especially for ill-conditioned problems where the authors measure significant improvements.

Accelerated Alternating Minimization

- Computer Science, Mathematics
- ArXiv
- 2019

This work introduces an accelerated alternating minimization method with a $1/k^2$ convergence rate, where $k$ is the iteration counter, applies it to the entropy-regularized optimal transport problem, and shows experimentally that it outperforms Sinkhorn's algorithm.
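The baseline that such accelerated schemes improve upon is plain alternating minimization: minimize exactly over one block of variables while holding the other fixed, then swap. A minimal sketch, with a made-up two-block objective for illustration:

```python
def alternating_minimization(argmin_x, argmin_y, x0, y0, iters=100):
    """Plain (non-accelerated) alternating minimization over two blocks:
    at each step, minimize exactly over one block with the other fixed."""
    x, y = x0, y0
    for _ in range(iters):
        x = argmin_x(y)  # x <- argmin over x of f(x, y)
        y = argmin_y(x)  # y <- argmin over y of f(x, y)
    return x, y

# Usage: f(x, y) = (x - 1)^2 + (y - 2)^2 + (x - y)^2, whose block-wise
# minimizers are closed-form; the scheme contracts to the fixed point (4/3, 5/3).
x, y = alternating_minimization(lambda y: (1 + y) / 2,
                                lambda x: (2 + x) / 2,
                                0.0, 0.0)
```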

Accelerating Rescaled Gradient Descent: Fast Optimization of Smooth Functions

- Computer Science, Mathematics
- NeurIPS
- 2019

A new first-order algorithm, called rescaled gradient descent (RGD), is introduced, and it is shown that RGD achieves a faster convergence rate than gradient descent provided the function is strongly smooth -- a natural generalization of the standard smoothness assumption on the objective function.

Reachability of Optimal Convergence Rate Estimates for High-Order Numerical Convex Optimization Methods

- 2019

The Monteiro–Svaiter accelerated hybrid proximal extragradient method (2013) with one step of Newton’s method used at every iteration for the approximate solution of an auxiliary problem is…

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods

- Mathematics, Computer Science
- SIAM J. Optim.
- 2013

This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) framework, as well as a special version of it, where a large stepsize condition is imposed.

An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization

- Mathematics, Computer Science
- ArXiv
- 2018

A non-accelerated derivative-free algorithm is proposed with a complexity bound similar to that of the stochastic-gradient-based algorithm; the bound does not have any dimension-dependent factor except a logarithmic one.

An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization

- Computer Science, Mathematics
- Eur. J. Oper. Res.
- 2021

This paper proposes a non-accelerated and an accelerated directional derivative method with a complexity bound similar to that of the gradient-based algorithm, that is, without any dimension-dependent factor.

Stochastic Variance Reduction Methods for Saddle-Point Problems

- Computer Science, Mathematics
- NIPS
- 2016

Convex-concave saddle-point problems where the objective functions may be split into many components are considered, and recent stochastic variance reduction methods are extended to provide the first large-scale linearly convergent algorithms.