# First-Order Methods for Convex Optimization

```bibtex
@article{Dvurechensky2021FirstOrderMF,
  title   = {First-Order Methods for Convex Optimization},
  author  = {Pavel E. Dvurechensky and Mathias Staudigl and Shimrit Shtern},
  journal = {EURO J. Comput. Optim.},
  year    = {2021},
  volume  = {9},
  pages   = {100015}
}
```

First-order methods for solving convex optimization problems have been at the forefront of mathematical optimization in the last 20 years. The rapid development of this important class of algorithms is motivated by success stories reported in various applications, most importantly machine learning, signal processing, imaging and control theory. First-order methods have the potential to provide low-accuracy solutions at low computational complexity, which makes them an attractive…
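To make the per-iteration cost concrete, here is a minimal sketch (illustrative only, not code from the survey) of plain gradient descent on a smooth convex quadratic; each iteration touches the objective only through a single gradient evaluation:

```python
import numpy as np

def gradient_descent(grad, x0, step, n_iters=1000):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k).

    One gradient evaluation per iteration is the defining feature
    of a first-order method.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad(x)
    return x

# Example: minimize f(x) = 0.5 * ||A x - b||^2, a smooth convex problem.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
x_star = gradient_descent(lambda x: A.T @ (A @ x - b), x0=np.zeros(2), step=0.05)
# Since A is invertible, the minimizer solves A x = b, i.e. x = (0.4, -0.2).
```

Each step is so cheap that for moderate accuracy such loops are often competitive with far more sophisticated methods; the step size must stay below 2/L, where L is the Lipschitz constant of the gradient.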

## 5 Citations

Hyperfast Second-Order Local Solvers for Efficient Statistically Preconditioned Distributed Optimization

- Mathematics
- 2021

Statistical preconditioning can be used to design fast methods for distributed large-scale empirical risk minimization problems, for strongly convex and smooth loss functions, allowing fewer…

Generalized Self-Concordant Analysis of Frank-Wolfe algorithms

- Mathematics
- 2020

Projection-free optimization via different variants of the Frank-Wolfe (FW) method has become one of the cornerstones in large scale optimization for machine learning and computational statistics.…

A New Randomized Primal-Dual Algorithm for Convex Optimization with Optimal Last Iterate Rates

- Mathematics
- 2020

We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems, which covers different existing variants and model…

Stopping rules for accelerated gradient methods with additive noise in gradient

- Mathematics
- 2021

In this article, we investigate an accelerated first-order method, namely, the method of similar triangles, which is optimal in the class of convex (strongly convex) problems with a Lipschitz…

Recent Theoretical Advances in Non-Convex Optimization

- Mathematics, Computer Science · ArXiv
- 2020

An overview of recent theoretical results on global performance guarantees of optimization algorithms for non-convex optimization, together with a list of problems whose global minimizer can be found efficiently by exploiting the structure of the problem as much as possible.

## References

Showing 1–10 of 307 references.

First-order methods of smooth convex optimization with inexact oracle

- Mathematics, Computer Science · Math. Program.
- 2014

It is demonstrated that the superiority of fast gradient methods over classical ones is no longer absolute when an inexact oracle is used, and it is proved that, contrary to simple gradient schemes, fast gradient methods must necessarily suffer from error accumulation.

Stochastic first order methods in smooth convex optimization

- Mathematics
- 2011

In this paper, we are interested in the development of efficient first-order methods for convex optimization problems in the simultaneous presence of smoothness of the objective function and…

A Universal Catalyst for First-Order Optimization

- Computer Science, Mathematics · NIPS
- 2015

This work introduces a generic scheme for accelerating first-order optimization methods in the sense of Nesterov, built upon a new analysis of the accelerated proximal point algorithm, and shows that acceleration is useful in practice, especially for ill-conditioned problems where the authors measure significant improvements.
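The accelerated proximal point idea behind a scheme of this kind can be sketched as follows. This is a rough illustration under simplifying assumptions (a fixed extrapolation parameter and a fixed number of inner gradient steps instead of a proper inner stopping criterion), not the authors' implementation:

```python
import numpy as np

def catalyst_sketch(grad_f, x0, kappa=1.0, beta=0.5,
                    n_outer=30, inner_steps=50, inner_lr=0.05):
    """Outer loop of a Catalyst-style accelerated proximal point method.

    Each outer iteration approximately minimizes the regularized
    subproblem  f(x) + (kappa/2) * ||x - y||^2  with a few inner
    gradient steps, then applies a Nesterov-style extrapolation.
    """
    x_prev = x = np.asarray(x0, dtype=float)
    y = x.copy()
    for _ in range(n_outer):
        z = x.copy()
        for _ in range(inner_steps):  # inexact inner solve
            z = z - inner_lr * (grad_f(z) + kappa * (z - y))
        x_prev, x = x, z
        y = x + beta * (x - x_prev)   # extrapolation step
    return x

# Example: a small strongly convex quadratic, f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
x_hat = catalyst_sketch(lambda x: A.T @ (A @ x - b), np.zeros(2))
```

The point of the wrapper is that any method able to solve the well-conditioned inner subproblem can serve as the inner loop; the added kappa-regularization is what improves the conditioning.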

A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization

- Mathematics, Computer Science · SIAM J. Optim.
- 2018

This work proposes a new first-order primal-dual optimization framework for a convex optimization template with broad applications, demonstrates relations with the augmented Lagrangian method, and shows how to exploit strongly convex objectives with rigorous convergence rate guarantees.

Primal–dual accelerated gradient methods with small-dimensional relaxation oracle

- Mathematics, Computer Science
- 2018

It is demonstrated how in practice one can efficiently use the combination of line-search and primal-duality by considering a convex optimization problem with a simple structure (for example, linearly constrained).

Randomized first order algorithms with applications to ℓ1-minimization

- Mathematics, Computer Science · Math. Program.
- 2013

It is demonstrated that when seeking medium-accuracy solutions of large-scale ℓ1-minimization problems, the proposed randomized first-order algorithms significantly outperform (and progressively more so as the problem size grows) the state-of-the-art deterministic methods.

Dual subgradient algorithms for large-scale nonsmooth learning problems

- Computer Science, Mathematics · Math. Program.
- 2014

This work proposes a novel approach to solving nonsmooth optimization problems arising in learning applications where a Fenchel-type representation of the objective function is available, and requires the problem domain to admit a Linear Optimization oracle: the ability to efficiently maximize a linear form on the domain of the primal problem.

Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets

- Mathematics, Computer Science · ICML
- 2015

This paper proves that the vanilla FW method converges at a rate of 1/t², and shows that various balls induced by ℓp norms, Schatten norms and group norms are strongly convex, while linear optimization over these sets is straightforward and admits a closed-form solution.
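As an illustration of such a closed-form linear minimization oracle (a sketch for the Euclidean ball only; the paper also covers ℓp, Schatten and group-norm balls), a vanilla FW loop needs no projections at all:

```python
import numpy as np

def frank_wolfe_l2_ball(grad, radius, dim, n_iters=100):
    """Vanilla Frank-Wolfe over the Euclidean ball of a given radius.

    The linear minimization oracle over an l2 ball has the closed
    form  argmin_{||s|| <= r} <g, s> = -r * g / ||g||,  so every
    iterate is a convex combination of feasible points.
    """
    x = np.zeros(dim)
    for t in range(n_iters):
        g = grad(x)
        s = -radius * g / np.linalg.norm(g)  # closed-form LMO
        gamma = 2.0 / (t + 2.0)              # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s    # stays inside the ball
    return x

# Example: minimize 0.5 * ||x - c||^2 over the unit ball with c outside it;
# the constrained optimum is c / ||c||.
c = np.array([2.0, 1.0])
x_hat = frank_wolfe_l2_ball(lambda x: x - c, radius=1.0, dim=2)
```

Because every iterate is a convex combination of LMO outputs, feasibility is maintained without any projection step, which is the main appeal of projection-free methods.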

Stochastic intermediate gradient method for convex optimization problems

- Mathematics
- 2016

New first-order methods are introduced for solving convex optimization problems from a fairly broad class. For composite optimization problems with an inexact stochastic oracle, a stochastic…

Exactness, inexactness and stochasticity in first-order methods for large-scale convex optimization

- Mathematics
- 2013

The goal of this thesis is to extend the analysis and the scope of first-order methods of smooth convex optimization. We consider three challenging difficulties: inexact first-order information, lack…