# Performance of first-order methods for smooth convex minimization: a novel approach

@article{Drori2012PerformanceOF,
title={Performance of first-order methods for smooth convex minimization: a novel approach},
author={Yoel Drori and Marc Teboulle},
journal={Mathematical Programming},
year={2012},
volume={145},
pages={451--482}
}
• Published 14 June 2012 · Computer Science · Mathematical Programming
We introduce a novel approach for analyzing the worst-case performance of first-order black-box optimization methods. We focus on smooth unconstrained convex minimization over the Euclidean space. Our approach relies on the observation that by definition, the worst-case behavior of a black-box optimization method is by itself an optimization problem, which we call the performance estimation problem (PEP). We formulate and analyze the PEP for two classes of first-order algorithms. We first apply…
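As a concrete illustration of the kind of result the PEP approach delivers (a reconstruction of the known tight example, not code from the paper), the worst-case function for fixed-step gradient descent with step size 1/L is of Huber type: quadratic near the minimizer, linear away from it. The sketch below, assuming smoothness constant L and initial distance R, checks numerically that N gradient steps on this function attain the bound LR²/(4N+2):

```python
import math

def worst_case_huber(L, R, N):
    """Huber-type function on which gradient descent with step 1/L is slowest
    (reconstruction of the known tight example for the LR^2/(4N+2) bound)."""
    t = R / (2 * N + 1)                  # boundary between quadratic and linear pieces
    s = L * t                            # slope of the linear piece
    def f(x):
        return 0.5 * L * x * x if abs(x) <= t else s * abs(x) - 0.5 * L * t * t
    def grad(x):
        return L * x if abs(x) <= t else math.copysign(s, x)
    return f, grad

L, R, N = 1.0, 1.0, 5
f, grad = worst_case_huber(L, R, N)
x = R                                    # start at distance R from the minimizer x* = 0
for _ in range(N):
    x -= grad(x) / L                     # fixed-step gradient descent
gap = f(x)                               # f(x*) = 0, so this is f(x_N) - f*
print(gap, L * R * R / (4 * N + 2))      # both are 1/22: the bound is attained exactly
```

Every iterate stays in the linear region, so each step makes the same fixed progress, which is what makes this function the hardest instance for the method.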
## 189 Citations
• Computer Science · Mathematical Programming · 2019
We describe a novel constructive technique for devising efficient first-order methods for a wide range of large-scale convex minimization settings, including smooth, non-smooth, and strongly convex …
This thesis forms a generic optimization problem looking for the worst-case scenarios of first-order methods in convex optimization, and transforms PEPs into solvable finite-dimensional semidefinite programs, from which one obtains worst-case guarantees and worst-case functions, along with the corresponding explicit proofs.
• Computer Science, Mathematics · 2022
This study analyzes worst-case convergence guarantees of first-order optimization methods over a function class extending that of smooth and convex functions, and shows how the analysis can be leveraged to obtain convergence guarantees over more complex classes of functions.
• Computer Science · SIAM J. Optim. · 2017
A new analytical worst-case guarantee is presented for the proximal point algorithm that is twice as good as the previously known one, and the standard worst-case guarantee for the conditional gradient method is improved by more than a factor of two.
• Mathematics, Computer Science · Math. Program. · 2017
We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex …
• Computer Science, Mathematics · ArXiv · 2021
We present an optimal gradient method for smooth (possibly strongly) convex optimization. The method is optimal in the sense that its worst-case bound exactly matches the lower bound on the oracle …
• Computer Science · Journal of Optimization Theory and Applications · 2021
The SOS framework provides a promising new approach for certifying improved rates of convergence by means of higher-order SOS certificates and is used to derive new results for noisy gradient descent with inexact line search methods.
• Computer Science · 2017 IEEE 56th Annual Conference on Decision and Control (CDC)
A Matlab toolbox that automatically computes tight worst-case performance guarantees for a broad class of first-order methods for convex optimization, which includes those performing explicit, projected, proximal, conditional and inexact (sub)gradient steps.
• Computer Science · J. Optim. Theory Appl. · 2018
We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function, whose proximal operator is …

## References

Showing 1–10 of 25 references

• Computer Science, Mathematics · Math. Program. · 2011
This paper discusses first-order methods suitable for solving primal-dual convex and nonsmooth minimization reformulations of the cone programming problem, and proposes a variant of Nesterov’s optimal method which has outperformed the latter one in the authors' computational experiments.
• Computer Science · Math. Program. · 2013
The first-order algorithm for convex programming described by Nesterov in his book is modified, and an adaptive procedure for estimating a strong convexity constant for the function is developed.
• Computer Science · MPS-SIAM series on optimization · 2001
The authors present the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming as well as their numerous applications in engineering.
• Computer Science, Mathematics · SIAM J. Imaging Sci. · 2009
A new fast iterative shrinkage-thresholding algorithm (FISTA) is presented that preserves the computational simplicity of ISTA but has a global rate of convergence that is proven to be significantly better, both theoretically and practically.
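FISTA itself is short enough to sketch. The toy problem below (a hypothetical random lasso instance, not an example from the paper) implements the accelerated iteration next to plain ISTA and compares the objectives reached after the same number of steps:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, steps):
    """Plain proximal gradient for min 0.5||Ax - b||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

def fista(A, b, lam, steps):
    """FISTA: same prox-gradient step, applied at an extrapolated point."""
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
x_i, x_f = ista(A, b, lam, 50), fista(A, b, lam, 50)
print(obj(x_i), obj(x_f))    # FISTA typically reaches a lower objective by now
```

The only difference between the two loops is the extrapolation step, which upgrades the O(1/k) rate of ISTA to O(1/k²).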
A comprehensive introduction to the subject, this book shows in detail how such problems can be solved in many different fields …
• A. Beck · Mathematics · SIAM J. Optim. · 2007
This work constructs a specially devised semidefinite relaxation (SDR) and dual for the QMP problem and shows that under some mild conditions strong duality holds for QMP problems with at most $r$ constraints.
Two modifications to Nesterov's algorithms for minimizing convex functions in relative scale are proposed: the first, based on a bisection technique, leads to improved theoretical iteration complexity, and the second is a heuristic for avoiding restarting behavior.
• Mathematics · 2000
Let H be a real Hilbert space and Φ : H → R a continuously differentiable function whose gradient is Lipschitz continuous on bounded sets. We study the nonlinear dissipative dynamical system $\ddot{x}(t) + \gamma\dot{x}(t) + \nabla\Phi(x(t)) = 0$ …