# The exact information-based complexity of smooth convex minimization

```bibtex
@article{Drori2017TheEI,
  title   = {The exact information-based complexity of smooth convex minimization},
  author  = {Yoel Drori},
  journal = {J. Complex.},
  year    = {2017},
  volume  = {39},
  pages   = {1-16}
}
```

We obtain a new lower bound on the information-based complexity of first-order minimization of smooth and convex functions. We show that the bound matches the worst-case performance of the recently introduced Optimized Gradient Method, thereby establishing that the bound is tight and can be realized by an efficient algorithm. The proof is based on a novel construction technique of smooth and convex functions.
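As context, here is a minimal sketch of the Optimized Gradient Method (OGM) referenced in the abstract. The iteration below follows Kim and Fessler's published description of OGM, not anything stated in this paper, and the worst-case guarantee quoted in the docstring is the known OGM bound that the lower bound here is shown to match:

```python
import math

def ogm(grad, x0, L, N):
    """N steps of the Optimized Gradient Method (OGM) of Kim & Fessler
    for an L-smooth convex function f, given its gradient as a callable.
    OGM's worst-case guarantee, f(x_N) - f* <= L*||x0 - x*||^2 / (2*theta_N^2),
    is exactly the lower bound this paper establishes for the function class."""
    theta = 1.0
    x, y = list(x0), list(x0)
    for i in range(N):
        # Gradient step taken from the secondary iterate x_i.
        y_new = [xj - gj / L for xj, gj in zip(x, grad(x))]
        # theta recursion; the final step uses a larger coefficient (8 vs. 4).
        c = 8.0 if i == N - 1 else 4.0
        theta_new = (1 + math.sqrt(1 + c * theta * theta)) / 2
        # Momentum combination of the primary (y) and secondary (x) sequences.
        x = [yn + (theta - 1) / theta_new * (yn - yo)
                + theta / theta_new * (yn - xo)
             for yn, yo, xo in zip(y_new, y, x)]
        y, theta = y_new, theta_new
    return x
```

Since the theta parameters grow roughly linearly in N, the bound above decays as O(1/N²), the optimal rate for this class.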

#### 38 Citations

Optimizing the Efficiency of First-Order Methods for Decreasing the Gradient of Smooth Convex Functions

- Computer Science, Mathematics
- J. Optim. Theory Appl.
- 2021

This paper optimizes the step coefficients of first-order methods for smooth convex minimization in terms of the worst-case convergence bound of the decrease in the gradient norm, and illustrates that the proposed method has a computationally efficient form that is similar to the optimized gradient method.

Better Worst-Case Complexity Analysis of the Block Coordinate Descent Method for Large Scale Machine Learning

- Computer Science
- 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)
- 2017

A new lower bound on the information-based complexity of the BCD method is obtained that is 16p³ times smaller than the best previously known, using an effective technique called the Performance Estimation Problem (PEP) approach for analyzing the performance of first-order black-box optimization methods.

Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization

- Mathematics, Computer Science
- SIAM J. Optim.
- 2017

A new analytical worst-case guarantee is presented for the proximal point algorithm that is twice as strong as the previously known one, and the standard worst-case guarantee for the conditional gradient method is improved by more than a factor of two.

On the Properties of Convex Functions over Open Sets

- Mathematics
- 2018

We consider the class of smooth convex functions defined over an open convex set. We show that this class is essentially different from the class of smooth convex functions defined over the entire…

Efficient first-order methods for convex minimization: a constructive approach

- Mathematics, Computer Science
- Math. Program.
- 2020

We describe a novel constructive technique for devising efficient first-order methods for a wide range of large-scale convex minimization settings, including smooth, non-smooth, and strongly convex…

Exact Worst-Case Convergence Rates of the Proximal Gradient Method for Composite Convex Minimization

- Mathematics, Computer Science
- J. Optim. Theory Appl.
- 2018

We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function, whose proximal operator is…

Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm

- Computer Science
- ICML
- 2021

This work presents algorithms with accelerated O(1/k²) last-iterate rates on the squared gradient norm, faster than the existing O(1/k) or slower rates for the extragradient method, Popov's method, and gradient descent with anchoring, and establishes optimality of the O(1/k²) rate through a matching lower bound.

On the Convergence Analysis of the Optimized Gradient Method

- Mathematics, Medicine
- J. Optim. Theory Appl.
- 2017

This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function.

Convex interpolation and performance estimation of first-order methods for convex optimization

- Computer Science, Mathematics
- 2017

This thesis formulates a generic optimization problem that looks for the worst-case scenarios of first-order methods in convex optimization, and transforms these performance estimation problems (PEPs) into solvable finite-dimensional semidefinite programs, from which one obtains worst-case guarantees and worst-case functions, along with the corresponding explicit proofs.
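As a hedged illustration (notation mine, not quoted from the thesis), the performance estimation problem for a fixed-step first-order method on L-smooth convex functions can be written roughly as

```latex
\begin{aligned}
\max_{f,\; x_0,\dots,x_N}\quad & f(x_N) - f(x^\star) \\
\text{s.t.}\quad & f \text{ is convex and } L\text{-smooth}, \\
& x_{i+1} = x_i - \tfrac{h_i}{L}\, f'(x_i), \qquad i = 0,\dots,N-1, \\
& \|x_0 - x^\star\| \le R,
\end{aligned}
```

where replacing the infinite-dimensional constraint on f by interpolation conditions on the finitely many triples (x_i, f(x_i), f'(x_i)) is what yields the solvable semidefinite program mentioned above.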

Universal gradient descent

- Mathematics
- 2017

In this book we collect many different and useful facts about the gradient descent method. First of all, we consider gradient descent with an inexact oracle. We build a general model of the optimized function…

#### References

Showing 1–10 of 18 references

Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization

- Computer Science, Mathematics
- IEEE Transactions on Information Theory
- 2012

A new notion of discrepancy between functions is introduced, and used to reduce problems of stochastic convex optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods.

On lower complexity bounds for large-scale smooth convex optimization

- Computer Science, Mathematics
- J. Complex.
- 2015

We derive lower bounds on the black-box oracle complexity of large-scale smooth convex minimization problems, with emphasis on minimizing smooth (with Hölder continuous, with a given exponent and…

Performance of first-order methods for smooth convex minimization: a novel approach

- Computer Science, Mathematics
- Math. Program.
- 2014

A novel approach for analyzing the worst-case performance of first-order black-box optimization methods, which focuses on smooth unconstrained convex minimization over the Euclidean space and derives a new and tight analytical bound on its performance.

Information-based complexity

- Computer Science, Medicine
- Nature
- 1987

Information-based complexity seeks to develop general results about the intrinsic difficulty of solving problems where available information is partial or approximate and to apply these results to…

Information-Based Complexity, Feedback and Dynamics in Convex Programming

- Computer Science, Mathematics
- IEEE Transactions on Information Theory
- 2011

The present work connects the intuitive notions of “information” in optimization, experimental design, estimation, and active learning to the quantitative notion of Shannon information and shows that optimization algorithms often obey the law of diminishing returns.

Smooth strongly convex interpolation and exact worst-case performance of first-order methods

- Mathematics, Computer Science
- Math. Program.
- 2017

We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex…

An optimal variant of Kelley’s cutting-plane method

- Mathematics, Computer Science
- Math. Program.
- 2016

A new variant of Kelley’s cutting-plane method for minimizing a nonsmooth convex Lipschitz-continuous function over the Euclidean space is proposed, and it is proved that it attains the optimal rate of convergence for this class of problems.

Introductory Lectures on Convex Optimization - A Basic Course

- Computer Science
- Applied Optimization
- 2004

It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization, and it became more and more common that new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments.

Information-based complexity of linear operator equations

- Computer Science, Mathematics
- J. Complex.
- 1992

The problem is to evaluate the complexity of solving the equation to a given accuracy, i.e., given a class of instances, to determine the best possible upper bound on the number of oracle calls sufficient to find an ε-solution to each of the instances.

On Complexity of Stochastic Programming Problems

- Computer Science
- 2005

It is argued that two-stage (linear) stochastic programming problems with recourse can be solved with reasonable accuracy by using Monte Carlo sampling techniques, while multistage stochastic programs, in general, are intractable.