The exact information-based complexity of smooth convex minimization

@article{Drori2017TheEI,
  title={The exact information-based complexity of smooth convex minimization},
  author={Yoel Drori},
  journal={J. Complex.},
  year={2017},
  volume={39},
  pages={1-16}
}
  • Y. Drori
  • Published 2017
  • Mathematics, Computer Science
  • J. Complex.
We obtain a new lower bound on the information-based complexity of first-order minimization of smooth and convex functions. We show that the bound matches the worst-case performance of the recently introduced Optimized Gradient Method, thereby establishing that the bound is tight and can be realized by an efficient algorithm. The proof is based on a novel construction technique of smooth and convex functions. 
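As an illustrative aside (not part of the paper's text): the Optimized Gradient Method (OGM) of Kim and Fessler referred to in the abstract can be sketched as below; Drori's result shows that its worst-case guarantee f(x_N) - f(x*) <= L||x_0 - x*||^2 / (2 theta_N^2) equals the information-based complexity of the class of L-smooth convex functions, so no first-order method using N gradient evaluations can do better. The Python sketch assumes a gradient oracle grad_f and a known smoothness constant L; all names are illustrative.

```python
import numpy as np

def ogm(grad_f, L, x0, N):
    """Sketch of the Optimized Gradient Method (OGM) for an L-smooth convex f.

    grad_f : callable returning the gradient of f (the first-order oracle)
    L      : Lipschitz constant of the gradient (smoothness)
    x0     : starting point (numpy array)
    N      : number of gradient evaluations
    """
    # OGM step-size parameters theta_0, ..., theta_N; the final one is defined differently.
    theta = np.ones(N + 1)
    for i in range(1, N):
        theta[i] = (1.0 + np.sqrt(1.0 + 4.0 * theta[i - 1] ** 2)) / 2.0
    theta[N] = (1.0 + np.sqrt(1.0 + 8.0 * theta[N - 1] ** 2)) / 2.0

    x = x0.copy()  # primary sequence x_i
    y = x0.copy()  # auxiliary sequence y_i
    for i in range(N):
        y_next = x - grad_f(x) / L                                # plain gradient step
        x = (y_next
             + (theta[i] - 1.0) / theta[i + 1] * (y_next - y)     # Nesterov-type momentum
             + theta[i] / theta[i + 1] * (y_next - x))            # additional OGM correction term
        y = y_next
    # Tight worst case: f(x_N) - f(x*) <= L * ||x0 - x*||**2 / (2 * theta[N]**2).
    return x

# Toy usage on f(x) = 0.5 * ||A x - b||^2, with gradient A^T (A x - b) and L = lambda_max(A^T A):
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
L = np.linalg.eigvalsh(A.T @ A).max()
x_N = ogm(lambda x: A.T @ (A @ x - b), L, np.zeros(2), N=10)
```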
Optimizing the Efficiency of First-Order Methods for Decreasing the Gradient of Smooth Convex Functions
TLDR
This paper optimizes the step coefficients of first-order methods for smooth convex minimization in terms of the worst-case convergence bound of the decrease in the gradient norm, and illustrates that the proposed method has a computationally efficient form that is similar to the optimized gradient method.
Better Worst-Case Complexity Analysis of the Block Coordinate Descent Method for Large Scale Machine Learning
  • Ziqiang Shi, R. Liu
  • Computer Science
  • 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)
  • 2017
TLDR
A new lower bound is obtained, which is 16p^3 times smaller than the best known bound on the information-based complexity of the BCD method, by using an effective technique called the Performance Estimation Problem (PEP) approach for analyzing the performance of first-order black-box optimization methods.
Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
TLDR
A new analytical worst-case guarantee is presented for the proximal point algorithm that is twice better than the previously known one, and the standard worst-case guarantee for the conditional gradient method is improved by more than a factor of two.
On the Properties of Convex Functions over Open Sets
We consider the class of smooth convex functions defined over an open convex set. We show that this class is essentially different from the class of smooth convex functions defined over the entire space …
Efficient first-order methods for convex minimization: a constructive approach
We describe a novel constructive technique for devising efficient first-order methods for a wide range of large-scale convex minimization settings, including smooth, non-smooth, and strongly convex …
Exact Worst-Case Convergence Rates of the Proximal Gradient Method for Composite Convex Minimization
We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function, whose proximal operator is …
Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm
TLDR
This work presents algorithms with accelerated O(1/k^2) last-iterate rates on the squared gradient norm, faster than the existing O(1/k) or slower rates for extragradient, Popov, and gradient descent with anchoring, and establishes optimality of the O(1/k^2) rate through a matching lower bound.
On the Convergence Analysis of the Optimized Gradient Method
TLDR
This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function.
Convex interpolation and performance estimation of first-order methods for convex optimization
TLDR
This thesis formulates a generic optimization problem looking for the worst-case scenarios of first-order methods in convex optimization, and transforms PEPs into solvable finite-dimensional semidefinite programs, from which one obtains worst-case guarantees and worst-case functions, along with the corresponding explicit proofs (a generic PEP formulation is sketched after this list).
Universal gradient descent
In this book we collect many different and useful facts about the gradient descent method. First of all, we consider gradient descent with an inexact oracle. We build a general model of the optimized function …
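Several of the works above (and Drori and Teboulle's paper in the references below) build on the Performance Estimation Problem (PEP) approach. As a rough, hedged sketch of that formulation, with illustrative notation not taken from any single paper: for a fixed first-order method generating x_1, ..., x_N from x_0, one maximizes the final objective gap over all admissible problem instances,

\[
\max_{f,\; x_0,\; x_\star}\; f(x_N) - f(x_\star)
\quad\text{s.t.}\quad f \in \mathcal{F}_L,\;\; x_\star \in \arg\min f,\;\; \|x_0 - x_\star\| \le R,
\]

where $\mathcal{F}_L$ is the class of $L$-smooth convex functions and $x_1,\dots,x_N$ are produced by the method under study. Replacing $f$ by finitely many interpolation constraints on the values and gradients $(f(x_i), \nabla f(x_i))$ turns this infinite-dimensional problem into a finite-dimensional semidefinite program whose optimal value is the method's exact worst-case bound.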

References

SHOWING 1-10 OF 18 REFERENCES
Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
TLDR
A new notion of discrepancy between functions is introduced, and used to reduce problems of stochastic convex optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods.
On lower complexity bounds for large-scale smooth convex optimization
We derive lower bounds on the black-box oracle complexity of large-scale smooth convex minimization problems, with emphasis on minimizing smooth (with Hölder continuous, with a given exponent and …
Performance of first-order methods for smooth convex minimization: a novel approach
TLDR
A novel approach for analyzing the worst-case performance of first-order black-box optimization methods, which focuses on smooth unconstrained convex minimization over the Euclidean space and derives a new and tight analytical bound on its performance.
Information-based complexity
Information-based complexity seeks to develop general results about the intrinsic difficulty of solving problems where available information is partial or approximate and to apply these results to …
Information-Based Complexity, Feedback and Dynamics in Convex Programming
TLDR
The present work connects the intuitive notions of “information” in optimization, experimental design, estimation, and active learning to the quantitative notion of Shannon information and shows that optimization algorithms often obey the law of diminishing returns.
Smooth strongly convex interpolation and exact worst-case performance of first-order methods
We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex …
An optimal variant of Kelley’s cutting-plane method
TLDR
A new variant of Kelley’s cutting-plane method for minimizing a nonsmooth convex Lipschitz-continuous function over the Euclidean space is proposed, and it is proved that it attains the optimal rate of convergence for this class of problems.
Introductory Lectures on Convex Optimization - A Basic Course
TLDR
It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization, that it became more and more common for new methods to be provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments.
Information-based complexity of linear operator equations
TLDR
The problem is to evaluate the complexity of solving the equation to a given accuracy, i.e., given a class of instances, to point out the best possible upper bound on the number of oracle calls sufficient to find an ε-solution to each of the instances.
On Complexity of Stochastic Programming Problems
TLDR
It is argued that two-stage (linear) stochastic programming problems with recourse can be solved with reasonable accuracy by using Monte Carlo sampling techniques, while multistage stochastic programs, in general, are intractable.