Corpus ID: 238407711

Embedding a Heavy-Ball type of Momentum into the Estimating Sequences

@inproceedings{Dosti2020EmbeddingAH,
  title={Embedding a Heavy-Ball type of Momentum into the Estimating Sequences},
  author={Endrit Dosti and Sergiy A. Vorobyov and Themistoklis Charalambous},
  year={2020}
}
We present a new accelerated gradient-based method for solving smooth unconstrained optimization problems. The goal is to embed a heavy-ball type of momentum into the Fast Gradient Method (FGM). For this purpose, we construct a generalization of the estimating sequences, which allows for encoding any form of information about the cost function that can aid in further accelerating the minimization process. In the black box framework, we propose a construction for the generalized estimating… 
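As background for the abstract only, the sketch below contrasts the two momentum schemes it refers to: Polyak's heavy-ball update and Nesterov's Fast Gradient Method (FGM), applied to a strongly convex quadratic. This is standard textbook material under assumed step-size and momentum choices, not the generalized estimating-sequence construction proposed in the paper.

```python
import numpy as np

# Illustrative comparison of heavy-ball momentum and Nesterov's Fast Gradient
# Method (FGM) on a strongly convex quadratic f(x) = 0.5 x'Ax - b'x.
# Standard textbook recursions with assumed parameter choices; this is NOT the
# generalized estimating-sequence method proposed in the paper.

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q.T @ Q + np.eye(n)            # positive definite Hessian
b = rng.standard_normal(n)

grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()    # Lipschitz constant of the gradient
mu = np.linalg.eigvalsh(A).min()   # strong convexity parameter
x_star = np.linalg.solve(A, b)     # exact minimizer, for error reporting

def heavy_ball(x0, iters=200):
    # Polyak's heavy-ball: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1}),
    # with the classical parameter choices for quadratics.
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

def fgm(x0, iters=200):
    # Nesterov's FGM, constant-momentum variant for mu-strongly convex f:
    # the gradient is evaluated at an extrapolated point y.
    kappa = L / mu
    theta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = y - grad(y) / L
        y = x_next + theta * (x_next - x)
        x = x_next
    return x

x0 = np.zeros(n)
print("heavy-ball error:", np.linalg.norm(heavy_ball(x0) - x_star))
print("FGM error:       ", np.linalg.norm(fgm(x0) - x_star))
```

Both recursions inject momentum through the previous iterate; per the abstract, the paper instead encodes heavy-ball-type information inside a generalized estimating-sequence framework rather than through the plain updates above.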
1 Citation

A New Class of Composite Objective Multi-step Estimating-sequence Techniques (COMET)
Introduces a new class of estimating functions, obtained from a tight lower bound on the objective function, proposes an efficient line-search strategy for COMET, and proves that the method enjoys an accelerated convergence rate.

References

SHOWING 1-10 OF 61 REFERENCES
An accelerated gradient method for trace norm minimization
This paper exploits the special structure of the trace norm to propose an extended gradient algorithm that converges as O(1/k), and further develops an accelerated gradient algorithm that achieves the optimal convergence rate of O(1/k^2) for smooth problems.
Revisit of Estimate Sequence for Accelerated Gradient Methods
This paper considers the so-called estimate sequence (ES), a useful analysis tool for establishing the convergence of AGM, and develops a generalized ES that supports gradients that are Lipschitz continuous with respect to an arbitrary norm, given the importance of non-Euclidean norms in optimization.
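As a reminder of the machinery this entry (and the paper itself) builds on, Nesterov's standard estimate-sequence construction is recalled below in common textbook notation; the symbols φ_k, λ_k, α_k, and μ are assumed here, not quoted from the cited works.

```latex
% Standard estimate-sequence construction (textbook form); the symbols
% phi_k, lambda_k, alpha_k, mu are assumed notation, not quoted from the
% cited works.
% A pair of sequences {phi_k(x)}, {lambda_k >= 0} is an estimate sequence
% of f if lambda_k -> 0 and, for all x,
\[
  \phi_k(x) \;\le\; (1-\lambda_k)\, f(x) \;+\; \lambda_k\, \phi_0(x).
\]
% If the iterates satisfy f(x_k) <= min_x phi_k(x), then
\[
  f(x_k) - f^\star \;\le\; \lambda_k \big( \phi_0(x^\star) - f^\star \big) \;\longrightarrow\; 0.
\]
% A common recursive update of the estimating functions uses a lower model
% of f built at the point y_k:
\[
  \phi_{k+1}(x) \;=\; (1-\alpha_k)\,\phi_k(x)
  \;+\; \alpha_k \Big[\, f(y_k) + \langle \nabla f(y_k),\, x - y_k \rangle
  + \tfrac{\mu}{2}\, \|x - y_k\|^2 \,\Big].
\]
```

Any method that keeps f(x_k) below the minimum of φ_k inherits the rate at which λ_k vanishes; accelerated methods are obtained by driving λ_k to zero as O(1/k^2) or faster.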
Inexact accelerated high-order proximal-point methods
Proposes a new framework of Bi-Level Unconstrained Minimization (BLUM) for developing accelerated methods in convex programming, and presents new methods with an exact auxiliary search procedure that attain the convergence rate O(k^{−(3p+1)/2}), where p ≥ 1 is the order of the proximal operator.
A Generalized Accelerated Composite Gradient Method: Uniting Nesterov's Fast Gradient Method and FISTA
Proposes a Generalized Accelerated Composite Gradient Method that encompasses FGM and FISTA along with their most popular variants, shows that monotonicity has a stabilizing effect on convergence, and challenges the notion in the literature that, for strongly convex objectives, accelerated proximal schemes can be reduced to fixed-momentum methods.
Performance of first-order methods for smooth convex minimization: a novel approach
Presents a novel approach for analyzing the worst-case performance of first-order black-box optimization methods, focusing on smooth unconstrained convex minimization over Euclidean space, and derives new and tight analytical performance bounds.
An Accelerated Composite Gradient Method for Large-Scale Composite Objective Problems
Develops the augmented estimate sequence framework, a relaxation of the estimate sequence that yields a conceptually simple gap sequence, and uses it to construct the Accelerated Composite Gradient Method (ACGM), a versatile first-order scheme applicable to any composite problem.
An Estimate Sequence for Geodesically Convex Optimization
We propose a Riemannian version of Nesterov’s Accelerated Gradient algorithm (RAGD), and show that for geodesically smooth and strongly convex problems, within a neighborhood of the minimizer whose…
A variational perspective on accelerated methods in optimization
Proposes a variational, continuous-time framework for understanding accelerated methods and provides a systematic methodology for converting accelerated higher-order methods from continuous time to discrete time, illuminating a class of dynamics that may be useful for designing better optimization algorithms.
Gradient methods for minimizing composite functions
  • Y. Nesterov
  • Mathematics, Computer Science
    Math. Program.
  • 2013
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is…
Accelerating the cubic regularization of Newton’s method on convex problems
Presents an accelerated version of the cubic regularization of Newton’s method that converges for the same problem class with order O(1/k^3), keeping the complexity of each iteration unchanged, and argues that for second-order schemes the class of non-degenerate problems differs from the standard class.
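For context on the scheme being accelerated in this reference, the cubic-regularized Newton step in its standard form is recalled below; the notation (M as a bound on the Lipschitz constant of the Hessian) is assumed rather than quoted from the paper.

```latex
% Cubic-regularized Newton step (standard form); M bounds the Lipschitz
% constant of the Hessian. Notation assumed, not quoted from the reference.
\[
  x_{k+1} \in \arg\min_{y}\;
    \Big\{ \langle \nabla f(x_k),\, y - x_k \rangle
    + \tfrac{1}{2}\,\langle \nabla^2 f(x_k)(y - x_k),\, y - x_k \rangle
    + \tfrac{M}{6}\,\| y - x_k \|^3 \Big\}.
\]
```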
...