Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods

@article{Vanli2018GlobalCR,
  title={Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods},
  author={N. Denizcan Vanli and Mert G{\"u}rb{\"u}zbalaban and Asuman E. Ozdaglar},
  journal={SIAM J. Optim.},
  year={2018},
  volume={28},
  pages={1282--1300}
}
We focus on the problem of minimizing the sum of smooth component functions (where the sum is strongly convex) and a non-smooth convex function, which arises in regularized empirical risk minimization in machine learning and distributed constrained optimization in wireless sensor networks and smart grids. We consider solving this problem using the proximal incremental aggregated gradient (PIAG) method, which at each iteration moves along an aggregated gradient (formed by incrementally updating …
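As a rough illustration of the iteration just described, the sketch below maintains a table of the most recently computed component gradients and takes a proximal step along their incrementally updated sum. The interfaces (`grads`, `prox`, the cyclic refresh order, the step size) are assumptions made for this sketch, not the paper's notation or code.

```python
import numpy as np

def piag(grads, prox, x0, step, n_iters):
    """Illustrative PIAG sketch.

    grads : list of per-component gradient callables grad_i(x)
    prox  : proximal operator of the nonsmooth term, prox(v, step)
    """
    x = np.asarray(x0, dtype=float)
    n = len(grads)
    table = [g(x) for g in grads]       # stored (possibly stale) component gradients
    agg = sum(table)                    # aggregated gradient = sum of stored gradients
    for k in range(n_iters):
        i = k % n                       # refresh one component per iteration (cyclic)
        g_new = grads[i](x)
        agg = agg - table[i] + g_new    # incremental update of the aggregate
        table[i] = g_new
        x = prox(x - step * agg, step)  # proximal step along the aggregated gradient
    return x

# Toy usage: minimize 0.5*(x-1)^2 + 0.5*(x-2)^2 + 0.5*(x-3)^2 (no nonsmooth term),
# whose minimizer is the mean, x* = 2.
grads = [lambda x, a=a: x - a for a in (1.0, 2.0, 3.0)]
x_star = piag(grads, lambda v, t: v, np.array([0.0]), 0.05, 2000)
```

In practice the step size must be small relative to the Lipschitz constant of the sum and the maximum gradient staleness; the paper's analysis gives the precise admissible range.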
A Stronger Convergence Result on the Proximal Incremental Aggregated Gradient Method
We study the convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions (where the sum is strongly convex) …
General Proximal Incremental Aggregated Gradient Algorithms: Better and Novel Results under General Scheme
The incremental aggregated gradient algorithm is popular in network optimization and machine learning research. However, the current convergence results require the objective function to be strongly …
The novel results presented in this paper, which have not appeared in the previous literature, include: a general scheme, nonconvex analysis, sublinear convergence rates of the function values, much larger stepsizes that guarantee convergence, and convergence when noise exists.
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
A unified algorithmic framework is proposed for minimizing the sum of smooth convex component functions and a proper closed convex regularization function that is possibly non-smooth and extended-valued, with an additional abstract feasible set whose geometry can be captured by the domain of a Legendre function.
Proximal-like incremental aggregated gradient method with Bregman distance in weakly convex optimization problems
  • Zehui Jia, Jieru Huang, Xingju Cai
  • Computer Science, Mathematics
  • J. Glob. Optim.
  • 2021
It is proved that any limit point of the sequence generated by the PLIAG method is a critical point of the weakly convex problem; that is, the generated sequence converges globally to a critical point of the problem.
A Simple Proof for the Iteration Complexity of the Proximal Gradient Algorithm
We study the problem of minimizing the sum of a smooth strongly convex function and a non-smooth convex function. We consider solving this problem using the proximal gradient (PG) method, which at …
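As a concrete illustration of the PG iteration x_{k+1} = prox_{γg}(x_k − γ∇f(x_k)) studied in that paper, here is a minimal sketch (not the authors' code); the soft-thresholding operator, the standard proximal map of a scaled ℓ1 norm, serves as an assumed example of the nonsmooth term g.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step, n_iters):
    """Sketch of the proximal gradient iteration:
    x_{k+1} = prox_{step*g}(x_k - step * grad_f(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy usage: minimize 0.5*(x-1)^2 + 0.3*|x|, whose minimizer is x* = 0.7.
grad_f = lambda x: x - 1.0
prox = lambda v, t: soft_threshold(v, 0.3 * t)
x_star = proximal_gradient(grad_f, prox, np.array([0.0]), 0.5, 200)
```

With f smooth and strongly convex, a fixed step below 1/L (L the Lipschitz constant of ∇f) yields the linear rate analyzed in that line of work.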
Linear Convergence of the Proximal Incremental Aggregated Gradient Method under Quadratic Growth Condition
Under the strong convexity assumption, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large …
Nonconvex Proximal Incremental Aggregated Gradient Method with Linear Convergence
This paper shows that the generated iterative sequence globally converges to the set of stationary points, and gives an explicitly computable stepsize threshold guaranteeing that both the objective values and the iterates are R-linearly convergent.
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
A Double Incremental Aggregated Gradient method is proposed that computes the gradient of only one function at each iteration, chosen according to a cyclic scheme, and uses the aggregated average gradient of all the functions to approximate the full gradient.

References

Showing 1–10 of 28 references
On the Convergence Rate of Incremental Aggregated Gradient Algorithms
It is shown that this deterministic incremental aggregated gradient method achieves global linear convergence, and the convergence rate is characterized; an aggregated method with momentum is also considered and its linear convergence demonstrated.
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
  • J. Mairal
  • Computer Science, Mathematics
  • SIAM J. Optim.
  • 2015
This work proposes an incremental majorization-minimization scheme for minimizing a large sum of continuous functions, a problem of utmost importance in machine learning, and presents convergence guarantees for nonconvex and convex optimization when the upper bounds approximate the objective up to a smooth error.
Incremental Aggregated Proximal and Augmented Lagrangian Algorithms
Dual versions of incremental proximal algorithms are considered, which are incremental augmented Lagrangian methods for separable equality-constrained optimization problems, and a closely related linearly convergent method for minimization of large differentiable sums subject to an orthant constraint is proposed.
A delayed proximal gradient method with linear convergence rate
This paper derives an explicit expression that quantifies how the convergence rate depends on objective function properties and algorithm parameters such as step-size and the maximum delay, and reveals the trade-off between convergence speed and residual error.
A Convergent Incremental Gradient Method with a Constant Step Size
An incremental aggregated gradient method is proposed for minimizing a sum of continuously differentiable functions, and it is shown that the method visits infinitely often regions in which the gradient is small.
Incremental subgradient methods for nondifferentiable optimization
  • A. Geary, D. Bertsekas
  • Mathematics
  • Proceedings of the 38th IEEE Conference on Decision and Control (Cat. No.99CH36304)
  • 1999
We propose a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from …
Incremental Gradient Algorithms with Stepsizes Bounded Away from Zero
  • M. Solodov
  • Mathematics, Computer Science
  • Comput. Optim. Appl.
  • 1998
The first convergence results of any kind for this computationally important case are derived; it is shown that a certain ε-approximate solution can be obtained, and the linear dependence of ε on the stepsize limit is established.
Incrementally Updated Gradient Methods for Constrained and Regularized Optimization
  • P. Tseng, S. Yun
  • Mathematics, Computer Science
  • J. Optim. Theory Appl.
  • 2014
Every cluster point of the iterates generated by the method is a stationary point; if in addition a local Lipschitz error bound assumption holds, then the method is linearly convergent.
A coordinate gradient descent method for nonsmooth separable minimization
A (block) coordinate gradient descent method is proposed for solving this class of nonsmooth separable problems; global convergence is established and, under a local Lipschitzian error bound assumption, linear convergence.
Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey
A unified algorithmic framework is introduced for incremental methods for minimizing a sum ∑_{i=1}^m f_i(x) consisting of a large number of convex component functions f_i, including the advantages offered by randomization in the selection of components.