A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets

@inproceedings{Roux2012ASG,
  title={A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets},
  author={Nicolas Le Roux and Mark W. Schmidt and Francis R. Bach},
  booktitle={NIPS},
  year={2012}
}
We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.
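
The "memory of previous gradient values" idea in the abstract is the core of the stochastic average gradient (SAG) update: store the most recently computed gradient of each term f_i, refresh one stored gradient per iteration, and step along the average of all n stored gradients. Below is a minimal sketch under stated assumptions, not the authors' exact implementation: the L2-regularized logistic-regression objective (a strongly convex finite sum with y in {-1, +1}), the constant step size of the form 1/(16L), and the function name sag_logistic are all illustrative choices.

import numpy as np

def sag_logistic(X, y, lam=0.1, n_iters=10000, seed=0):
    """Sketch of a SAG-style method for L2-regularized logistic regression.

    Keeps one stored gradient per training example and, at every iteration,
    refreshes one of them and steps along the average of all stored gradients.
    Assumes labels y are in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grad_memory = np.zeros((n, d))  # last computed gradient of each f_i
    grad_sum = np.zeros(d)          # running sum of the stored gradients
    # Constant step size: L bounds the Lipschitz constant of each gradient
    # (||x_i||^2 / 4 + lam for logistic loss); 1/(16 L) is an illustrative
    # conservative choice of the form analyzed for SAG.
    L = 0.25 * np.max(np.sum(X ** 2, axis=1)) + lam
    step = 1.0 / (16.0 * L)
    for _ in range(n_iters):
        i = rng.integers(n)
        # Fresh gradient of the i-th term: logistic loss plus L2 penalty.
        sigma = 1.0 / (1.0 + np.exp(-y[i] * (X[i] @ w)))
        g_new = -(1.0 - sigma) * y[i] * X[i] + lam * w
        # Swap the stored gradient for example i and update the running sum,
        # so the average over all n terms stays available in O(d) per step.
        grad_sum += g_new - grad_memory[i]
        grad_memory[i] = g_new
        # Step along the average of the stored gradients.
        w -= step * grad_sum / n
    return w

For linear models such as this one, each stored gradient is a scalar multiple of x_i, so the O(n x d) gradient table can be compressed to n scalars; that structure is what keeps the memory overhead of the method modest in practice.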
Per Semantic Scholar, this paper has 403 citations and has highly influenced 52 other papers.
