SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives


In this work we introduce a new optimisation method called SAGA, in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and supports composite objectives where a proximal operator is applied to the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
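To make the incremental-gradient idea concrete, here is a minimal sketch of a SAGA-style update for a plain least-squares objective, omitting the proximal step for a regulariser. The function name `saga`, the data, and the step-size choice are illustrative assumptions, not part of the paper; the core mechanism — a table of the most recently seen per-example gradients whose average corrects each fresh stochastic gradient — follows the description above.

```python
import numpy as np

def saga(A, b, step, n_iters, seed=0):
    """SAGA-style sketch for (1/n) * sum_i 0.5 * (a_i . w - b_i)^2.

    Maintains a table of the last gradient seen for each example; each
    update combines a fresh per-example gradient with the table average,
    giving a variance-reduced step with a constant step size.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    grad_table = np.zeros((n, d))       # grad_table[i]: last gradient for example i
    table_avg = grad_table.mean(axis=0)
    for _ in range(n_iters):
        j = rng.integers(n)
        g_new = (A[j] @ w - b[j]) * A[j]            # fresh gradient for example j
        w = w - step * (g_new - grad_table[j] + table_avg)
        table_avg += (g_new - grad_table[j]) / n    # O(d) running-average update
        grad_table[j] = g_new
    return w
```

For the composite setting described in the abstract, the update `w - step * (...)` would be wrapped in the proximal operator of the regulariser. A step size on the order of `1/(3*L)`, with `L` the largest per-example smoothness constant, is a common choice for methods of this family.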
