Katyusha: The First Direct Acceleration of Stochastic Gradient Methods

@article{AllenZhu2017KatyushaTF,
  title={Katyusha: The First Direct Acceleration of Stochastic Gradient Methods},
  author={Zeyuan Allen-Zhu},
  journal={Journal of Machine Learning Research},
  year={2017},
  volume={18},
  pages={221:1-221:51}
}
Nesterov’s momentum trick is famously known for accelerating gradient descent, and has been proven useful in building fast iterative algorithms. However, in the stochastic setting, counterexamples exist and prevent Nesterov’s momentum from providing similar acceleration, even if the underlying problem is convex and finite-sum. We introduce Katyusha, a direct, primal-only stochastic gradient method to fix this issue. In convex finite-sum stochastic optimization, Katyusha has an optimal…
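
As a rough illustration of the method described in the abstract, the sketch below implements a Katyusha-style loop for the smooth, strongly convex finite-sum setting: an SVRG-style variance-reduced gradient estimator combined with a three-point coupling in which a "negative momentum" term pulls each iterate back toward the last snapshot. The helper names (grad_i, full_grad), the inner-loop length, the momentum weights, the step sizes, and the plain snapshot average are illustrative assumptions, not the paper's exact Algorithm 1; consult the paper for the precise parameter settings and guarantees.

import numpy as np

def katyusha_sketch(grad_i, full_grad, x0, n, L, sigma, epochs=20, m=None):
    """Minimal sketch of a Katyusha-style update for min_x (1/n) sum_i f_i(x),
    each f_i assumed L-smooth and f sigma-strongly convex (smooth case, no proximal term).

    grad_i(i, x)  -- gradient of the i-th component f_i at x (assumed callable)
    full_grad(x)  -- full gradient of f at x, computed once per epoch for the snapshot
    Parameter choices below are illustrative simplifications, not the paper's exact ones.
    """
    m = m if m is not None else 2 * n            # inner-loop (epoch) length
    tau2 = 0.5                                   # weight of the snapshot term ("negative momentum")
    tau1 = min(0.5, 0.5 * np.sqrt(n * sigma / L))  # Nesterov-style momentum weight (illustrative)
    alpha = 1.0 / (3 * tau1 * L)                 # step size for the z-sequence
    x_tilde = x0.copy()                          # snapshot point
    y = x0.copy()                                # gradient-descent-like iterate
    z = x0.copy()                                # mirror-descent-like iterate

    for _ in range(epochs):
        mu = full_grad(x_tilde)                  # snapshot gradient for variance reduction
        y_sum = np.zeros_like(x0)
        for _ in range(m):
            # Three-point coupling: the tau2 * x_tilde term pulls x back toward the snapshot.
            x = tau1 * z + tau2 * x_tilde + (1 - tau1 - tau2) * y
            i = np.random.randint(n)
            g = mu + grad_i(i, x) - grad_i(i, x_tilde)   # SVRG-style variance-reduced estimator
            z = z - alpha * g                    # mirror-descent-like step
            y = x - g / (3 * L)                  # gradient-descent-like step
            y_sum += y
        x_tilde = y_sum / m                      # new snapshot (plain average; the paper weights it)
    return x_tilde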
