Corpus ID: 59316619

Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization

@inproceedings{Zhuang2019SurrogateLF,
  title={Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization},
  author={Zhenxun Zhuang and Ashok Cutkosky and Francesco Orabona},
  booktitle={ICML},
  year={2019}
}
  • Published in ICML 2019
  • Computer Science, Mathematics
  • Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. They have proved effective at reducing the labor of tuning in practice, but many of them lack theoretical guarantees even in the convex setting. In this paper, we… (see the illustrative stepsize sketch following this entry)
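
The abstract describes the practical difficulty of hand-picking an SGD stepsize and the rise of adaptive gradient methods. The sketch below is only a generic illustration of that contrast, not the surrogate-loss method proposed in this paper: it runs SGD with a few hand-picked fixed stepsizes and an AdaGrad-style per-coordinate adaptive stepsize on a toy noisy quadratic. The toy objective, function names, and stepsize values are illustrative assumptions.

    # Illustrative only: not the surrogate-loss method of Zhuang, Cutkosky, and
    # Orabona (2019). This toy script contrasts SGD with a hand-picked fixed
    # stepsize against an AdaGrad-style per-coordinate adaptive stepsize on a
    # noisy quadratic; the objective, names, and stepsizes are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10
    # Ill-conditioned quadratic f(x) = 0.5 * x^T A x with curvatures from 1 to 10.
    A = np.diag(np.linspace(1.0, 10.0, dim))

    def stochastic_grad(x):
        # Exact gradient of the quadratic plus zero-mean Gaussian noise.
        return A @ x + 0.1 * rng.standard_normal(dim)

    def final_loss(x):
        return 0.5 * x @ A @ x

    def run_sgd(stepsize, steps=200):
        x = np.ones(dim)
        for _ in range(steps):
            x -= stepsize * stochastic_grad(x)
        return final_loss(x)

    def run_adagrad(stepsize=1.0, steps=200, eps=1e-8):
        x = np.ones(dim)
        grad_sq_sum = np.zeros(dim)
        for _ in range(steps):
            g = stochastic_grad(x)
            grad_sq_sum += g ** 2
            # Per-coordinate stepsize shrinks as squared gradients accumulate.
            x -= stepsize * g / (np.sqrt(grad_sq_sum) + eps)
        return final_loss(x)

    if __name__ == "__main__":
        # A fixed stepsize that is too large is unstable; too small is slow.
        for eta in (0.3, 0.05, 0.005):
            print(f"SGD with stepsize {eta:>5}: final loss {run_sgd(eta):.3g}")
        print(f"AdaGrad-style adaptive stepsize: final loss {run_adagrad():.3g}")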
