ADASECANT: Robust Adaptive Secant Method for Stochastic Gradient


Stochastic gradient algorithms have been the main approach to large-scale learning problems, and they have led to important successes in machine learning. The convergence of SGD depends on a careful choice of learning rate and on the amount of noise in the stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm that uses curvature information to automatically tune the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first-order gradients. We further propose a new variance reduction technique to speed up convergence. In our preliminary experiments with deep neural networks, we obtained better performance compared to popular stochastic gradient algorithms.
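To make the high-level idea concrete, the sketch below is a minimal illustration (not the authors' exact update rule) of a per-parameter secant-style adaptive learning rate: each coordinate's step size is the ratio of running statistics of recent parameter changes to recent gradient changes, a finite-difference approximation of the inverse diagonal curvature. The function names, decay constant, base learning rate, and epsilon term are all assumptions for the example.

```python
import numpy as np

def adaptive_secant_sketch(grad_fn, x0, n_steps=1000, base_lr=0.01,
                           decay=0.95, eps=1e-8):
    """Illustrative sketch only: per-parameter learning rates from a
    secant (finite-difference) curvature estimate built out of the local
    statistics of stochastic gradients. Not the paper's exact algorithm.

    grad_fn(x) should return a stochastic gradient evaluated at x.
    """
    x = np.asarray(x0, dtype=float).copy()
    g_prev = grad_fn(x)
    # Running statistics of parameter changes and gradient changes.
    mean_dx = np.zeros_like(x)
    mean_dg = np.full_like(x, eps)

    for _ in range(n_steps):
        # Element-wise secant step: E[|dx|] / E[|dg|] approximates the
        # inverse of the diagonal curvature along each coordinate.
        lr = np.where(mean_dg > eps, np.abs(mean_dx) / mean_dg, base_lr)
        g = grad_fn(x)
        dx = -lr * g
        x = x + dx
        dg = g - g_prev
        g_prev = g
        # Exponential moving averages of the local statistics.
        mean_dx = decay * mean_dx + (1 - decay) * dx
        mean_dg = decay * mean_dg + (1 - decay) * np.abs(dg)
    return x

# Example usage: a noisy ill-conditioned quadratic, where per-coordinate
# step sizes matter because the curvature differs across dimensions.
A = np.diag([1.0, 10.0, 100.0])
noisy_grad = lambda x: A @ x + 0.1 * np.random.randn(3)
print(adaptive_secant_sketch(noisy_grad, np.ones(3)))
```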

Cite this paper

@article{Glehre2014ADASECANTRA,
  title={ADASECANT: Robust Adaptive Secant Method for Stochastic Gradient},
  author={Çaglar G{\"{u}}lçehre and Yoshua Bengio},
  journal={CoRR},
  year={2014},
  volume={abs/1412.7419}
}