- Published 1999

Gain adaptation algorithms for neural networks typically adjust learning rates by monitoring the correlation between successive gradients. Here we discuss the limitations of this approach, and develop an alternative by extending Sutton’s work on linear systems to the general, nonlinear case. The resulting online algorithms are computationally little more expensive than other acceleration techniques, do not assume statistical independence between successive training patterns, and do not require an arbitrary smoothing parameter. In our benchmark experiments, they consistently outperform other acceleration methods, and show remarkable robustness when faced with non-i.i.d. sampling of the input space.
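
As a rough illustration of the family of methods the abstract refers to, the sketch below shows per-parameter gain adaptation driven by the sign agreement of successive gradients (in the spirit of delta-bar-delta-style schemes). The function name, the multiplicative up/down factors, and the toy quadratic are illustrative assumptions only; this is not the SMD-style algorithm developed in the paper.

```python
import numpy as np

def sgd_with_local_gains(grad_fn, w, num_steps, eta0=0.01, up=1.05, down=0.7):
    """Toy per-parameter gain adaptation.

    Each weight gets its own learning rate ("gain"), which is increased
    when successive gradients agree in sign and decreased when they
    disagree. This is a generic sketch of the approach the abstract
    describes, not the algorithm proposed in the paper.
    """
    gains = np.full_like(w, eta0)        # one learning rate per weight
    prev_grad = np.zeros_like(w)
    for _ in range(num_steps):
        g = grad_fn(w)
        agree = g * prev_grad > 0        # successive gradients point the same way
        gains = np.where(agree, gains * up, gains * down)
        w = w - gains * g                # per-parameter gradient step
        prev_grad = g
    return w

# Usage (hypothetical toy problem): minimise f(w) = 0.5 * w^T A w,
# where the two coordinates have very different curvatures.
A = np.diag([1.0, 10.0])
w_final = sgd_with_local_gains(lambda w: A @ w, np.array([1.0, 1.0]), 200)
```

The per-coordinate gains let the poorly scaled direction take larger effective steps than a single global learning rate would allow, which is the motivation for local gain adaptation in the first place.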

@inproceedings{Schraudolph1999LocalGA,
title={Local Gain Adaptation in Stochastic Gradient Descent},
author={Nicol N. Schraudolph},
year={1999}
}