Online Local Gain Adaptation for Multi-Layer Perceptrons

Nicol N. Schraudolph
We introduce a new method for adapting the step size of each individual weight in a multi-layer perceptron trained by stochastic gradient descent. Our technique derives from the K1 algorithm for linear systems (Sutton, 1992b), which in turn is based on a diagonalized Kalman filter. We expand upon Sutton's work in two regards: K1 is a) extended to multi-layer perceptrons, and b) made more efficient by linearizing an exponentiation operation. The resulting elk1 (extended, linearized K1) algorithm…
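The core idea the abstract describes, maintaining a separate adaptive gain (step size) for each weight and updating it multiplicatively with the exponential replaced by a cheap linear approximation, can be sketched as follows. This is an illustrative sketch in the spirit of Sutton's K1/IDBD family on a linear problem, not the paper's elk1 algorithm; the meta-rate mu, the floor rho, and the initial gains are assumed values chosen for the demo.

```python
import numpy as np

# Sketch of per-weight gain adaptation (K1/IDBD-style), with the
# exponentiation exp(z) linearized to max(rho, 1 + z).  NOT the elk1
# algorithm from the paper -- just the general idea on a linear system.

rng = np.random.default_rng(0)
n = 5
w_true = rng.normal(size=n)      # target weights of the linear system

w = np.zeros(n)                  # model weights
p = np.full(n, 0.05)             # per-weight gains (local step sizes), assumed init
h = np.zeros(n)                  # trace correlating successive gradients
mu = 0.01                        # meta learning rate (assumed)
rho = 0.1                        # floor of the linearized exponential (assumed)

errs = []
for t in range(2000):
    x = rng.normal(size=n)       # input sample
    delta = w_true @ x - w @ x   # prediction error
    # multiplicative gain update; exp(mu*delta*x*h) ~ max(rho, 1 + mu*delta*x*h)
    p *= np.maximum(rho, 1.0 + mu * delta * x * h)
    # stochastic gradient step with per-weight gains
    w += p * delta * x
    # decay the trace where the gain already moved the weight, then accumulate
    h = h * np.clip(1.0 - p * x * x, 0.0, None) + p * delta * x
    errs.append(delta ** 2)

# squared error should drop sharply as weights and gains adapt
print(np.mean(errs[:100]), np.mean(errs[-100:]))
```

The linearization is the efficiency point mentioned in the abstract: `max(rho, 1 + z)` avoids an `exp` per weight per step while keeping the gain positive, at the cost of accuracy for large `|z|`.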


