Improving the learning rate of back-propagation with the gradient reuse algorithm

@inproceedings{hush1988gra,
  title={Improving the learning rate of back-propagation with the gradient reuse algorithm},
  author={D. R. Hush and John M. Salas},
  booktitle={IEEE 1988 International Conference on Neural Networks},
  pages={441--447 vol.1},
  year={1988}
}
A simple method for improving the learning rate of the backpropagation algorithm is described and analyzed. The method is referred to as the gradient reuse algorithm (GRA). The basic idea is that gradients which are computed using backpropagation are reused several times until the resulting weight updates no longer lead to a reduction in error. It is shown that convergence speedup is a function of the reuse rate, and that the reuse rate can be controlled by using a dynamic convergence…
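The reuse loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the fixed learning rate, and the quadratic toy problem are assumptions made for the example.

```python
import numpy as np

def gra_minimize(grad_fn, loss_fn, w, lr=0.1, max_iters=100):
    """Gradient reuse sketch: each backprop gradient is reused for
    repeated weight updates until the error stops decreasing."""
    for _ in range(max_iters):
        g = grad_fn(w)            # one (relatively expensive) gradient computation
        reused = 0
        while True:               # reuse the same gradient...
            w_new = w - lr * g
            if loss_fn(w_new) >= loss_fn(w):
                break             # ...until the update no longer reduces error
            w = w_new
            reused += 1
        if reused == 0 and np.linalg.norm(g) < 1e-8:
            break                 # converged: gradient vanishes, no useful step
    return w

# Toy quadratic problem (illustrative): loss(w) = ||w||^2, gradient = 2w
w_opt = gra_minimize(lambda w: 2 * w, lambda w: float(w @ w),
                     np.array([3.0, -4.0]))
```

The speedup the paper analyzes comes from the inner `while` loop: each extra reuse is a weight update obtained without a fresh backpropagation pass.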


Publications citing this paper: 23 extracted citations.


Publications referenced by this paper (5 references):

"Optimal Algorithms for Adaptive Networks: Second Order Back Propagation, Second Order Direct Propagation, and Second Order Hebbian Learning"

  • D. B. Parker
  • Proc. IEEE 1st International Conference on Neural…
  • 1987

"Accelerated Learning Using the Generalized Delta Rule"

  • E. D. Dahl

"Learning in Networks Is Hard"

  • S. Judd
