Gradient calculations for dynamic recurrent neural networks: a survey

Abstract

Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The author discusses fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and non-fixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, continues with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks, presents some simulations, and finally addresses issues of computational complexity and learning speed.
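As an illustration of one of the gradient techniques the survey covers, the sketch below shows backpropagation through time for a simple discrete-time recurrent network with a squared-error loss on the hidden state. This is a minimal, hedged example, not the paper's own continuous-time formulation or notation; the function names (`rnn_forward`, `bptt_gradients`), the tanh transition, and the loss are assumptions made for the illustration.

```python
# Minimal sketch of backpropagation through time (BPTT) for a discrete-time
# recurrent network h_t = tanh(W h_{t-1} + U x_t + b). All names and the
# squared-error loss are hypothetical choices for this example, not the
# survey's notation.
import numpy as np

def rnn_forward(x_seq, h0, W, U, b):
    """Run the recurrence forward, returning all hidden states [h_0, ..., h_T]."""
    hs = [h0]
    for x_t in x_seq:
        hs.append(np.tanh(W @ hs[-1] + U @ x_t + b))
    return hs

def bptt_gradients(x_seq, hs, targets, W, U, b):
    """Accumulate dE/dW, dE/dU, dE/db for E = 0.5 * sum_t ||h_t - target_t||^2
    by propagating the error backwards through time."""
    dW, dU, db = np.zeros_like(W), np.zeros_like(U), np.zeros_like(b)
    dh_next = np.zeros_like(hs[0])           # gradient arriving from future time steps
    for t in reversed(range(len(x_seq))):
        h_t, h_prev = hs[t + 1], hs[t]
        dh = (h_t - targets[t]) + dh_next    # injected error plus future contribution
        da = dh * (1.0 - h_t ** 2)           # backprop through the tanh nonlinearity
        dW += np.outer(da, h_prev)
        dU += np.outer(da, x_seq[t])
        db += da
        dh_next = W.T @ da                   # pass gradient to the previous time step
    return dW, dU, db

# Example usage with random data (hidden size 3, input size 2, 4 time steps).
rng = np.random.default_rng(0)
W, U, b = 0.1 * rng.normal(size=(3, 3)), 0.1 * rng.normal(size=(3, 2)), np.zeros(3)
xs = [rng.normal(size=2) for _ in range(4)]
targets = [rng.normal(size=3) for _ in range(4)]
hs = rnn_forward(xs, np.zeros(3), W, U, b)
dW, dU, db = bptt_gradients(xs, hs, targets, W, U, b)
```

The reverse-time loop is the discrete analogue of integrating an adjoint equation backwards from the end of the trajectory, which is the perspective the survey uses to unify the continuous-time algorithms.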

DOI: 10.1109/72.410363


673 Citations

Semantic Scholar estimates that this publication has 673 citations based on the available data.


Cite this paper

@article{Pearlmutter1995GradientCF, title={Gradient calculations for dynamic recurrent neural networks: a survey}, author={Barak A. Pearlmutter}, journal={IEEE Transactions on Neural Networks}, year={1995}, volume={6}, number={5}, pages={1212-1228} }