Learning state space trajectories in recurrent neural networks

Abstract

A number of procedures are described for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and the w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E, so these procedures form the kernels of connectionist learning algorithms. Simulations in which networks are taught to move through limit cycles are shown, along with some empirical perturbation sensitivity tests. The author describes a number of elaborations of the basic idea, including mutable time delays and teacher forcing, and includes a complexity analysis of the various learning procedures discussed. Temporally continuous recurrent networks seem particularly suited to temporally continuous domains, such as signal processing, control, and speech.

DOI: 10.1162/neco.1989.1.2.263


788 Citations

Semantic Scholar estimates that this publication has 788 citations based on the available data.


Cite this paper

@article{Pearlmutter1989LearningSS,
  title={Learning state space trajectories in recurrent neural networks},
  author={Barak A. Pearlmutter},
  journal={International Joint Conference on Neural Networks},
  year={1989},
  pages={365-372 vol.2}
}