A Fixed Size Storage O(n³) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks

@article{Schmidhuber1992AFS,
  title={A Fixed Size Storage {$O(n^3)$} Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks},
  author={J{\"u}rgen Schmidhuber},
  journal={Neural Computation},
  year={1992},
  volume={4},
  pages={243-248}
}
  • J. Schmidhuber
  • Published 1 March 1992
  • Computer Science
  • Neural Computation
The real-time recurrent learning (RTRL) algorithm (Robinson and Fallside 1987; Williams and Zipser 1989) requires O(n⁴) computations per time step, where n is the number of noninput units. I describe a method suited for on-line learning that computes exactly the same gradient and requires fixed-size storage of the same order but has an average time complexity per time step of O(n³).
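For concreteness, the sketch below spells out the standard RTRL sensitivity recursion that the abstract refers to, with the O(n⁴) term marked. It is a minimal NumPy illustration: the network size, tanh nonlinearity, and all variable names are assumptions chosen for the example, and it implements plain RTRL, not the paper's O(n³) reformulation.

```python
# Minimal NumPy sketch of the standard RTRL sensitivity recursion; sizes,
# the tanh nonlinearity, and variable names are illustrative assumptions.
import numpy as np

n = 8                                    # number of noninput units (illustrative)
rng = np.random.default_rng(0)

W = 0.1 * rng.standard_normal((n, n))    # recurrent weight matrix
y = np.zeros(n)                          # unit activations y_k(t)
p = np.zeros((n, n, n))                  # p[k, i, j] = dy_k/dw_ij  -- O(n^3) storage


def rtrl_step(x_ext, e):
    """One forward step, the RTRL sensitivity update, and the resulting gradient.

    x_ext: external input to each unit's net input, shape (n,)
    e:     error signal dE/dy_k at the new time step, shape (n,)
    """
    global y, p
    net = W @ y + x_ext
    y_new = np.tanh(net)
    fprime = 1.0 - y_new ** 2            # tanh'(net)

    # p_new[k,i,j] = f'(net_k) * ( sum_l W[k,l] * p[l,i,j] + delta(k,i) * y[j] )
    # The contraction over l touches all n^3 sensitivities n times each:
    # this is the O(n^4) per-step cost mentioned in the abstract.
    p_new = np.tensordot(W, p, axes=([1], [0]))
    p_new[np.arange(n), np.arange(n), :] += y
    p_new *= fprime[:, None, None]

    grad = np.tensordot(e, p_new, axes=([0], [0]))   # dE/dw_ij = sum_k e_k * p[k,i,j]
    y, p = y_new, p_new
    return y, grad
```

The paper's algorithm computes exactly this gradient with fixed-size storage of the same O(n³) order while reducing the average time per step to O(n³).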
TRTRL: A Localized Resource-Efficient Learning Algorithm for Recurrent Neural Networks
  • D. Budik, I. Elhanany
  • Computer Science
    2006 49th IEEE International Midwest Symposium on Circuits and Systems
  • 2006
TLDR
An efficient, low-complexity online learning algorithm for recurrent neural networks based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to weights associated with either its input or output links.
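As a rough illustration of why restricting each neuron's sensitivity set to its own input and output links helps, the count below compares the number of tracked sensitivities against full RTRL. The network size is an arbitrary assumption, and this is only a bookkeeping estimate, not the TRTRL update rule.

```python
# Back-of-the-envelope count, not the TRTRL recursion: full RTRL tracks a
# sensitivity for every (unit, weight) pair, whereas keeping only weights on a
# neuron's own input or output links leaves O(n^2) entries.
n = 100                              # noninput units (arbitrary illustrative size)

full_rtrl = n * n * n                # p[k, i, j] for every unit k and weight (i, j)
restricted = n * 2 * n               # per unit: ~n incoming plus ~n outgoing weights

print(f"full RTRL sensitivities:       {full_rtrl:,}")   # 1,000,000
print(f"input/output-restricted count: {restricted:,}")  # 20,000
```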
Fast and Scalable Recurrent Neural Network Learning based on Stochastic Meta-Descent
This paper presents an efficient and scalable online learning algorithm for recurrent neural networks (RNNs). The approach is based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to weights associated with either its input or output links.
Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks
TLDR
This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: the first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly.
A Fast and Scalable Recurrent Neural Network Based on Stochastic Meta Descent
TLDR
An efficient and scalable online learning algorithm for recurrent neural networks (RNNs) based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to weights associated with either its input or output links.
Locally Connected Recurrent Networks
TLDR
Both tasks show that the RRN needs a much shorter training time and that its performance is comparable to that of the FRN.
Reducing the Ratio Between Learning Complexity and Number of Time Varying Variables in Fully Recurrent Nets
Let m be the number of time-varying variables for storing temporal events in a fully recurrent sequence processing network. Let Rtime be the ratio between the number of operations per time step (for
Learning Complex, Extended Sequences Using the Principle of History Compression
TLDR
A simple principle for reducing the descriptions of event sequences without loss of information is introduced and this insight leads to the construction of neural architectures that learn to divide and conquer by recursively decomposing sequences.
Backpropagation-decorrelation: online recurrent learning with O(N) complexity
  • J. Steil
  • Computer Science
    2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
  • 2004
TLDR
A new learning rule for fully recurrent neural networks is introduced which combines important principles: one-step backpropagation of errors and the usage of temporal memory in the network dynamics by means of decorrelation of activations.
An analog VLSI recurrent neural network learning a continuous-time trajectory
TLDR
This work presents an alternative implementation in analog VLSI, which employs a stochastic perturbation algorithm to observe the gradient of the error index directly on the network in random directions of the parameter space, thereby avoiding the tedious task of deriving the gradient from an explicit model of the network dynamics.
A Resource Efficient Localized Recurrent Neural Network Architecture and Learning Algorithm
TLDR
This thesis introduces TRTRL, an efficient, low-complexity online learning algorithm for recurrent neural networks based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to weights associated either with its input or output links.
...
...

References

SHOWING 1-10 OF 12 REFERENCES
A Subgrouping Strategy that Reduces Complexity and Speeds Up Learning in Recurrent Networks
  • D. Zipser
  • Computer Science
    Neural Computation
  • 1989
TLDR
A technique is described here for reducing the amount of computation required by RTRL without changing the connectivity of the networks by dividing the original network into subnets for the purpose of error propagation while leaving them undivided for activity propagation.
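One way to see the saving is to count operations under a simplifying assumption about the restriction (sensitivities kept only for weights into a unit's own subnet and propagated only within that subnet); this is a rough estimate in the spirit of the subgrouping idea, not Zipser's exact bookkeeping.

```python
# Illustrative operation count for subgrouped RTRL; the modeled restriction
# (sensitivities kept and propagated only inside a unit's own subnet) is an
# assumption made for this estimate, not Zipser's exact scheme.
n, g = 120, 6                          # units and number of subnets (illustrative)
m = n // g                             # subnet size

full_rtrl_ops = n ** 4                 # ~n^3 sensitivities, each summed over n units
subgrouped_ops = g * m * (m * n) * m   # per subnet: m units x (m*n) weights x sum over m

print(full_rtrl_ops, subgrouped_ops)   # the ratio is roughly g**2
```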
Experimental Analysis of the Real-time Recurrent Learning Algorithm
TLDR
A series of simulation experiments are used to investigate the power and properties of the real-time recurrent learning algorithm, a gradient-following learning algorithm for completely recurrent networks running in continually sampled time.
Learning Complex, Extended Sequences Using the Principle of History Compression
TLDR
A simple principle for reducing the descriptions of event sequences without loss of information is introduced and this insight leads to the construction of neural architectures that learn to divide and conquer by recursively decomposing sequences.
An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories
A novel variant of the familiar backpropagation-through-time approach to training recurrent networks is described. This algorithm is intended to be used on arbitrary recurrent networks that run
A learning algorithm for analog, fully recurrent neural networks
  • M. Gherrity
  • Computer Science
    International 1989 Joint Conference on Neural Networks
  • 1989
A learning algorithm for recurrent neural networks is derived. This algorithm allows a network to learn specified trajectories in state space in response to various input sequences. The network
Learning State Space Trajectories in Recurrent Neural Networks
TLDR
A procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network, which seems particularly suited for temporally continuous domains.
Time Dependent Adaptive Neural Networks
A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks, reveals how complexity is traded off for causality. These algorithms are also related to
Adaptive Decomposition Of Time
TLDR
Design principles for the unsupervised detection of regularities (like causal relationships) in temporal sequences are introduced, along with the principles of the first neural sequence chunker, which collapses a self-organizing multi-level predictor hierarchy into a single recurrent network.
Adaptive decomposition of time
Complexity of exact gradient computation algorithms for recurrent neural networks
  • R. J. Williams
  • Tech. Rep. NU-CCS-89-27, Boston: Northeastern University, College of Computer Science
  • 1989
...
...