A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks

@article{Schmidhuber1989ALL,
  title={A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks},
  author={J{\"u}rgen Schmidhuber},
  journal={Connection Science},
  year={1989},
  volume={1},
  pages={403-412}
}
Abstract: Most known learning algorithms for dynamic neural networks in non-stationary environments need global computations to perform credit assignment. These algorithms are either not local in time or not local in space. Those algorithms that are local in both time and space usually cannot deal sensibly with ‘hidden units’. In contrast, as far as we can judge, learning rules in biological systems with many ‘hidden units’ are local in both space and time. In this paper we propose a parallel…
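The abstract's distinction can be made concrete with a toy update rule. The sketch below is not the algorithm proposed in this paper; it is only a hedged illustration, in Python, of what "local in both space and time" means: each weight is changed using only the current pre- and postsynaptic activations at its own connection plus a per-connection eligibility trace, with no backpropagation through the network or through time. All names and hyperparameters here are assumptions made for the illustration.

```python
import numpy as np

# Illustration only: a generic Hebbian-style rule with per-connection
# eligibility traces. This is NOT the algorithm proposed in the paper;
# it just shows what a weight update that is local in both space and
# time looks like.

rng = np.random.default_rng(0)

n_in, n_out = 4, 3
w = rng.normal(scale=0.1, size=(n_out, n_in))  # connection weights
trace = np.zeros_like(w)                       # per-connection eligibility trace
lr, decay = 0.01, 0.9                          # assumed hyperparameters


def step(x, modulator):
    """One update step.

    x         -- presynaptic activation vector (length n_in)
    modulator -- a single broadcast scalar (e.g. a reward or error signal);
                 it is the only shared quantity, so no connection-specific
                 global computation is needed.
    """
    global w, trace
    y = np.tanh(w @ x)                         # postsynaptic activations
    # Local in space: np.outer(y, x)[i, j] depends only on y[i] and x[j].
    # Local in time: the trace is updated incrementally, so no activation
    # history has to be stored and revisited.
    trace = decay * trace + np.outer(y, x)
    w += lr * modulator * trace
    return y


# Drive the rule with random inputs and a random scalar signal.
for _ in range(5):
    step(rng.normal(size=n_in), modulator=rng.normal())
```

By contrast, backpropagation through time stores and revisits the entire activation history (not local in time), and full real-time recurrent learning maintains the sensitivity of every unit with respect to every weight (not local in space); these are the global computations the abstract refers to.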


Making the World Differentiable: On Using Self-Supervised Fully Recurrent Neural Networks for Dynamic Reinforcement Learning
TLDR
A general algorithm for a reinforcement-learning neural network with internal and external feedback in a non-stationary reactive environment is described, along with how the algorithm can be augmented by dynamic curiosity and boredom.
Learning to Control Fast-weight Memories: an Alternative to Dynamic Recurrent Networks
TLDR
This paper describes alternative gradient-based systems consisting of two feed-forward nets which learn to deal with temporal sequences by using fast weights: the first net learns to produce context-dependent weight changes for the second net, whose weights may vary very quickly.
Learning Algorithms for Networks with Internal and External Feedback
TLDR
This paper gives an overview of some novel algorithms for reinforcement learning in non-stationary, possibly reactive environments, criticizes methods based on system identification and adaptive critics, and describes an adaptive subgoal generator.
Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks
TLDR
This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: the first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly.
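The two-network scheme described in the summary above can be sketched as follows. This is a hedged toy illustration rather than the system from the cited paper: the layer sizes, the tanh activations, the decay factor, and the form of the weight change are all assumptions made for brevity. A slow feedforward net maps the current context to a change of the fast weights, and the fast-weight net then processes the input with those rapidly varying weights.

```python
import numpy as np

# Toy sketch of the fast-weight idea summarized above. Sizes, activations,
# and the decay of the fast weights are assumptions, not taken from the paper.

rng = np.random.default_rng(1)
d_ctx, d_in, d_out = 5, 4, 3

# Slow net: produces a context-dependent change for the fast weights.
W_slow = rng.normal(scale=0.1, size=(d_out * d_in, d_ctx))

# Fast weights: rewritten a little at every time step by the slow net.
W_fast = np.zeros((d_out, d_in))


def step(context, x, fast_decay=0.95):
    """Slow net emits a weight change; the fast net applies it and runs."""
    global W_fast
    delta = np.tanh(W_slow @ context).reshape(d_out, d_in)
    W_fast = fast_decay * W_fast + delta       # weights may vary very quickly
    return np.tanh(W_fast @ x)                 # fast net's output


for _ in range(3):
    step(rng.normal(size=d_ctx), rng.normal(size=d_in))
```

Training such a pair end to end requires the gradient of the fast net's output with respect to the slow net's weights; deriving that gradient is the subject of the cited paper and is omitted from this sketch.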
Modeling dynamical systems with recurrent neural networks
TLDR
The phase-space learning method is proposed, a general framework with the iterated-prediction network as a dominant example, which overcomes the recurrent hidden-unit problem; the results show that phase-space learning is a useful framework, providing both practical algorithms and a deeper understanding of recurrent neural networks.
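The iterated-prediction idea named in the summary above can be illustrated with a small sketch. It is only an assumed toy setup, not the method from the cited paper: a scalar time series is embedded in delay coordinates (a crude "phase space"), a predictor is fit to map each embedded state to the next value, and the trained predictor is then iterated on its own outputs to generate a trajectory. A plain least-squares predictor stands in for the network.

```python
import numpy as np

# Toy sketch of iterated prediction in delay coordinates. The embedding
# dimension, the sine-wave data, and the linear predictor are assumptions
# made for brevity; the cited work trains neural network predictors.

rng = np.random.default_rng(2)
series = np.sin(0.1 * np.arange(400)) + 0.01 * rng.normal(size=400)

d = 3  # delay-embedding dimension: state = (x_t, x_{t+1}, x_{t+2})
X = np.array([series[i:i + d] for i in range(len(series) - d)])  # embedded states
y = series[d:]                                                   # next values

# Fit a one-step predictor (here simply linear least squares).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Iterate the predictor on its own outputs for a free-running trajectory.
state = series[-d:].copy()
preds = []
for _ in range(50):
    nxt = state @ w
    preds.append(nxt)
    state = np.append(state[1:], nxt)
```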
Locally Connected Recurrent
TLDR
Both tasks show that RRN needs a much shorter training time and the performance of RRN is comparable to that of FRN.
New architectures for very deep learning
TLDR
This thesis develops new architectures that, for the first time, allow very deep networks to be optimized efficiently and reliably, and addresses two key issues that hamper credit assignment in neural networks: cross-pattern interference and vanishing gradients.
Dynamic recurrent neural networks: a dynamical analysis
TLDR
The results highlight the improvements in the network dynamics due to the introduction of adaptive time constants and indicate that dynamic recurrent neural networks can bring powerful new features to the field of neural computing.
Networks adjusting networks
...

References

Showing 1-10 of 24 references
A learning algorithm for analog, fully recurrent neural networks
  • M. Gherrity
  • Computer Science
    International 1989 Joint Conference on Neural Networks
  • 1989
A learning algorithm for recurrent neural networks is derived. This algorithm allows a network to learn specified trajectories in state space in response to various input sequences. The network…
The Neural Bucket Brigade
TLDR
It is argued that a learning mechanism for temporal input/output relations ought to depend solely on computations local in both space and time, and that no teacher should be required to indicate the starts and ends of relevant sequences to the network.
Learning State Space Trajectories in Recurrent Neural Networks
TLDR
A procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network, which seems particularly suited for temporally continuous domains.
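To fix notation for the summary above, a standard choice of such an error functional (shown only as an assumed example; the cited paper may use a more general form) measures the squared deviation of selected state trajectories y_k(t) from desired trajectories d_k(t):

```latex
E \;=\; \frac{1}{2}\int_{t_0}^{t_1} \sum_{k} \bigl(y_k(t) - d_k(t)\bigr)^2 \, dt
```

The gradients ∂E/∂w_ij of such a functional are then obtained by integrating an adjoint system backwards over the interval [t_0, t_1], which is why the approach suits temporally continuous domains.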
Feature discovery by competitive learning
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations
The fundamental principles, basic mechanisms, and formal analyses involved in the development of parallel distributed processing (PDP) systems are presented in individual chapters contributed by…
Static and Dynamic Error Propagation Networks with Application to Speech Coding
TLDR
This paper presents a generalisation of error propagation nets to deal with time-varying, or dynamic, patterns, and three possible architectures are explored.
Self-Organization and Associative Memory
TLDR
The purpose and nature of biological memory, as well as various aspects of memory, are explained.
Some Studies in Machine Learning Using the Game of Checkers
  • A. Samuel
  • Computer Science
    IBM J. Res. Dev.
  • 1959
TLDR
The studies reported here have been concerned with the programming of a digital computer to behave in a way which would be described as involving the process of learning.
Recurrent networks adjusted by adaptive critics
...