A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks
@article{Schmidhuber1989ALL,
  title   = {A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks},
  author  = {J{\"u}rgen Schmidhuber},
  journal = {Connection Science},
  year    = {1989},
  volume  = {1},
  pages   = {403-412}
}
Abstract Most known learning algorithms for dynamic neural networks in non-stationary environments need global computations to perform credit assignment; they are either not local in time or not local in space. Algorithms that are local in both time and space usually cannot deal sensibly with ‘hidden units’. In contrast, as far as we can judge, learning rules in biological systems with many ‘hidden units’ are local in both space and time. In this paper we propose a parallel…
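To make the locality constraint concrete: a rule that is local in both space and time may use, for each weight, only the pre- and post-synaptic activity available at that synapse at the current step, plus at most a per-synapse trace and a broadcast scalar signal. The following is a minimal illustrative sketch of such a rule; since the abstract above is truncated, it is not the paper's actual algorithm, and every name and constant in it is an assumption.

```python
import numpy as np

# Minimal illustrative sketch of a learning rule local in both space and
# time: each synapse W[i, j] is updated from its own pre- and post-synaptic
# activity, a per-synapse eligibility trace, and a scalar reinforcement
# signal broadcast to all units. NOT the paper's algorithm (the abstract
# is truncated); all constants are assumptions.

rng = np.random.default_rng(0)
n, lr, decay = 8, 0.05, 0.9

W = rng.normal(scale=0.1, size=(n, n))   # recurrent weights
elig = np.zeros_like(W)                  # eligibility trace, one per synapse
y = np.zeros(n)                          # unit activations

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for t in range(100):
    x = rng.normal(size=n)               # external input at step t
    y_prev = y
    y = sigmoid(W @ y_prev + x)

    # Local in space: only the activities at each synapse's two ends.
    # Local in time: only the current step plus a decaying trace.
    elig = decay * elig + np.outer(y, y_prev)

    r = float(rng.normal())              # stand-in scalar reinforcement
    W += lr * r * elig
```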
99 Citations
Making the World Differentiable: On Using Self-Supervised Fully Recurrent Neural Networks for Dynamic Reinforcement Learning
- Computer Science
- 1990
A general algorithm is described for a reinforcement learning neural network with internal and external feedback in a non-stationary reactive environment, along with how the algorithm can be augmented by dynamic curiosity and boredom.
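As a rough illustration of "dynamic curiosity and boredom" as intrinsic motivation: curiosity can be modeled as a reward proportional to the world model's prediction error, which fades ("boredom") once a region of the environment is well predicted. The function below is a hedged sketch; the threshold and names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

# Hedged sketch: curiosity as intrinsic reward proportional to the world
# model's prediction error, fading to zero ("boredom") once a region is
# well predicted. Threshold and names are illustrative assumptions.

def curiosity_reward(predicted_obs, actual_obs, boredom_threshold=0.01):
    error = float(np.mean((np.asarray(predicted_obs) - np.asarray(actual_obs)) ** 2))
    return error if error > boredom_threshold else 0.0
```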
Learning to Control Fast-weight Memories: an Alternative to Dynamic Recurrent Networks
- Computer Science
- 1991
This paper describes alternative gradient-based systems consisting of two feed-forward nets which learn to deal with temporal sequences by using fast weights: the first net learns to produce context-dependent weight changes for the second net, whose weights may vary very quickly.
Learning Algorithms for Networks with Internal and External Feedback
- Computer Science
- 1990
This paper gives an overview of some novel algorithms for reinforcement learning in non-stationary, possibly reactive environments, criticizes methods based on system identification and adaptive critics, and describes an adaptive subgoal generator.
Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks
- Computer Science, Neural Computation
- 1992
This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: the first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly.
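The mechanism described in the two entries above can be sketched as follows: a slow net maps the current input to weight changes for a fast net, whose rapidly varying weights act as a short-term memory for the sequence. This is a hedged toy version; the dimensions, nonlinearities, and the decay-plus-delta update are assumptions rather than the exact 1991/1992 formulation.

```python
import numpy as np

# Hedged sketch of the fast-weight idea: a slow feedforward net emits
# *weight changes* for a second ("fast") net whose weights vary from step
# to step and serve as short-term memory. Shapes and the additive
# decay-plus-delta update are illustrative assumptions.

rng = np.random.default_rng(1)
d_in, d_out = 4, 3

W_slow = rng.normal(scale=0.1, size=(d_out * d_in, d_in))  # trained slowly
F = np.zeros((d_out, d_in))                                # fast weights

def step(x, F, decay=0.9):
    dF = np.tanh(W_slow @ x).reshape(d_out, d_in)  # context-dependent change
    F = decay * F + dF                             # fast weights track context
    y = np.tanh(F @ x)                             # fast net: input -> output
    return y, F

for t in range(10):
    x = rng.normal(size=d_in)
    y, F = step(x, F)
```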
Modeling dynamical systems with recurrent neural networks
- Computer Science
- 1994
The phase-space learning method is proposed: a general framework, with the iterated-prediction network as its dominant example, that overcomes the recurrent hidden-unit problem and provides both practical algorithms and a deeper understanding of recurrent neural networks.
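The iterated-prediction setup that phase-space learning generalizes can be sketched briefly: embed a time series in delay coordinates, fit a feedforward map from the embedded state to the next value, then iterate the trained map on its own outputs to generate trajectories. The embedding dimension and the linear least-squares "net" below are illustrative stand-ins for the method in the cited work.

```python
import numpy as np

# Hedged sketch of iterated prediction in a delay-coordinate "phase space".
# A linear least-squares fit stands in for a trained feedforward net.

series = np.sin(0.3 * np.arange(400))          # toy time series
d = 3                                          # delay-embedding dimension

# Training pairs: state (x_t, x_{t+1}, x_{t+2}) -> target x_{t+3}
X = np.stack([series[i:i + d] for i in range(len(series) - d)])
Y = series[d:]

w, *_ = np.linalg.lstsq(X, Y, rcond=None)      # stand-in for a trained net

state = X[0].copy()
preds = []
for _ in range(50):                            # feed outputs back as inputs
    nxt = state @ w
    preds.append(nxt)
    state = np.concatenate([state[1:], [nxt]])
```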
Locally Connected Recurrent
- Computer Science
- 1995
Both tasks show that the RRN needs much less training time than the FRN while achieving comparable performance.
New architectures for very deep learning
- Computer Science
- 2018
This thesis develops new architectures that, for the first time, allow very deep networks to be optimized efficiently and reliably, and addresses two key issues that hamper credit assignment in neural networks: cross-pattern interference and vanishing gradients.
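One widely used remedy for vanishing gradients in very deep stacks is a gated skip connection, in which a learned gate mixes a transformed signal with the unchanged input so that a near-identity path survives through many layers. The sketch below is illustrative only; the layer form, gate bias, and parameters are assumptions, not necessarily the thesis's exact architectures.

```python
import numpy as np

# Hedged sketch of a gated skip connection (highway-style): a gate t in
# (0, 1) mixes a transformed signal with the raw input, so with t near 0
# the layer approximates the identity and signal survives deep stacks.
# All parameters here are illustrative assumptions.

rng = np.random.default_rng(2)
d = 16

def gated_layer(x, W_h, W_t, b_t):
    h = np.tanh(W_h @ x)                        # candidate transformation
    t = 1.0 / (1.0 + np.exp(-(W_t @ x + b_t)))  # transform gate in (0, 1)
    return t * h + (1.0 - t) * x                # gated mix with the input

x = rng.normal(size=d)
for _ in range(50):                             # 50 layers still pass signal
    W_h = rng.normal(scale=0.1, size=(d, d))
    W_t = rng.normal(scale=0.1, size=(d, d))
    x = gated_layer(x, W_h, W_t, b_t=-2.0)      # negative bias favors carry
```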
Dynamic recurrent neural networks: a dynamical analysis
- Computer Science, IEEE Trans. Syst. Man Cybern. Part B
- 1996
The results highlight the improvements in the network dynamics due to the introduction of adaptive time constants and indicate that dynamic recurrent neural networks can bring powerful new features to the field of neural computing.
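A common way to formalize "adaptive time constants" is the leaky-integrator unit, with each time constant trained by gradient descent alongside the weights. The following is an assumed standard form, not necessarily the cited paper's exact equations:

```latex
% Leaky-integrator unit with a trainable time constant \tau_i
% (an assumed standard form; the cited paper's equations may differ).
\tau_i \, \frac{dy_i}{dt} = -y_i + \sigma\!\Big( \sum_j w_{ij}\, y_j + I_i \Big),
\qquad
\Delta \tau_i \propto -\frac{\partial E}{\partial \tau_i}
```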
References
Showing 1-10 of 24 references
A learning algorithm for analog, fully recurrent neural networks
- Computer Science, International 1989 Joint Conference on Neural Networks
- 1989
A learning algorithm for recurrent neural networks is derived. This algorithm allows a network to learn specified trajectories in state space in response to various input sequences. The network…
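A sketch in the spirit of such trajectory-learning algorithms is real-time recurrent learning (RTRL), which carries the sensitivities ∂y_k/∂w_ij forward in time so that an exact gradient of the instantaneous error is available at every step without unrolling. The tanh units and toy target below are illustrative assumptions, not necessarily the cited paper's formulation.

```python
import numpy as np

# Hedged RTRL sketch: sensitivities P[k, i, j] = dy_k / dW[i, j] are
# updated forward in time, giving the exact gradient of the instantaneous
# error at each step. Units, drive, and target are toy assumptions.

rng = np.random.default_rng(3)
n, lr, T = 5, 0.01, 200

W = rng.normal(scale=0.3, size=(n, n))
y = np.zeros(n)
P = np.zeros((n, n, n))                    # P[k, i, j] = dy_k / dW[i, j]

for t in range(T):
    x = np.zeros(n)
    x[0] = np.sin(0.2 * t)                 # external drive
    target = np.zeros(n)
    target[-1] = np.sin(0.2 * t - 1.0)     # toy trajectory for the last unit

    s = W @ y + x
    y_new = np.tanh(s)
    dsig = 1.0 - y_new ** 2                # tanh'(s)

    # Forward sensitivity update: P'[k] = tanh'(s_k) * (W P + delta_{ki} y_j)
    D = np.zeros((n, n, n))
    D[np.arange(n), np.arange(n), :] = y
    P = dsig[:, None, None] * (np.einsum('kl,lij->kij', W, P) + D)

    e = y_new - target                     # instantaneous error
    W -= lr * np.einsum('k,kij->ij', e, P) # exact gradient step at time t
    y = y_new
```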
The Neural Bucket Brigade
- Computer Science
- 1989
It is argued that a learning mechanism for temporal input/output relations ought to depend solely on computations local in both space and time, and that no teacher should be required to indicate the starts and ends of relevant sequences to the network.
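The locality argument suggests credit assignment by purely local exchange: a connection used at the current step pays a fraction of its "weight substance" back to the connections that were used one step earlier to activate its source unit, with payoff entering only where it occurs. The sketch below illustrates that idea only; it is not a faithful reproduction of the cited algorithm, and every constant in it is an assumption.

```python
import numpy as np

# Heavily hedged sketch of a bucket-brigade-style credit flow, local in
# space and time: used connections pay dues backward along the chain of
# activation; payoff enters only at the chain's end. Illustration only,
# NOT a faithful reproduction of the cited algorithm.

rng = np.random.default_rng(4)
n, frac = 6, 0.1

W = np.full((n, n), 0.5)                 # weight substance per connection
y = (rng.random(n) < 0.3).astype(float)  # toy binary activations
A_prev = np.zeros((n, n))                # connections used on previous step

for t in range(100):
    y_new = (rng.random(n) < 0.3).astype(float)
    A = np.outer(y_new, y)               # A[i, j] = 1 if connection j -> i used

    pay = frac * W * A                   # used connections pay their dues
    W -= pay
    recv = pay.sum(axis=0)               # dues arriving at pre-synaptic unit j
    prev_cnt = A_prev.sum(axis=1)
    share = np.where(prev_cnt > 0, recv / np.maximum(prev_cnt, 1.0), 0.0)
    W += A_prev * share[:, None]         # earlier connections into j collect

    W += 0.01 * A * float(y_new[-1])     # toy payoff when the last unit fires
    y, A_prev = y_new, A
```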
Learning State Space Trajectories in Recurrent Neural Networks
- Computer Science, Neural Computation
- 1989
A procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network; the method seems particularly suited for temporally continuous domains.
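For concreteness, one standard adjoint formulation of this gradient, assuming leaky-integrator dynamics and a quadratic error functional (the paper's exact notation and sign conventions may differ):

```latex
% Assumed dynamics and error functional (illustrative, not the paper's notation):
\dot{y}_i = -y_i + \sigma(x_i), \quad x_i = \sum_j w_{ij}\, y_j, \qquad
E = \tfrac{1}{2} \int_0^T \sum_k \big( y_k(t) - d_k(t) \big)^2 \, dt

% Adjoint variables p_i are integrated backward in time from p_i(T) = 0:
\dot{p}_i = p_i - \sum_k w_{ki}\, \sigma'(x_k)\, p_k + \big( y_i - d_i \big)

% The weight gradient is then a time integral of purely local quantities:
\frac{\partial E}{\partial w_{ij}} = -\int_0^T p_i(t)\, \sigma'(x_i(t))\, y_j(t)\, dt
```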
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations
- Computer Science
- 1986
The fundamental principles, basic mechanisms, and formal analyses involved in the development of parallel distributed processing (PDP) systems are presented in individual chapters contributed by…
Static and Dynamic Error Propagation Networks with Application to Speech Coding
- Computer Science, NIPS
- 1987
This paper presents a generalisation of error propagation nets to deal with time-varying, or dynamic, patterns, and three possible architectures are explored.
Self-Organization and Associative Memory
- Computer Science
- 1988
The purpose and nature of biological memory, as well as several of its aspects, are explained.
Some Studies in Machine Learning Using the Game of Checkers
- Computer Science, IBM J. Res. Dev.
- 1959
The studies reported here have been concerned with the programming of a digital computer to behave in a way which would be described as involving the process of learning.