# Bifurcations in the learning of recurrent neural networks

@article{Doya1992BifurcationsIT, title={Bifurcations in the learning of recurrent neural networks}, author={Kenji Doya}, journal={[Proceedings] 1992 IEEE International Symposium on Circuits and Systems}, year={1992}, volume={6}, pages={2777-2780 vol.6} }

Gradient descent algorithms in recurrent neural networks can run into problems when the network dynamics undergo bifurcations in the course of learning. The possible hazards caused by bifurcations of the network dynamics and of the learning equations are investigated. The roles of teacher forcing, preprogramming of network structures, and approximate learning algorithms are discussed.
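The abstract mentions teacher forcing as one remedy: during training, the network's recurrent feedback is replaced by the target signal, keeping the state near the desired trajectory. A minimal sketch (a toy tanh network, not Doya's exact formulation; all names and the direct state readout are illustrative assumptions):

```python
import numpy as np

def rnn_step(h, x, W, U, b):
    """One step of a simple discrete-time tanh recurrent unit."""
    return np.tanh(W @ h + U @ x + b)

def run_rnn(targets, W, U, b, teacher_forcing=True):
    """Roll the RNN over a target sequence.

    With teacher_forcing=True, the previous *target* is fed back as the
    input at each step; otherwise the network's own previous output is
    fed back (free-running mode).
    """
    n = W.shape[0]
    h = np.zeros(n)
    outputs = []
    prev = targets[0]
    for t in range(1, len(targets)):
        h = rnn_step(h, prev, W, U, b)
        y = h  # toy model: read the state out directly
        outputs.append(y)
        prev = targets[t] if teacher_forcing else y
    return np.array(outputs)

rng = np.random.default_rng(0)
n = 2
W = 0.1 * rng.standard_normal((n, n))
U = 0.1 * rng.standard_normal((n, n))
b = np.zeros(n)
steps = np.linspace(0.0, 2.0 * np.pi, 20)
targets = np.stack([np.sin(steps), np.cos(steps)], axis=1)
forced = run_rnn(targets, W, U, b, teacher_forcing=True)
free = run_rnn(targets, W, U, b, teacher_forcing=False)
```

In forced mode the input trajectory is pinned to the teacher signal, so gradients are computed along the desired orbit; in free-running mode, errors compound and a small weight change can push the dynamics across a bifurcation, the hazard the paper analyzes.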

## 153 Citations

### Bifurcations of Recurrent Neural Networks in Gradient Descent Learning

- Computer Science
- 1993

Some of the factors underlying successful training of recurrent networks are investigated, such as choice of initial connections, choice of input patterns, teacher forcing, and truncated learning equations.

### Exploring the nonlinear dynamic behavior of artificial neural networks

- Computer Science, Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)
- 1994

This paper explores the universal approximation capability exhibited by neural networks in the development of suitable architectures and associated training processes for nonlinear discrete-time dynamic system representation and investigates the dynamic behavior of a recurrent processing unit.

### Adjoint Dynamics of Stable Limit Cycle Neural Networks

- Computer Science, 2019 53rd Asilomar Conference on Signals, Systems, and Computers
- 2019

Using a continuous time dynamical system interpretation of neural networks and backpropagation, it is shown that stable limit cycle neural networks have non-exploding gradients, and at least one effective nonvanishing gradient dimension.

### Modeling dynamical systems with recurrent neural networks

- Computer Science
- 1994

The phase-space learning method is proposed, a general framework with the iterated-prediction network as its dominant example, that overcomes the recurrent hidden unit problem; phase-space learning is shown to be a useful framework, providing both practical algorithms and a deeper understanding of recurrent neural networks.

### Reservoir Computing with Output Feedback

- Engineering, KI - Künstliche Intelligenz
- 2012

This thesis presents a dynamical system approach to learning forward and inverse models in associative recurrent neural networks that enable robust and efficient training of multi-stable dynamics with application to movement control in robotics.

### Dimension Reduction of Biological Neuron Models by Artificial Neural Networks

- Computer Science, Neural Computation
- 1994

An artificial neural network approach to dimension reduction of dynamical systems is proposed and applied to conductance-based neuron models, revealing the bifurcations of the dynamical system underlying firing and bursting behaviors.

### Universality of Fully-Connected Recurrent Neural Networks

- Computer Science
- 1993

It is shown from the universality of multi-layer neural networks that any discrete-time or continuous-time dynamical system can be approximated by discrete-time or continuous-time recurrent neural networks.

### Recurrent neural networks for temporal learning of time series

- Computer Science, IEEE International Conference on Neural Networks
- 1993

The learning and performance behaviors of recurrent 3-layer perceptrons for time-dependent input and output data are studied and the Ring Array Processor is used to cope with the increased learning time.

## References

Showing 1-10 of 19 references

### Some experiments on learning stable network oscillations

- Computer Science, 1990 IJCNN International Joint Conference on Neural Networks
- 1990

It is shown that standard sigmoidal unit networks can learn stable, collective oscillations involving tens of units; a biological network oscillator is also modeled, showing that recurrent networks can help gain useful insights into the biological system.

### Memorizing oscillatory patterns in the analog neuron network

- Computer Science, International 1989 Joint Conference on Neural Networks
- 1989

By combining adaptive neural oscillator (ANO) learning with the scheme of the associative memory network, multiple oscillatory waveforms can be stored in one neural network and can be selectively regenerated with the initial state of the network.

### APOLONN brings us to the real world: learning nonlinear dynamics and fluctuations in nature

- Computer Science, 1990 IJCNN International Joint Conference on Neural Networks
- 1990

The authors trained APOLONN (adaptive nonlinear pair oscillators with local connections) to learn voice source waveforms, including fluctuations in amplitude and periodicity, and to generate waveforms with such fluctuations.

### Experimental Analysis of the Real-time Recurrent Learning Algorithm

- Computer Science
- 1989

A series of simulation experiments are used to investigate the power and properties of the real-time recurrent learning algorithm, a gradient-following learning algorithm for completely recurrent networks running in continually sampled time.

### Dynamics and architecture for neural computation

- Computer Science, J. Complex.
- 1988

### Recurrent Network Model of the Neural Mechanism of Short-Term Active Memory

- Biology, Psychology, Neural Computation
- 1991

The learning-based model described here demonstrates that a mechanism using only the dynamic activity in recurrent networks is sufficient to account for the observed phenomena.

### Learning representations by back-propagating errors

- Computer Science, Nature
- 1986

Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.

### Finite State Automata and Simple Recurrent Networks

- Computer Science, Neural Computation
- 1989

A network architecture introduced by Elman (1988) for predicting successive elements of a sequence is studied, and it is shown that long-distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information.

### A Dynamic Neural Network Model of Sensorimotor Transformations in the Leech

- Biology, Neural Computation
- 1990

A model of the local bending reflex in leech ganglia was constructed using physiological and anatomical constraints, and the properties of the hidden units that emerged in the simulations matched those of interneurons in the leech.

### Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks

- Computer Science, Neural Computation
- 1992

It is shown that a recurrent, second-order neural network using a real-time, forward training algorithm readily learns to infer small regular grammars from positive and negative string training samples, and many of the neural net state machines are dynamically stable, that is, they correctly classify many long unseen strings.