A procedure for training recurrent networks

@article{Phan2013APF,
  title={A procedure for training recurrent networks},
  author={Manh Cong Phan and Mark H. Beale and Martin T. Hagan},
  journal={The 2013 International Joint Conference on Neural Networks (IJCNN)},
  year={2013},
  pages={1-8}
}
  • M. C. Phan, M. Beale, M. Hagan
  • Published 1 August 2013
  • Computer Science
  • The 2013 International Joint Conference on Neural Networks (IJCNN)
In this paper, we introduce a new procedure for the efficient training of recurrent neural networks. The procedure uses a batch training method based on a modified version of the Levenberg-Marquardt algorithm. Gradient information from individual sequences is used to mitigate the effect of spurious valleys in the error surface of recurrent networks. The method is tested on the modeling and control of several physical systems.
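A minimal NumPy sketch of the kind of batch Levenberg-Marquardt step the abstract describes; the function names, the stacking of per-sequence Jacobians, and the gradient-conflict check are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def lm_batch_step(jacobians, errors, weights, mu):
    # jacobians: per-sequence Jacobians J_s of the errors w.r.t. the weights
    # errors:    per-sequence error vectors e_s
    J = np.vstack(jacobians)          # stack sequences into one batch system
    e = np.concatenate(errors)
    # Levenberg-Marquardt update: dw = -(J'J + mu*I)^-1 J'e
    H = J.T @ J + mu * np.eye(weights.size)
    return weights - np.linalg.solve(H, J.T @ e)

def sequences_conflict(jacobians, errors, threshold=0.0):
    # Hypothetical use of individual-sequence gradient information: if the
    # per-sequence gradients J_s' e_s point in strongly conflicting
    # directions, the step may be entering a spurious valley, and the
    # damping parameter mu can be raised before the update is applied.
    grads = [Js.T @ es for Js, es in zip(jacobians, errors)]
    mean_g = np.mean(grads, axis=0)
    cos = [g @ mean_g / (np.linalg.norm(g) * np.linalg.norm(mean_g) + 1e-12)
           for g in grads]
    return min(cos) < threshold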
Enhanced recurrent network training
  • A. H. Jafari, M. Hagan
  • Computer Science
    2015 International Joint Conference on Neural Networks (IJCNN)
  • 2015
TLDR
New, more efficient methods for training recurrent neural networks (RNNs), based on a new understanding of the error surfaces of RNNs, which increase the prediction horizons in a principled way that lets the search algorithms avoid the spurious valleys (one possible reading of this idea is sketched below).
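One reading of "increasing the prediction horizons in a principled way" is to train on progressively longer sub-sequences, so that early optimization sees short horizons, where spurious valleys are fewer, before the full sequence is used. The schedule below is a hypothetical illustration of that idea, not the authors' exact rule.

def horizon_schedule(sequence, start=5, factor=2):
    # Yield progressively longer prefixes of a training sequence,
    # growing the horizon until the full sequence is reached.
    k = start
    while k < len(sequence):
        yield sequence[:k]
        k *= factor
    yield sequence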
Forecasting the contact area of a vehicle's tires via recurrent neural networks
In this paper, a predictive soft sensor via a recurrent neural network is proposed for sensing the contact area between the tires and the terrain, which is one of the most important issues in the automotive industry.
Actuator failure‐tolerant control of an all‐thruster satellite in coupled translational and rotational motion using neural networks
The nonlinear model predictive control (MPC) approach is used to control the coupled translational‐rotational motion of an all‐thruster spacecraft when one of the actuators fails. In order to model …
Wind Estimation Using Quadcopter Motion: A Machine Learning Approach

References

SHOWING 1-10 OF 12 REFERENCES
Error Surface of Recurrent Neural Networks
  • M. C. Phan, M. Hagan
  • Computer Science
    IEEE Transactions on Neural Networks and Learning Systems
  • 2013
TLDR
Two types of spurious valleys that appear in the error surfaces of recurrent networks are described; these valleys are not affected by the desired network output or by the problem the network is trying to solve.
Backpropagation Algorithms for a Broad Class of Dynamic Networks
TLDR
It is demonstrated that the BPTT algorithm is more efficient for gradient calculations, but the RTRL algorithm is more efficient for Jacobian calculations.
New results on recurrent network training: unifying the algorithms and accelerating convergence
TLDR
An on-line version of the proposed algorithm, based on approximating the error gradient, has lower computational complexity in computing the weight update than the competing techniques for most typical problems and reaches the error minimum in far fewer iterations.
Training feedforward networks with the Marquardt algorithm
TLDR
The Marquardt algorithm for nonlinear least squares is presented and incorporated into the backpropagation algorithm for training feedforward neural networks; it is found to be much more efficient than either of the other techniques tested when the network contains no more than a few hundred weights.
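For context, the standard Marquardt (Levenberg-Marquardt) weight update this reference uses is

\[
\Delta \mathbf{w} = -\left(\mathbf{J}^{\mathsf{T}}\mathbf{J} + \mu \mathbf{I}\right)^{-1} \mathbf{J}^{\mathsf{T}} \mathbf{e},
\]

where \(\mathbf{J}\) is the Jacobian of the network errors \(\mathbf{e}\) with respect to the weights and \(\mu\) is lowered after a successful step and raised after a failed one, so the method interpolates between Gauss-Newton (small \(\mu\)) and gradient descent (large \(\mu\)).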
Recent advances in efficient learning of recurrent networks
TLDR
This tutorial gives an overview of these recent developments in efficient, biologically plausible recurrent information processing.
Spurious Valleys in the Error Surface of Recurrent Networks—Analysis and Avoidance
TLDR
It is shown that these error surfaces contain many spurious valleys, and it is demonstrated that the principal mechanism can be understood through the analysis of the roots of random polynomials.
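The random-polynomial connection can be probed numerically. This short snippet (an illustrative aside, not from the paper) samples random polynomials and confirms the known fact that their roots cluster near the unit circle, which is where the output-cancelling feedback weights, and hence the valleys, concentrate.

import numpy as np

rng = np.random.default_rng(0)
radii = np.abs(np.concatenate(
    [np.roots(rng.standard_normal(21)) for _ in range(200)]))
# Roots of random polynomials concentrate near |z| = 1.
print(f"median |root| = {np.median(radii):.3f}")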
Identification and control of dynamical systems using neural networks
TLDR
It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems, and that the models introduced are practically feasible.
Neural network design
TLDR
This book, by the authors of the Neural Network Toolbox for MATLAB, provides clear and detailed coverage of fundamental neural network architectures and learning rules, as well as methods for training them and their applications to practical problems.
Learning Recurrent Neural Networks with Hessian-Free Optimization
TLDR
This work solves the long-outstanding problem of how to effectively train recurrent neural networks on complex and difficult sequence modeling problems that may contain long-term data dependencies, and offers a new interpretation of the generalized Gauss-Newton matrix of Schraudolph that is used within the HF approach of Martens.
An introduction to the use of neural networks in control systems
TLDR
The multilayer perceptron neural network is introduced, its use for function approximation is described, and several techniques for improving generalization are discussed.