Learning in the Recurrent Random Neural Network

@article{Gelenbe1993LearningIT,
  title={Learning in the Recurrent Random Neural Network},
  author={E. Gelenbe},
  journal={Neural Computation},
  year={1993},
  volume={5},
  pages={154-164}
}
  • E. Gelenbe
  • Published 1993
  • Computer Science
  • Neural Computation
The capacity to learn from examples is one of the most desirable features of neural network models. We present a learning algorithm for the recurrent random network model (Gelenbe 1989, 1990) using gradient descent of a quadratic error function. The analytical properties of the model lead to a "backpropagation" type algorithm that requires the solution of a system of n linear and n nonlinear equations each time the n-neuron network "learns" a new input-output pair. 
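As a rough illustration of the mechanics the abstract describes, here is a minimal sketch in Python/NumPy. It solves the n nonlinear signal-flow equations of the random neural network by fixed-point iteration and then takes a gradient-descent step on the quadratic error for a single input-output pair. The network size, rates, function names and the finite-difference gradient are illustrative assumptions, not the paper's construction; Gelenbe (1993) obtains the gradient analytically, at the cost of solving an additional system of n linear equations for each training pair.

import numpy as np

def steady_state(W_plus, W_minus, Lam, lam, iters=200):
    # Fixed-point iteration for the n nonlinear signal-flow equations:
    #   q[i] = (Lam[i] + sum_j q[j]*W_plus[j,i]) /
    #          (r[i] + lam[i] + sum_j q[j]*W_minus[j,i]),
    # with firing rates r[i] = sum_j (W_plus[i,j] + W_minus[i,j]).
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)
    q = np.zeros(len(Lam))
    for _ in range(iters):
        q = np.clip((Lam + q @ W_plus) / (r + lam + q @ W_minus), 0.0, 1.0)
    return q

def train_pair(W_plus, W_minus, Lam, lam, target, eta=0.05, eps=1e-5):
    # One gradient-descent step on E = 0.5 * sum_i (q_i - y_i)^2 for a single
    # input-output pair.  A finite-difference gradient stands in here for the
    # paper's analytical gradient, which solves n linear equations instead.
    def error():
        q = steady_state(W_plus, W_minus, Lam, lam)
        return 0.5 * np.sum((q - target) ** 2)

    base = error()
    grads = []
    for W in (W_plus, W_minus):
        g = np.zeros_like(W)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                W[i, j] += eps
                g[i, j] = (error() - base) / eps
                W[i, j] -= eps
        grads.append(g)
    W_plus -= eta * grads[0]                  # excitatory weight rates
    W_minus -= eta * grads[1]                 # inhibitory weight rates
    np.clip(W_plus, 0.0, None, out=W_plus)    # rates must stay nonnegative
    np.clip(W_minus, 0.0, None, out=W_minus)
    return error()

# Hypothetical usage: a 4-neuron network repeatedly shown one input-output pair.
rng = np.random.default_rng(0)
n = 4
W_plus = rng.uniform(0.1, 0.5, (n, n))
W_minus = rng.uniform(0.1, 0.5, (n, n))
Lam = rng.uniform(0.2, 1.0, n)     # external excitatory arrival rates
lam = rng.uniform(0.0, 0.2, n)     # external inhibitory arrival rates
target = np.array([0.2, 0.8, 0.5, 0.3])
for step in range(50):
    err = train_pair(W_plus, W_minus, Lam, lam, target)
print("final quadratic error:", err)

Note that the finite-difference loop above performs on the order of n^2 steady-state solves per training pair, which is precisely the overhead the paper's analytical gradient avoids.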
Citations

Contrastive Learning in Random Neural Networks and its Relation to Gradient-Descent Learning
TLDR
This work applies Contrastive Hebbian Learning to the recurrent Random Neural Network model and shows that the resulting weight changes are a first order approximation to the gradient-descent algorithm for quadratic error minimization when overall firing rates are constant.
The Random Neural Network: A Survey
TLDR
A review of the theory, extension models, learning algorithms and applications of the RNN, which has been applied in a variety of areas including pattern recognition, classification, image processing, combinatorial optimization and communication systems.
The Multilayer Random Neural Network
TLDR
An extended model of the random neural network, whose architecture is multi-feedback, is proposed, and its use is tested in an encryption mechanism where each layer is responsible for a part of the encryption or decryption process.
STRUCTURE OPTIMIZATION OF THE RECURRENT RANDOM NEURAL NETWORK
The RNN is a recurrent fully connected neural network model inspired by the spiking behaviour of biophysical neurons. Various learning algorithms (including gradient, reinforcement and associative) …
A more powerful random neural network model in supervised learning applications
  • S. Basterrech, G. Rubino
  • Computer Science
  • 2013 International Conference on Soft Computing and Pattern Recognition (SoCPaR)
  • 2013
TLDR
A modification of the classic model obtained by extending the set of adjustable parameters is presented, which increases the potential of the RNN model in supervised learning tasks while keeping the same network topology and the same time complexity of the algorithm.
Recognition algorithm using evolutionary learning on the random neural networks
TLDR
The evolutionary learning is based on a hybrid algorithm that trains the random neural network by integrating a genetic algorithm with the gradient-descent rule-based learning algorithm of the random neural network.
Learning in the multiple class random neural network
TLDR
This paper introduces a learning algorithm which applies both to recurrent and feedforward multiple signal class random neural networks (MCRNNs) based on gradient descent optimization of a cost function, and applies it to color texture modeling (learning), based on learning the weights of a recurrent network directly from the color texture image.
Learning in Genetic Algorithms
TLDR
This paper introduces a mathematical framework concerning the manner in which genetic algorithms can learn, and shows that gradient descent can be used in this framework as well.
Function approximation with spiked random networks
TLDR
A feedforward Bipolar GNN (BGNN) model which has both "positive and negative neurons" in the output layer is considered, and it is proved that the BGNN is a universal function approximator.
Function Approximation by Random Neural Networks with a Bounded Number of Layers
TLDR
This paper uses two extensions of the Gelenbe random neural network (GNN) to show that the feedforward CGNN and BGNN with s hidden layers (s + 2 layers in total) can uniformly approximate continuous functions of s variables.

References

Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation
  • F. Pineda
  • Computer Science
  • Neural Computation
  • 1989
TLDR
It is now possible to efficiently compute the error gradients for networks that have temporal dynamics, which opens applications to a host of problems in systems identification and control.
Learning State Space Trajectories in Recurrent Neural Networks
TLDR
A procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network; the method seems particularly suited for temporally continuous domains.
Generalization of Back propagation to Recurrent and Higher Order Neural Networks
TLDR
A general method for deriving backpropagation algorithms for recurrent and higher-order networks, applied to a constrained dynamical system for training a content-addressable memory.
Stability of the Random Neural Network Model
  • E. Gelenbe
  • Mathematics, Computer Science
  • Neural Computation
  • 1990
TLDR
It is shown that whenever the solution to the signal flow equations of the Random Network exists, it is unique and therefore that the network has a well-defined steady-state behavior.
RECURRENT AND FEEDFORWARD BACKPROPAGATION: PERFORMANCE STUDIES
Based on a unified description of neural algorithms for time-independent pattern recognition, we discuss the generalization abilities of 3-layer perceptrons for recurrent and feedforward …
A Learning Algorithm for Boltzmann Machines
TLDR
A general parallel search method is described, based on statistical mechanics, and it is shown how it leads to a general learning rule for modifying the connection strengths so as to incorporate knowledge about a task domain in an efficient way.
Random Neural Networks with Negative and Positive Signals and Product Form Solution
  • E. Gelenbe
  • Mathematics, Computer Science
  • Neural Computation
  • 1989
TLDR
A new class of random neural networks in which signals are either negative or positive is introduced; this model, with exponential signal emission intervals, Poisson external signal arrivals, and Markovian signal movements between neurons, has a product form leading to simple analytical expressions for the system state (the standard statement of this result is sketched after this reference list).
Neural net algorithms that learn in polynomial time from examples and queries
  • E. Baum
  • Computer Science, Medicine
  • IEEE Trans. Neural Networks
  • 1991
TLDR
The author's algorithm is proved to PAC learn in polynomial time the class of target functions defined by layered, depth two, threshold nets having n inputs connected to k hidden threshold units connected to one or more output units, provided k ≤ 4.
Learning internal representations by error propagation
This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion
Minimum cost graph covering with the random neural network
  • 1992
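For context on the product-form result mentioned in the Gelenbe (1989) entry above, the standard statement of that result can be sketched as follows (the notation is generic rather than copied from any one paper). The steady-state distribution of the network state k = (k_1, ..., k_n), where k_i is the number of signals queued at neuron i, factorizes over neurons as

\[
  P(k) \;=\; \prod_{i=1}^{n} (1 - q_i)\, q_i^{\,k_i},
  \qquad
  q_i \;=\; \frac{\Lambda(i) + \sum_{j} q_j\, w^{+}(j,i)}
                 {r(i) + \lambda(i) + \sum_{j} q_j\, w^{-}(j,i)},
\]

provided each q_i < 1. Here Λ(i) and λ(i) are the external excitatory and inhibitory signal arrival rates, w^+(j,i) and w^-(j,i) the excitatory and inhibitory weight rates, and r(i) the firing rate of neuron i; existence and uniqueness of the solution under these stability conditions is the subject of the Gelenbe (1990) reference above.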