New Learning Automata Based Algorithms for Adaptation of Backpropagation Algorithm Parameters

@article{Meybodi2002NewLA,
  title={New Learning Automata Based Algorithms for Adaptation of Backpropagation Algorithm Parameters},
  author={Mohammad Reza Meybodi and H. Beigy},
  journal={International Journal of Neural Systems},
  year={2002},
  volume={12},
  number={1},
  pages={45--67}
}
  • M. Meybodi, H. Beigy
  • Published 1 February 2002
  • Computer Science
  • International journal of neural systems
One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which includes the parameters learning rate (eta), momentum factor (alpha), and steepness parameter (lambda). The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase the speed of convergence. In this paper, we shall present several classes of learning automata based… 
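The three parameters named in the abstract enter the standard BP update in fixed places: eta scales the gradient step, alpha weights the previous step, and lambda sets the slope of the sigmoid activation. A minimal sketch of those roles (this is the generic textbook update, not the authors' adaptive scheme; all names below are illustrative):

```python
import numpy as np

def sigmoid(x, lam=1.0):
    # Steepness parameter lambda scales the slope of the activation.
    return 1.0 / (1.0 + np.exp(-lam * x))

def update_weights(w, grad, prev_delta, eta=0.1, alpha=0.9):
    # Gradient-descent step with momentum:
    #   delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1)
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta
```

Adaptive variants, including the learning-automata approach of this paper, adjust eta (and sometimes alpha and lambda) during training rather than fixing them in advance.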
A learning automata-based algorithm for determination of the number of hidden units for three-layer neural networks
This article presents an algorithm based on the proposed learning automaton, called the survival algorithm, for determining the number of hidden units of three-layer neural networks; it has been tested on a number of problems, and simulations show that the generated networks are near optimal.
Learning Automata Based Competition Scheme to Train Deep Neural Networks
A new basic element to form deep neural networks, called learning automata competition unit (LCU), which can facilitate competition in a group of neural units and gradually select the better trained neural units during training.
Modeling Ant Colony Algorithms Using Learning Automata
This paper shows that ant colony algorithms can be modeled by a group of cooperating learning automata; using a set of cooperating learning automata, an algorithm for solving the routing problem in computer networks is then proposed.
Open Synchronous Cellular Learning Automata
It is shown that, for a class of rules called commutative rules, the open cellular learning automata in stationary external environments converge to a stable and compatible configuration; an application of this new model to image segmentation is also presented.
Distributed learning automata-based scheme for classification using novel pursuit scheme
A novel pursuit LA is developed which can be seen as the counterpart of the family of pursuit LA developed for stochastic environments, and is able to perfectly separate both simple and complex patterns outperforming existing classifiers, without the need of any “kernel trick”.
A Mathematical Framework for Cellular Learning Automata
This paper first provides a mathematical framework for cellular learning automata and then studies its convergence behavior, showing that for a class of rules, called commutative rules, the cellular learning automata converge to a stable and compatible configuration.
Recent advances in Learning Automata systems
  • B. Oommen
  • Computer Science
    2010 2nd International Conference on Computer Engineering and Technology
  • 2010
An overview of the field of Stochastic Learning Automata is presented, and it is explained how LA can be designed by discretizing the probability space, and the design and analysis of both continuous and discretized LA are described.
Open Synchronous Cellular Learning Automata
This paper introduces open cellular learning automata and then studies their steady-state behavior, showing that for a class of rules called commutative rules, the open cellular learning automata in stationary external environments converge to a stable and compatible configuration.
A learning automata-based adaptive uniform fractional guard channel algorithm
The proposed algorithm uses a learning automaton to specify the acceptance/rejection of incoming new calls and it is shown that the given adaptive algorithm converges to an equilibrium point which is optimal for uniform fractional channel policy.

References

SHOWING 1-10 OF 94 REFERENCES
An accelerated learning algorithm for multilayer perceptron networks
An accelerated learning algorithm (ABP, adaptive back-propagation) is proposed for the supervised training of multilayer perceptron networks; it shows superior convergence speed on analog problems only, compared to other competing methods, as well as reduced sensitivity to variations in the algorithm's step-size parameter.
Methods to speed up error back-propagation learning algorithm
Modifications to the EBP algorithm in which the gradients are rescaled at every layer improved performance, and using the expected output of a neuron, instead of its actual output, to correct the weights improved the performance of the momentum strategy.
Optimum learning rate for backpropagation neural networks
An optimum, time-varying learning rate for multilayer BP networks is analytically derived and results show that training time can be reduced significantly while not causing any oscillations during the training process.
On the Problem of Local Minima in Backpropagation
  • M. Gori, A. Tesi
  • Computer Science
    IEEE Trans. Pattern Anal. Mach. Intell.
  • 1992
A theoretical framework for backpropagation (BP) is proposed and it is proven in particular that the convergence holds if the classes are linearly separable and that multilayered neural networks (MLNs) exceed perceptrons in generalization to new examples.
Pattern-recognizing stochastic learning automata
  • A. Barto, P. Anandan
  • Computer Science
    IEEE Transactions on Systems, Man, and Cybernetics
  • 1985
A class of learning tasks is described that combines aspects of learning automaton tasks and supervised learning pattern-classification tasks; these tasks are called associative reinforcement learning tasks.
Accelerating the convergence of the back-propagation method
Considering the selection of weights in neural nets as a problem in classical nonlinear optimization theory, the rationale for algorithms seeking only those weights that produce the globally minimum error is reviewed and rejected.
A direct adaptive method for faster backpropagation learning: the RPROP algorithm
A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed that performs a local adaptation of the weight-updates according to the behavior of the error function to overcome the inherent disadvantages of pure gradient-descent.
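The summary above describes RPROP's core idea: each weight gets its own step size, grown when successive gradient signs agree and shrunk when they flip, with only the gradient's sign setting the update direction. A simplified sketch of that rule (a variant without weight-backtracking; the hyperparameter values are the common defaults from the literature, not taken from this page):

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    # Grow the per-weight step when the gradient kept its sign,
    # shrink it when the sign flipped (the error surface was overshot).
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Update direction depends only on the gradient's sign, not its magnitude.
    delta = -np.sign(grad) * step
    return delta, step
```

Because the magnitude of the gradient is ignored, this local adaptation avoids the vanishing-step problem of pure gradient descent that the entry alludes to.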
Increased rates of convergence through learning rate adaptation
  • R. Jacobs
  • Computer Science
    Neural Networks
  • 1988
Learning Automata: Theory and Applications
The connection between two-level adaptive control and the bilinear programming problem is studied, along with a two-level hierarchical system of learning automata using a projectional stochastic approximation algorithm.