New Learning Automata Based Algorithms for Adaptation of Backpropagation Algorithm Parameters
@article{Meybodi2002NewLA,
title={New Learning Automata Based Algorithms for Adaptation of Backpropagation Algorithm Parameters},
author={Mohammad Reza Meybodi and H. Beigy},
journal={International Journal of Neural Systems},
year={2002},
volume={12},
number={1},
pages={45--67}
}

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which includes the parameters learning rate (eta), momentum factor (alpha), and steepness parameter (lambda). The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase the speed of convergence. In this paper, we shall present several classes of learning automata based…
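The three parameters named in the abstract each play a distinct role in standard backpropagation. As a minimal illustrative sketch (not the paper's adaptive algorithm; the function names and default values here are assumptions), the steepness parameter scales the sigmoid activation, while the learning rate and momentum factor govern the weight update:

```python
import numpy as np

def sigmoid(x, lam=1.0):
    # Steepness parameter (lambda) scales the slope of the activation.
    return 1.0 / (1.0 + np.exp(-lam * x))

def bp_update(w, grad, prev_delta, eta=0.1, alpha=0.9):
    """One gradient-descent-with-momentum step on a weight array.

    eta   -- learning rate: scales the step along the negative gradient
    alpha -- momentum factor: fraction of the previous update retained
    """
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta
```

Fixed values of eta, alpha, and lambda trade off speed against stability, which is why adaptive schemes such as those surveyed and proposed in this paper adjust them during training.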
72 Citations
A learning automata-based algorithm for determination of the number of hidden units for three-layer neural networks
- Computer Science · Int. J. Syst. Sci.
- 2009
This article presents an algorithm based on the proposed learning automaton, called the survival algorithm, for determining the number of hidden units of three-layer neural networks; simulations on a number of problems show that the generated networks are near optimal.
Learning Automata Based Competition Scheme to Train Deep Neural Networks
- Computer Science · IEEE Transactions on Emerging Topics in Computational Intelligence
- 2020
A new basic element to form deep neural networks, called learning automata competition unit (LCU), which can facilitate competition in a group of neural units and gradually select the better trained neural units during training.
A note on learning automata-based schemes for adaptation of BP parameters
- Computer Science · Neurocomputing
- 2002
Modeling Ant Colony Algorithms Using Learning Automata
- Computer Science
- 2007
This paper shows that ant colony algorithms can be modeled by a group of cooperating learning automata; using such a set of cooperating learning automata, an algorithm for solving the routing problem in computer networks is then proposed.
Open Synchronous Cellular Learning Automata
- Computer Science, Biology · Adv. Complex Syst.
- 2007
It is shown that for a class of rules called commutative rules, the open cellular learning automata in stationary external environments converges to a stable and compatible configuration; an application of this new model to image segmentation is also presented.
Distributed learning automata-based scheme for classification using novel pursuit scheme
- Computer Science · Applied Intelligence
- 2020
A novel pursuit LA is developed which can be seen as the counterpart of the family of pursuit LA developed for stochastic environments, and is able to perfectly separate both simple and complex patterns outperforming existing classifiers, without the need of any “kernel trick”.
A Mathematical Framework for Cellular Learning Automata
- Computer Science · Adv. Complex Syst.
- 2004
This paper first provides a mathematical framework for cellular learning automata and then studies its convergence behavior, showing that for a class of rules, called commutative rules, the cellular learning automata converges to a stable and compatible configuration.
Recent advances in Learning Automata systems
- Computer Science · 2010 2nd International Conference on Computer Engineering and Technology
- 2010
An overview of the field of Stochastic Learning Automata is presented, and it is explained how LA can be designed by discretizing the probability space, and the design and analysis of both continuous and discretized LA are described.
Open Synchronous Cellular Learning Automata
- Computer Science, Biology
- 2003
This paper introduces open cellular learning automata and then studies their steady-state behavior, showing that for a class of rules called commutative rules, the open cellular learning automata in stationary external environments converges to a stable and compatible configuration.
A learning automata-based adaptive uniform fractional guard channel algorithm
- Computer Science · The Journal of Supercomputing
- 2014
The proposed algorithm uses a learning automaton to specify the acceptance/rejection of incoming new calls and it is shown that the given adaptive algorithm converges to an equilibrium point which is optimal for uniform fractional channel policy.
References
Showing 1–10 of 94 references
An accelerated learning algorithm for multilayer perceptron networks
- Computer Science · IEEE Trans. Neural Networks
- 1994
An accelerated learning algorithm (ABP, adaptive back propagation) is proposed for the supervised training of multilayer perceptron networks; compared to other competing methods it offers superior convergence speed for analog problems only, as well as reduced sensitivity to variations of the algorithm's step-size parameter.
Speed up learning and network optimization with extended back propagation
- Computer Science · Neural Networks
- 1993
Methods to speed up error back-propagation learning algorithm
- Computer Science · CSUR
- 1995
A modification to the EBP algorithm in which the gradients are rescaled at every layer helped to improve performance, and using the expected output of a neuron instead of its actual output when correcting weights improved the performance of the momentum strategy.
Optimum learning rate for backpropagation neural networks
- Computer Science · Proceedings of Canadian Conference on Electrical and Computer Engineering
- 1993
An optimum, time-varying learning rate for multilayer BP networks is analytically derived and results show that training time can be reduced significantly while not causing any oscillations during the training process.
On the Problem of Local Minima in Backpropagation
- Computer Science · IEEE Trans. Pattern Anal. Mach. Intell.
- 1992
A theoretical framework for backpropagation (BP) is proposed and it is proven in particular that the convergence holds if the classes are linearly separable and that multilayered neural networks (MLNs) exceed perceptrons in generalization to new examples.
Pattern-recognizing stochastic learning automata
- Computer Science · IEEE Transactions on Systems, Man, and Cybernetics
- 1985
A class of learning tasks is described that combines aspects of learning automaton tasks and supervised learning pattern-classification tasks. These tasks are called associative reinforcement…
Accelerating the convergence of the back-propagation method
- Computer Science · Biological Cybernetics
- 2004
Considering the selection of weights in neural nets as a problem in classical nonlinear optimization theory, the rationale for algorithms seeking only those weights that produce the globally minimum error is reviewed and rejected.
A direct adaptive method for faster backpropagation learning: the RPROP algorithm
- Computer Science · IEEE International Conference on Neural Networks
- 1993
A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed that performs a local adaptation of the weight-updates according to the behavior of the error function to overcome the inherent disadvantages of pure gradient-descent.
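The local adaptation RPROP performs can be sketched as follows. This is a simplified, illustrative variant of the sign-based idea only (it grows each weight's step size while the gradient keeps its sign and shrinks it on a sign flip); the original algorithm additionally suppresses the weight update immediately after a sign change, which is omitted here, and the constants are the commonly cited defaults:

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One simplified RPROP-style update on a weight array.

    Each weight has its own step size, adapted from the *sign* of its
    gradient across iterations; gradient magnitude is ignored.
    """
    sign_change = grad * prev_grad
    # Same sign twice: the step was safe, so accelerate (capped at step_max).
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    # Sign flipped: a minimum was overshot, so back off (floored at step_min).
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Move each weight against its gradient sign by its own step size.
    w = w - np.sign(grad) * step
    return w, step
```

Because the update depends only on gradient signs, it sidesteps the problem of choosing one global learning rate for layers whose gradient magnitudes differ by orders of magnitude.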
Increased rates of convergence through learning rate adaptation
- Computer Science · Neural Networks
- 1988
Learning Automata: Theory and Applications
- Computer Science
- 1994
The connection between two-level adaptive control and the bilinear programming problem is studied, together with a two-level hierarchical system of learning automata that uses a projectional stochastic approximation algorithm.