An optimized recursive learning algorithm for three-layer feedforward neural networks for MIMO nonlinear system identifications

@article{Sha2011AnOR,
  title={An optimized recursive learning algorithm for three-layer feedforward neural networks for {MIMO} nonlinear system identifications},
  author={Daohang Sha and Vladimir B. Bajic},
  journal={ArXiv},
  year={2011},
  volume={abs/1004.1997}
}
Back-propagation with the gradient method is the most popular learning algorithm for feed-forward neural networks. However, choosing a proper fixed learning rate for the algorithm is critical. In this paper, an optimized recursive algorithm for online learning is derived analytically from matrix operations and optimization methods, which avoids the need to select a learning rate for the gradient method. A proof of weak convergence of the proposed algorithm is also given.
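The core idea — computing the step size analytically each iteration instead of fixing a learning rate — can be illustrated on a simpler problem. The sketch below is not the authors' recursive algorithm for three-layer networks; it is steepest descent with an exact (closed-form) line-search step on a linear least-squares loss, which is the standard setting where an optimal learning rate can be derived analytically.

```python
import numpy as np

# Steepest descent on f(w) = 0.5 * ||X w - y||^2 with the step size
# computed in closed form each iteration (no fixed learning rate).
# For this quadratic loss, the exact line-search step along -g is
#   eta = ||g||^2 / ||X g||^2.

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
for _ in range(100):
    g = X.T @ (X @ w - y)      # gradient of f at w
    Xg = X @ g
    denom = Xg @ Xg
    if denom < 1e-12:          # gradient numerically zero: converged
        break
    eta = (g @ g) / denom      # analytically optimal step for this quadratic
    w = w - eta * g

print(np.round(w, 4))          # close to w_true
```

With the step size recomputed at every iteration, no tuning is needed; a fixed learning rate would either converge slowly (too small) or diverge (too large) on the same problem.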
Citations

Designing stable neural identifier based on Lyapunov method
This paper suggests an adaptive gradient descent algorithm with stable learning laws for a modified dynamic neural network (MDNN) and studies the stability of the algorithm.
Zhang Peng: A recursive algorithm based on fuzzy neural network for target
To overcome this deficiency, a compound fuzzy neural network named IMJ was introduced, which combines the process's rule-based reasoning and function-approximation capabilities.
A channel quality indicator (CQI) prediction scheme using feed forward neural network (FF-NN) technique for MU-MIMO LTE system
A MU-MIMO CQI prediction scheme is recommended to improve the tradeoff between BER and SE; it uses an FF-NN algorithm to train and achieve enhanced CQI values.
Cystoscopic Image Classification Based on Combining MLP and GA
An adaptive method for determining the learning rate was presented to improve the multilayer neural network; it achieved a 7% decrease in error and faster convergence in the classification of cystoscopy images compared with the other competing methods.
Similarity Calculation Algorithm for Intelligent Electronic Customer Service Problems
Experimental results show that the dynamic weighted self-attention text similarity calculation model is superior to existing models in calculation accuracy and running time, and offers a useful reference for the development of similar intelligent electronic customer service systems.

References

Showing 1-10 of 40 references
A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction
  • C. Chen, J. Wan
  • IEEE Trans. Syst. Man Cybern. Part B
  • 1999
A fast learning algorithm is proposed to find optimal weights of flat neural networks (especially the functional-link network) using a linear least-squares method; results indicate that the proposed model is very attractive for real-time processes.
On-line adaptive learning rate BP algorithm for MLP and application to an identification problem
An on-line algorithm with an adaptive learning rate, based on a convergence analysis of the conventional gradient descent method for three-layer BP neural networks, is proposed.
Nonlinear system modeling by competitive learning and adaptive fuzzy inference system
A new adaptive fuzzy inference system, combined with a learning algorithm, is proposed to cope with problems such as the conflict between overfitting and good generalization, and low reliability.
A multilayer neural network with piecewise-linear structure and back-propagation learning
  • R. Batruni
  • IEEE Trans. Neural Networks
  • 1991
A multilayer neural network with a two-layer piecewise-linear structure for every cascaded section is proposed; it specializes in functional approximation and is anticipated to have applications in control, communications, and pattern recognition.
A new back-propagation algorithm with coupled neuron
  • M. Fukumi, S. Omatu
  • International 1989 Joint Conference on Neural Networks
  • 1989
A novel algorithm, CNR (coupled neuron rule), is developed for training multilayer fully connected feedforward networks of coupled neurons with both sigmoid and signum functions; it takes advantage of the key ideas of both backpropagation and MRII.
A fuzzy neural network controller with adaptive learning rates for nonlinear slider-crank mechanism
The robust control performance and learning ability of the proposed FNN controller with adaptive learning rates are demonstrated by simulation and experimental results.
An on-line hybrid learning algorithm for multilayer perceptron in identification problems
A hybrid learning algorithm for multilayer perceptrons (MLPs), with pattern-by-pattern training based on optimized instantaneous learning rates and the recursive least-squares method, is proposed; it can substantially speed up MLP learning while preserving the stability of the learning process.
Convergence of gradient method with momentum for two-layer feedforward neural networks
This work proves weak and strong convergence results, as well as convergence rates for the error function and for the weights, of a gradient method with a constant momentum term for two-layer feedforward neural networks.
Deterministic convergence of an online gradient method for BP neural networks
This paper proves a convergence theorem for an online gradient method with variable step size for back-propagation neural networks with a hidden layer; the convergence is deterministic and monotone.
Optimum learning rate for backpropagation neural networks
An optimum, time-varying learning rate for multilayer BP networks is analytically derived; results show that training time can be reduced significantly without causing oscillations during the training process.