Zhiqiong Shao

In this paper, we prove that the online gradient method for continuous perceptrons converges in finitely many steps when the training patterns are linearly separable. © 2003 Elsevier Ltd. All rights reserved. Neural networks have been widely used for solving supervised classification problems. In this paper, we consider the simplest feedforward neural network, the …
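For intuition, here is a minimal Python sketch of an online gradient method for a continuous (sigmoid-activation) perceptron on linearly separable patterns, with weights updated after each pattern. The function names, learning rate, loss (squared error), and toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def online_gradient_train(X, y, eta=0.5, max_epochs=1000):
    """Online gradient descent: update weights after every pattern."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, target in zip(X, y):          # one pattern at a time
            out = sigmoid(w @ xi)
            # gradient of the squared error for this single pattern
            grad = (out - target) * out * (1.0 - out) * xi
            w -= eta * grad
            if (out >= 0.5) != bool(target):
                errors += 1
        if errors == 0:                       # all patterns classified correctly
            break
    return w

# Linearly separable toy data (AND function), bias input appended
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([0, 0, 0, 1])
w = online_gradient_train(X, y)
print("learned weights:", w)
```

On separable data like this, the loop exits once every pattern falls on the correct side of the threshold, mirroring the finite-step convergence the abstract describes.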
In this paper, we present a new training algorithm for a fuzzy perceptron. When the input vectors are two-dimensional and the training examples are separable, we prove finite convergence, i.e., the training procedure for the network weights stops after finitely many steps. When the dimension is greater than two, stronger conditions are …
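The paper's fuzzy training algorithm is only summarized above and is not reproduced here. As a point of comparison, the classic crisp perceptron rule below exhibits the same kind of finite convergence on separable 2-D data, by the perceptron convergence theorem; the data and names are illustrative assumptions.

```python
import numpy as np

def perceptron_train(X, y):
    """Classic perceptron rule; y in {-1, +1}, X separable, bias column included."""
    w = np.zeros(X.shape[1])
    steps = 0
    while True:
        mistakes = 0
        for xi, t in zip(X, y):
            if t * (w @ xi) <= 0:    # misclassified (or on the boundary)
                w += t * xi          # perceptron update
                mistakes += 1
                steps += 1
        if mistakes == 0:            # separable => reached in finitely many steps
            return w, steps

# Separable 2-D examples with a bias input appended
X = np.array([[1., 2., 1.], [2., 3., 1.], [-1., -1., 1.], [-2., 0., 1.]])
y = np.array([1, 1, -1, -1])
w, steps = perceptron_train(X, y)
print("weights:", w, "updates:", steps)
```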
A survey is presented of some recent developments in the convergence of online gradient methods for feedforward neural networks such as BP neural networks. Unlike most convergence results, which are of a probabilistic and non-monotone nature, the convergence results shown here are deterministic and monotone. Also considered are the cases …
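A hedged sketch of the setting the survey treats: a small one-hidden-layer BP network trained by an online gradient method, i.e., weights updated after every training pattern rather than after a full batch. The architecture, learning rate, random seed, and XOR task are illustrative assumptions, not taken from the survey, and convergence of this toy run can depend on the initialization.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR with a bias input appended to every pattern
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([0., 1., 1., 0.])

H = 4                                    # hidden units (assumption)
W1 = rng.normal(scale=1.0, size=(3, H))  # input(+bias) -> hidden
W2 = rng.normal(scale=1.0, size=H + 1)   # hidden(+bias) -> output
eta = 0.5

for epoch in range(20000):
    for xi, t in zip(X, y):              # online: one pattern per update
        h = sigmoid(xi @ W1)
        hb = np.append(h, 1.0)           # hidden activations plus bias
        out = sigmoid(hb @ W2)
        # backpropagated squared-error gradients
        d_out = (out - t) * out * (1 - out)
        d_h = d_out * W2[:-1] * h * (1 - h)
        W2 -= eta * d_out * hb
        W1 -= eta * np.outer(xi, d_h)

for xi in X:
    out = sigmoid(np.append(sigmoid(xi @ W1), 1.0) @ W2)
    print(xi[:2], "->", round(float(out), 3))
```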