Penalty methods have been commonly used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. Weight boundedness and convergence results are presented for the batch BP algorithm with penalty for training feedforward neural networks with a hidden layer. A key point of the proofs is the…
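The combination described above can be sketched concretely: the penalty is taken here as an L2 (weight-decay) term added to the batch mean squared error, which is the standard choice, though the abstract is truncated before the exact form is stated. All names and hyperparameters below are illustrative, not from the paper.

```python
import numpy as np

def batch_bp_penalty(X, y, hidden=4, lam=1e-3, lr=0.1, epochs=2000, seed=0):
    """Batch BP for a one-hidden-layer sigmoid network, with an L2 penalty
    lam * (||W1||^2 + ||W2||^2) added to the batch mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        H = sig(X @ W1)                      # hidden-layer activations
        out = sig(H @ W2)                    # network outputs, shape (n, 1)
        err = out - y
        loss = np.mean(err ** 2) + lam * ((W1 ** 2).sum() + (W2 ** 2).sum())
        losses.append(loss)
        # backpropagate the batch error through both layers
        d2 = 2 * err * out * (1 - out) / len(X)
        d1 = (d2 @ W2.T) * H * (1 - H)
        # gradient step on the penalized error; the lam terms are the
        # penalty's gradient, which is what keeps the weights bounded
        W2 -= lr * (H.T @ d2 + 2 * lam * W2)
        W1 -= lr * (X.T @ d1 + 2 * lam * W1)
    return W1, W2, losses
```

On a small dataset such as XOR, the penalized error decreases over training while the weight norms stay bounded, which is the behavior the boundedness and convergence theorems formalize.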
In this paper, a new backpropagation (BP) algorithm with adaptive momentum is proposed, in which the momentum coefficient is adjusted iteratively based on the current descent direction and the weight increment from the previous iteration. A convergence result is presented for the algorithm when it is used for training feedforward neural networks (FNNs) with a…
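One plausible reading of such an adaptive rule, sketched on a generic gradient oracle rather than a full network: reuse the previous weight increment as momentum only while it still agrees with the current descent direction. The paper's exact formula is truncated above, so the rule and all names below are assumptions for illustration.

```python
import numpy as np

def adaptive_momentum_descent(grad, w0, lr=0.1, beta_max=0.5, steps=200):
    """Gradient descent with an adaptively chosen momentum coefficient:
    the previous increment dw_prev contributes only when it is still a
    descent direction for the current gradient (a hedged sketch, not the
    paper's exact rule)."""
    w = np.asarray(w0, dtype=float)
    dw_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        # dw_prev is a descent direction iff its inner product with the
        # current gradient is negative; otherwise switch momentum off
        beta = beta_max if g @ dw_prev < 0 else 0.0
        dw = -lr * g + beta * dw_prev
        w = w + dw
        dw_prev = dw
    return w
```

Switching the momentum off whenever the old increment starts to oppose the gradient is what prevents the oscillations that a fixed momentum coefficient can cause.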
Human chromosome 12 contains more than 1,400 coding genes and 487 loci that have been directly implicated in human disease. The q arm of chromosome 12 contains one of the largest blocks of linkage disequilibrium found in the human genome. Here we present the finished, high-quality sequence of human chromosome 12, which spans…
The gradient method is a simple and popular learning algorithm for feedforward neural network (FNN) training. Some strong convergence results for both batch and online gradient methods are established on the basis of existing weak convergence results. In particular, it is shown that for gradient-penalty algorithms, strong convergence results are immediate consequences…
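The batch/online distinction the abstract draws can be made concrete in a few lines; for brevity a linear least-squares model stands in for the network, since the two update schemes differ in the same way regardless of the model.

```python
import numpy as np

def batch_gradient_step(w, X, y, lr):
    """One batch step: a single update using the gradient of the mean
    squared error over ALL training samples."""
    g = 2 * X.T @ (X @ w - y) / len(X)
    return w - lr * g

def online_gradient_epoch(w, X, y, lr):
    """One online epoch: the weights are updated after EACH sample in
    turn, using only that sample's error gradient."""
    for x_i, y_i in zip(X, y):
        g_i = 2 * x_i * (x_i @ w - y_i)
        w = w - lr * g_i
    return w
```

On consistent data both schemes drive the weights to the same least-squares solution; the convergence analyses differ because the online iterates do not follow the true batch gradient at any single step.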
Convergence results are presented for the batch backpropagation algorithm with variable learning rates for training feedforward neural networks with a hidden layer. The monotonicity of the error function over the training iterations is also proved.
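A simple way to see why variable learning rates pair naturally with a monotonicity result: if the rate at each iteration is shrunk until the error actually decreases, the error sequence is non-increasing by construction. The backtracking rule below is a generic illustration of this idea, not the paper's specific rate schedule.

```python
import numpy as np

def descent_with_variable_lr(loss, grad, w0, lr0=1.0, shrink=0.5, steps=50):
    """Batch descent with a variable learning rate: each iteration halves
    the step size until the error does not increase, so the recorded error
    sequence is monotonically non-increasing."""
    w = np.asarray(w0, dtype=float)
    history = [loss(w)]
    for _ in range(steps):
        g = grad(w)
        lr = lr0
        # backtrack: shrink the rate until a strict-or-equal descent step
        while lr > 1e-12 and loss(w - lr * g) >= history[-1]:
            lr *= shrink
        w = w - lr * g
        history.append(loss(w))
    return w, history
```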
The online gradient algorithm has been widely used for training feedforward neural networks, and adding a penalty term is a common and popular method for improving the generalization performance of networks. In this paper, a convergence theorem is proved for the online gradient learning algorithm with penalty, a term proportional to the magnitude of the…
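Combining the two ingredients above, each per-sample update carries the gradient of the penalty alongside the sample's error gradient. The abstract is truncated before the penalty's exact form, so an L2 term is assumed here, and a linear model again stands in for the network.

```python
import numpy as np

def online_gradient_penalty_epoch(w, X, y, lr, lam):
    """One online epoch in which every per-sample step also descends the
    gradient of an assumed penalty term lam * ||w||^2 (weight decay)."""
    for x_i, y_i in zip(X, y):
        # sample error gradient plus the penalty's gradient 2*lam*w
        g_i = 2 * x_i * (x_i @ w - y_i) + 2 * lam * w
        w = w - lr * g_i
    return w
```

The penalty gradient pulls the weights toward zero at every step, which is what keeps the online iterates bounded in the convergence analysis.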
In this paper, a squared penalty term is added to the conventional error function to improve the generalization of neural networks. A weight boundedness theorem and two convergence theorems are proved for the gradient learning algorithm with penalty when it is used for training a two-layer feedforward neural network. To illustrate the above theoretical…