Hongmei Shao

In this paper, a new back-propagation (BP) algorithm with adaptive momentum is proposed, where the momentum coefficient is adjusted iteratively based on the current descent direction and the weight increment from the last iteration. A convergence result is presented when the algorithm is used to train feedforward neural networks (FNNs) with a …
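The update described above can be sketched as follows. This is an illustrative implementation, not the paper's exact rule: the momentum coefficient here is derived from the alignment between the current descent direction and the previous weight increment, and the names `eta` and `mu_max` are assumed parameters.

```python
import numpy as np

def adaptive_momentum_step(w, grad, dw_prev, eta=0.1, mu_max=0.9):
    """One BP update with an adaptively chosen momentum coefficient.

    Sketch only: the coefficient mu is scaled by how well the previous
    weight increment dw_prev agrees with the current descent direction
    -grad, clipped to [0, mu_max]; the paper's exact rule may differ.
    """
    d = -grad
    denom = np.dot(dw_prev, dw_prev)
    if denom > 0.0:
        mu = float(np.clip(np.dot(d, dw_prev) / denom, 0.0, mu_max))
    else:
        mu = 0.0  # first iteration: no previous increment, plain gradient step
    dw = eta * d + mu * dw_prev
    return w + dw, dw
```

On a simple quadratic objective, iterating this step drives the weights toward the minimizer while the momentum term is automatically suppressed whenever the previous increment opposes the current descent direction.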
Penalty methods are commonly used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. Weight-boundedness and convergence results are presented for the batch BP algorithm with a penalty term for training feedforward neural networks with a hidden layer. A key point of the proofs is the …
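A minimal sketch of a batch gradient method with a penalty term is shown below. An L2 (weight-decay) penalty and a linear model stand in for the paper's one-hidden-layer network; `eta`, `lam`, and `epochs` are assumed parameter names.

```python
import numpy as np

def batch_bp_penalty(w, X, y, eta=0.05, lam=0.01, epochs=200):
    """Batch gradient descent on E(w) = 0.5*||Xw - y||^2 + 0.5*lam*||w||^2.

    The penalty term lam*w in the gradient keeps the weight magnitudes
    bounded during training, which is the property the paper analyzes.
    """
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) + lam * w  # full-batch gradient + penalty
        w = w - eta * grad
    return w
```

For this quadratic objective the iterates converge to the regularized normal-equation solution, so the result can be checked in closed form.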
In this paper, we study the convergence of an online gradient method with an inner-product penalty and adaptive momentum for feedforward neural networks, assuming that the training samples are permuted stochastically in each cycle of iteration. Both two-layer and three-layer neural network models are considered, and two convergence theorems are established.
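The stochastic-permutation assumption can be sketched as follows: within each training cycle the samples are visited in a fresh random order, and the update is applied one sample at a time. This is a simplified stand-in, with an L2 penalty in place of the inner-product penalty, no momentum term, and a linear model in place of the network; `rng`, `eta`, `lam`, and `cycles` are assumed names.

```python
import numpy as np

def online_gd_cycle(w, X, y, rng, eta=0.05, lam=0.01, cycles=100):
    """Online (per-sample) gradient method with a penalty term.

    Each cycle visits the n training samples in a new random
    permutation, matching the stochastic-order assumption in the paper.
    """
    n = len(y)
    for _ in range(cycles):
        for i in rng.permutation(n):        # fresh random order each cycle
            err = X[i] @ w - y[i]
            grad = err * X[i] + lam * w     # per-sample gradient + penalty
            w = w - eta * grad
    return w
```

After enough cycles the iterates settle near the regularized least-squares solution, so the training residual becomes small.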
Human chromosome 12 contains more than 1,400 coding genes and 487 loci that have been directly implicated in human disease. The q arm of chromosome 12 contains one of the largest blocks of linkage disequilibrium found in the human genome. Here we present the sequence of human chromosome 12, which has been finished to high quality and spans …
The gradient method is a simple and popular learning algorithm for feedforward neural network (FNN) training. Some strong convergence results for both batch and online gradient methods are established on the basis of existing weak convergence results. In particular, it is shown that for gradient-penalty algorithms, strong convergence results are immediate consequences …
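The two convergence notions can be observed numerically: weak convergence means the gradient norms tend to zero, while strong convergence means the weight iterates themselves converge (successive increments vanish). The sketch below tracks both for a gradient-penalty method on a quadratic stand-in; the function name and parameters are illustrative.

```python
import numpy as np

def train_and_track(w, X, y, eta=0.05, lam=0.1, steps=300):
    """Run a gradient-penalty method and record both convergence signals.

    Returns the final weights, the gradient-norm history (weak
    convergence) and the successive-increment history (strong
    convergence, i.e. convergence of the weight sequence itself).
    """
    grad_norms, increments = [], []
    for _ in range(steps):
        grad = X.T @ (X @ w - y) + lam * w
        w_new = w - eta * grad
        grad_norms.append(np.linalg.norm(grad))
        increments.append(np.linalg.norm(w_new - w))
        w = w_new
    return w, grad_norms, increments
```

Because the penalty makes the objective strongly convex here, both histories decay geometrically, consistent with strong convergence following from the penalized formulation.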