A gradient method with momentum for two-layer feedforward neural networks is considered. The learning rate is set to a constant and the momentum factor to an adaptive variable. Both weak and strong convergence results are proved, as well as convergence rates for the error function and for the weights. Compared to the existing convergence results, our …
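The scheme described in this abstract can be sketched as follows: a two-layer feedforward network trained by gradient descent with a constant learning rate and a momentum term. Since the truncated abstract does not give the paper's adaptive momentum formula, the rule used for `mu` below is a hypothetical placeholder, and the data, network sizes, and hyperparameters are illustrative.

```python
import numpy as np

# Minimal sketch: two-layer feedforward net, constant learning rate,
# momentum with an adaptive factor. The adaptive rule for mu is a
# hypothetical placeholder, not the paper's formula.

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = np.sin(X @ np.array([1.0, -0.5, 0.3]))[:, None]

W1 = 0.5 * rng.standard_normal((3, 5))   # input -> hidden weights
W2 = 0.5 * rng.standard_normal((5, 1))   # hidden -> output weights
dW1_prev = np.zeros_like(W1)
dW2_prev = np.zeros_like(W2)

eta = 0.05                               # constant learning rate
losses = []
for k in range(300):
    H = np.tanh(X @ W1)                  # hidden activations
    err = H @ W2 - y
    losses.append(0.5 * float(np.mean(err ** 2)))

    # Backpropagate the mean squared error
    gW2 = H.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)

    # Hypothetical adaptive momentum factor: damp momentum when gradients are large
    gnorm = np.sqrt(np.sum(gW1 ** 2) + np.sum(gW2 ** 2))
    mu = 0.9 / (1.0 + gnorm)

    # Momentum update: w_{k+1} = w_k - eta * g_k + mu_k * (w_k - w_{k-1})
    dW1 = -eta * gW1 + mu * dW1_prev
    dW2 = -eta * gW2 + mu * dW2_prev
    W1 += dW1
    W2 += dW2
    dW1_prev, dW2_prev = dW1, dW2
```

The update `w_{k+1} = w_k - eta * g_k + mu_k * (w_k - w_{k-1})` is the classical heavy-ball form; the convergence results in the abstract concern how such an iteration behaves when `mu_k` varies with the iterate.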
In this paper, a new back-propagation (BP) algorithm with adaptive momentum is proposed, in which the momentum coefficient is adjusted iteratively based on the current descent direction and the weight increment of the previous iteration. A convergence result is presented for the algorithm when it is used to train feedforward neural networks (FNNs) with a …
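A minimal sketch of the kind of adjustment the abstract describes, where the momentum coefficient depends on the current descent direction and the previous weight increment. The exact formula is not given in the truncated abstract, so the rule below is purely illustrative:

```python
import numpy as np

def adaptive_momentum(grad, dw_prev, mu_max=0.9, eps=1e-12):
    """Illustrative adaptive momentum coefficient (hypothetical rule).

    Keeps momentum only to the extent that the previous weight increment
    dw_prev still agrees with the current descent direction -grad; when
    the two oppose each other, the momentum term is switched off.
    """
    align = -np.dot(grad, dw_prev)  # > 0 when dw_prev still points downhill
    scale = max(0.0, align) / (np.linalg.norm(grad) * np.linalg.norm(dw_prev) + eps)
    return mu_max * scale

# Previous increment agrees with the current descent direction: near-full momentum
print(adaptive_momentum(np.array([1.0, 0.0]), np.array([-1.0, 0.0])))  # approximately 0.9
# Previous increment opposes it: momentum switched off
print(adaptive_momentum(np.array([1.0, 0.0]), np.array([1.0, 0.0])))   # 0.0
```

Tying the coefficient to the sign of `grad · dw_prev` is one way to keep the combined step a descent direction, which is typically the property needed for the kind of convergence proof the abstract mentions.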
In this paper, a squared penalty term is added to the conventional error function to improve the generalization of neural networks. A weight-boundedness theorem and two convergence theorems are proved for the gradient learning algorithm with penalty when it is used to train a two-layer feedforward neural network. To illustrate the above theoretical …
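The penalized error function described above can be written down directly. The sketch below uses a linear model for brevity (the paper treats a two-layer network), and the penalty coefficient `lam` is an illustrative value:

```python
import numpy as np

lam = 1e-3  # squared-penalty coefficient (illustrative value)

def penalized_loss_and_grad(W, X, y):
    """Conventional mean-squared error plus a squared (L2) penalty on the
    weights, E(W) = 0.5 * mean((X @ W - y)^2) + lam * ||W||^2, with its
    gradient. The penalty term keeps the weights bounded during training
    and discourages overfitting."""
    err = X @ W - y
    loss = 0.5 * np.mean(err ** 2) + lam * np.sum(W ** 2)
    grad = X.T @ err / len(X) + 2.0 * lam * W
    return loss, grad
```

A gradient step then reads `W -= eta * grad`; the extra `2 * lam * W` term is what pulls the weights toward zero (weight decay) and is the mechanism behind the weight-boundedness result the abstract mentions.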
BACKGROUND Chemotherapy-induced peripheral neuropathy (CIPN) seriously affects both the quality of life of patients with multiple myeloma (MM) and their response rate to chemotherapy. Acupuncture has a potential role in the treatment of CIPN, but to date no randomized clinical studies have analyzed the effectiveness of acupuncture …