Magnified gradient function with deterministic weight modification in adaptive learning

@article{Ng2004MagnifiedGF,
  title={Magnified gradient function with deterministic weight modification in adaptive learning},
  author={Sin Chun Ng and Chi-Chung Cheung and Shu Hung Leung},
  journal={IEEE Transactions on Neural Networks},
  year={2004},
  volume={15},
  pages={1411--1423}
}

This work presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered…
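
To make the magnified-gradient idea concrete, here is a minimal sketch of an output-layer delta computation in BP, assuming a sigmoid activation and a hypothetical magnification exponent S >= 1; the symbol S and the exact exponent form are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_delta(output, target, S=2.0):
    """Output-layer delta with a magnified gradient term.

    Standard BP uses the sigmoid derivative o*(1 - o), which vanishes
    when the output saturates near 0 or 1 and stalls learning.
    Raising it to the power 1/S (S >= 1) magnifies small gradients;
    S = 1 recovers standard backpropagation.  The exponent form is an
    assumption for illustration, not the paper's exact update rule.
    """
    error = target - output
    grad = output * (1.0 - output)       # standard sigmoid derivative
    return error * grad ** (1.0 / S)     # magnified gradient function

# A saturated unit (output near 1, target 0) still gets a usable delta:
o, t = 0.999, 0.0
print(magnified_delta(o, t, S=1.0))  # standard BP: delta is nearly zero
print(magnified_delta(o, t, S=4.0))  # magnified: delta is orders larger
```

With S = 1 the delta at a saturated unit is on the order of 1e-3, while S = 4 yields roughly 0.18, illustrating how magnifying the gradient keeps learning from stalling in flat regions of the activation function.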