## Multiple optimal learning factors for feed-forward networks

- Sanjeev S. Malalur, Michael T. Manry
- Published 2010

A batch training algorithm for feed-forward networks is proposed which uses Newton’s method to estimate a vector of optimal learning factors, one for each hidden unit. Backpropagation, using this learning factor vector, is used to modify each hidden unit’s input weights. Linear equations are then solved for the network’s output weights. Elements of the new method’s Gauss-Newton Hessian matrix are shown to be weighted sums of elements from the full network’s Hessian. In several examples, the new method performs better than backpropagation and conjugate gradient, with a similar number of required multiplies. The method performs as well as or better than Levenberg-Marquardt, with several orders of magnitude fewer multiplies owing to the small size of its Hessian.
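A minimal NumPy sketch of the idea described in the abstract: backpropagation supplies a descent direction for the hidden-unit input weights, a small Gauss-Newton system (one unknown learning factor per hidden unit) is solved by Newton's method, and the output weights are then obtained from linear least squares. The network sizes, toy data, and variable names are illustrative assumptions, not taken from the paper, and biases are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (illustrative sizes, not from the paper):
# Np patterns, N inputs, M outputs, Nh hidden units
Np, N, M, Nh = 200, 4, 1, 8
X = rng.standard_normal((Np, N))
T = np.sin(X @ rng.standard_normal((N, M)))      # smooth target

Wh = 0.1 * rng.standard_normal((Nh, N))          # hidden-unit input weights
Wo = 0.1 * rng.standard_normal((M, Nh))          # output weights

def forward(Wh, Wo, X):
    H = np.tanh(X @ Wh.T)                        # hidden activations (Np, Nh)
    return H, H @ Wo.T                           # outputs (Np, M)

mse = lambda Y: np.mean((T - Y) ** 2)

losses = []
for it in range(20):
    H, Y = forward(Wh, Wo, X)
    E = T - Y
    losses.append(mse(Y))

    # Backpropagation: negative MSE gradient w.r.t. hidden weights,
    # one row of G per hidden unit (the descent direction)
    delta = (E @ Wo) * (1.0 - H ** 2)            # (Np, Nh)
    G = (2.0 / Np) * delta.T @ X                 # (Nh, N)

    # Sensitivity of each hidden net function to its learning factor z_k
    # at z = 0:  d y_p / d z_k = Wo[:, k] * tanh'(net_pk) * (G_k . x_p)
    S = (1.0 - H ** 2) * (X @ G.T)               # (Np, Nh)

    # Small Nh x Nh Gauss-Newton Hessian and gradient for the z vector
    Hz = np.zeros((Nh, Nh))
    gz = np.zeros(Nh)
    for k in range(Nh):
        gz[k] = (2.0 / Np) * np.sum((E @ Wo[:, k]) * S[:, k])
        for l in range(Nh):
            Hz[k, l] = (2.0 / Np) * (Wo[:, k] @ Wo[:, l]) * (S[:, k] @ S[:, l])

    # Newton's method for the vector of learning factors
    # (small ridge term added for numerical stability)
    z = np.linalg.solve(Hz + 1e-6 * np.eye(Nh), gz)
    Wh = Wh + z[:, None] * G                     # per-unit scaled backprop step

    # Output weights from linear equations (least squares on the new basis)
    H, _ = forward(Wh, Wo, X)
    Wo = np.linalg.lstsq(H, T, rcond=None)[0].T
```

Note the size of the Newton system: `Hz` is only `Nh × Nh`, whereas Levenberg-Marquardt factors a Hessian over all network weights, which is the source of the multiply-count savings claimed in the abstract.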

```bibtex
@inproceedings{Malalur2010MultipleOL,
  title  = {Multiple optimal learning factors for feed-forward networks},
  author = {Sanjeev S. Malalur and Michael T. Manry},
  year   = {2010}
}
```