Walter H. Delashmit

To facilitate complexity optimization in feedforward networks, several algorithms are developed that combine growing and pruning. First, a growing scheme is presented that iteratively adds new hidden units to fully trained networks. Then, a non-heuristic one-pass pruning technique is presented, which utilizes orthogonal least squares. Based upon …
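The orthogonal-least-squares idea behind this kind of one-pass pruning can be sketched as follows: treat each hidden unit's output over the training set as a column of a matrix, orthogonalize the columns greedily, and rank units by the fraction of the target's energy each orthogonalized column explains. This is a minimal illustrative sketch of the OLS ranking step, not the paper's exact algorithm; the function name and tolerance are assumptions.

```python
import numpy as np

def ols_rank_hidden_units(H, y):
    """Rank hidden-unit output columns of H (n_samples x n_units) by the
    error-reduction ratio each contributes to fitting y, using one pass of
    modified Gram-Schmidt orthogonal least squares. Units ranked last are
    pruning candidates."""
    H = H.astype(float).copy()
    y = y.astype(float)
    remaining = list(range(H.shape[1]))
    order, ratios = [], []
    yty = y @ y
    while remaining:
        # pick the remaining (already orthogonalized) column that explains
        # the largest fraction of the target's energy
        best_j, best_err = None, -1.0
        for j in remaining:
            w = H[:, j]
            denom = w @ w
            if denom < 1e-12:          # column numerically dependent: skip
                continue
            err = (w @ y) ** 2 / (denom * yty)
            if err > best_err:
                best_j, best_err = j, err
        if best_j is None:
            break
        order.append(best_j)
        ratios.append(best_err)
        remaining.remove(best_j)
        # orthogonalize the rest against the chosen column
        w = H[:, best_j]
        for j in remaining:
            H[:, j] -= (w @ H[:, j]) / (w @ w) * w
    return order, ratios
```

Because the ranking is computed in a single sweep over the columns, no retraining is needed between pruning decisions, which is what makes the approach one-pass.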
Piecewise linear networks (PLNs) are attractive because they can be trained quickly and provide good performance in many nonlinear approximation problems. Most existing design algorithms for piecewise linear networks are non-convergent, non-optimal, or not designed to handle noisy data. In this paper, four algorithms are presented which attack this …
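A piecewise linear network of the kind described here can be pictured as a set of local affine models, one per region of input space. The sketch below uses nearest-center assignment to pick the region; the class name, the centers, and the per-region parameters are illustrative placeholders, not the design algorithms from the paper.

```python
import numpy as np

class PiecewiseLinearNet:
    """Minimal PLN sketch: inputs are assigned to the nearest region
    center, and each region applies its own affine map."""

    def __init__(self, centers, weights, biases):
        self.centers = np.asarray(centers, float)  # (k, d) region centers
        self.weights = np.asarray(weights, float)  # (k, d) per-region weights
        self.biases = np.asarray(biases, float)    # (k,)  per-region offsets

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, float))
        # squared distance from every sample to every region center
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        r = d2.argmin(axis=1)                      # region index per sample
        # apply the winning region's affine map
        return (X * self.weights[r]).sum(axis=1) + self.biases[r]
```

Training such a model amounts to choosing the regions and fitting each affine map, which is where convergence and noise-robustness issues arise in the existing algorithms the abstract criticizes.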
Due to the chaotic nature of multilayer perceptron training, training error usually fails to be a monotonically nonincreasing function of the number of hidden units. New training algorithms are developed in which weights and thresholds from a well-trained smaller network are used to initialize a larger network. Methods are also developed to reduce the total …
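The initialization idea can be sketched concretely for a one-hidden-layer MLP: copy the trained weights into the larger network and give the extra hidden units small random input weights with zero output weights, so the grown network starts exactly at the smaller network's solution. This is a hedged sketch of the common-starting-point idea, with assumed function and parameter names, not the paper's exact scheme.

```python
import numpy as np

def grow_hidden_layer(W1, b1, W2, b2, n_new, rng=None, scale=1e-2):
    """Build a one-hidden-layer MLP with n_new extra hidden units,
    initialized from a trained smaller net (W1: hidden x in, b1: hidden,
    W2: out x hidden, b2: out). Zero output weights for the new units
    leave the network's mapping, and hence its error, unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    n_in = W1.shape[1]
    n_out = W2.shape[0]
    W1_big = np.vstack([W1, scale * rng.standard_normal((n_new, n_in))])
    b1_big = np.concatenate([b1, np.zeros(n_new)])
    W2_big = np.hstack([W2, np.zeros((n_out, n_new))])
    return W1_big, b1_big, W2_big, b2.copy()

def forward(X, W1, b1, W2, b2):
    """Standard tanh-hidden-layer forward pass, for checking the init."""
    return np.tanh(X @ W1.T + b1) @ W2.T + b2
```

Starting the larger net at the smaller net's solution guarantees its initial training error equals the smaller net's final error, which is what makes error monotone in network size under this scheme.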
In this paper, three approaches are presented for generating and validating sequences of different-size neural nets. First, a growing method is given, along with several weight initialization methods and their properties. Then a one-pass pruning method is presented which utilizes orthogonal least squares. Based upon this pruning approach, a one-pass …