On Generalization by Neural Networks

@article{Kak1998OnGB,
  title={On Generalization by Neural Networks},
  author={Subhash C. Kak},
  journal={Inf. Sci.},
  year={1998},
  volume={111},
  pages={293-302}
}
  • S. Kak
  • Published 1 November 1998
  • Computer Science
  • Inf. Sci.

Citations

A class of instantaneously trained neural networks

  • S. Kak
  • Computer Science
    Inf. Sci.
  • 2002

Training of CC4 Neural Network with Spread Unary Coding

A modified CC4 algorithm is adapted to train neural networks on spread unary inputs, and it is shown that the number of misclassified points is not particularly sensitive to the chosen radius of generalization.
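
For context, spread unary coding represents a quantized value as a run of ones sliding through a field of zeros, so that nearby values share bit positions. A minimal sketch, assuming an illustrative convention (the run length `spread` and value range are parameters I have chosen, not the paper's):

```python
def spread_unary(value, levels, spread):
    """Encode an integer in [0, levels - 1] as `spread` consecutive ones
    sliding through a zero field; adjacent values then overlap in
    spread - 1 bit positions. Convention assumed for illustration."""
    width = levels + spread - 1
    bits = [0] * width
    bits[value:value + spread] = [1] * spread
    return bits

# spread_unary(2, 5, 3) -> [0, 0, 1, 1, 1, 0, 0]
```

The overlap between codes of nearby values is what lets a Hamming-distance-based rule such as CC4 generalize across adjacent quantization levels.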

FC Networks for Prediction Applications

These generalized networks, called FC networks, are compared against backpropagation and radial basis function networks and shown to perform excellently on time-series prediction and pattern recognition.

Instantaneously Trained Neural Networks

This paper presents a review of instantaneously trained neural networks (ITNNs). These networks trade learning time for size and, in the basic model, a new hidden node is created for each training sample.

A new corner classification approach to neural network training

The corner classification approach to neural network training has the excellent capability of prescriptive learning, where the network weights are prescribed merely by inspection of the training samples.
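
As a rough illustration of prescriptive learning, here is a sketch in the style of the CC4 rule as commonly described in Kak's work: one hidden node per training sample, input weights of +1 for a 1-bit and -1 for a 0-bit, a bias of r - s + 1 (s = number of ones in the sample, r = radius of generalization), and output weights of +1 or -1 by target class. This is an illustrative reconstruction with function names of my choosing, not necessarily this paper's exact algorithm.

```python
import numpy as np

def cc4_train(X, y, r=0):
    """Prescribe all weights of a CC4-style network by inspection.
    X: (n_samples, n_bits) binary inputs; y: (n_samples,) binary targets.
    r: radius of generalization. No iterative optimization is performed."""
    X = np.asarray(X)
    # One hidden node per sample: weight +1 where the stored bit is 1, -1 where 0.
    W_h = np.where(X == 1, 1, -1)
    # Bias r - s + 1, with s the number of ones in the stored sample.
    b_h = r - X.sum(axis=1) + 1
    # Output weights: +1 for class-1 samples, -1 for class-0 samples.
    w_o = np.where(np.asarray(y) == 1, 1, -1)
    return W_h, b_h, w_o

def cc4_predict(x, W_h, b_h, w_o):
    # A hidden node fires iff its stored sample is within Hamming distance r of x.
    h = (W_h @ np.asarray(x) + b_h > 0).astype(int)
    return int(h @ w_o > 0)
```

With this prescription a hidden node's net input works out to r - d + 1, where d is the Hamming distance between the input and the stored sample, so the node fires exactly when d <= r; that thresholded overlap is the source of the generalization.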

Delta Learning Rule for the Active Sites Model

The recently proposed Active Sites model is augmented with a delta rule to increase memory capacity, and the binary neural network is extended to a multi-level (non-binary) one.
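
For reference, the underlying delta rule is the standard error-correction step for a linear unit; a minimal sketch (the Active Sites machinery and the multi-level extension are not reproduced here):

```python
import numpy as np

def delta_rule_step(w, x, target, eta=0.1):
    """One delta-rule update for a single linear unit:
    w <- w + eta * (target - output) * x."""
    output = float(w @ x)
    return w + eta * (target - output) * x
```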

Instantaneous Learning Neural Networks.

This talk discusses several types of neural networks with the instantaneous learning property, including the CC4 Corner Classification neural network, and presents a comparison of their generalization performance.

Extracting Generalized Descriptions from a Binary Feedforward Network

This study develops a technique to extract generalized descriptions from the hidden layer weights of a binary feedforward network. We extend the Boolean-like training algorithm with recurrent ...

The Basic Kak Neural Network with Complex Inputs

This paper presents an introduction to the basic Kak network with complex inputs, which is part of a larger hierarchy of learning schemes that includes quantum models.

Instantaneously trained neural networks with complex inputs

The main contribution of this thesis is the 3C algorithm, which adapts the time-efficient corner classification approach for training feedforward neural networks to handle complex inputs using prescriptive learning, where the network weights are assigned simply upon examining the inputs.
...

References

Improving Generalization of Neural Networks Through Pruning

A technique for constructing neural network architectures with better ability to generalize is presented under the name Ockham's Razor: several networks are trained and then pruned by removing superfluous weights.

Optimal Brain Damage

This paper derives a class of practical and nearly optimal schemes for adapting the size of a neural network, using second-derivative information to trade off network complexity against training set error.
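
The scheme's central quantity is the per-weight saliency computed from second derivatives of the error E under a diagonal Hessian approximation; weights with the smallest saliency are removed:

```latex
s_k = \frac{1}{2}\, h_{kk}\, w_k^2,
\qquad h_{kk} = \frac{\partial^2 E}{\partial w_k^2}
```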

New algorithms for training feedforward neural networks

  • S. Kak
  • Computer Science
    Pattern Recognit. Lett.
  • 1994

On Hidden Nodes for Neural Nets

A proof that the maximum number of separable regions (M) in the input space is a function of both the number of hidden nodes (H) and the input space dimension (d).
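
The bound in question is the classical count of regions cut out by H hyperplanes in d-dimensional space:

```latex
M(H, d) = \sum_{k=0}^{d} \binom{H}{k}
```

For H <= d this reduces to M = 2^H, while for H > d the count grows only polynomially in H, on the order of H^d.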

On training feedforward neural networks

Another corner classification algorithm presented in this paper does not require any computations to find the weights, and in its basic form it does not perform generalization.

On design and evaluation of tapped-delay neural network architectures

It is shown that the generalization ability of the networks can be improved by pruning with the optimal brain damage method of Le Cun, Denker, and Solla, and a stopping criterion is formulated using a modified version of Akaike's final prediction error estimate.
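
For reference, Akaike's final prediction error in its standard form, for a model with p adjustable parameters fit to N samples with average squared training error V, is the quantity the stopping criterion builds on (the paper's modified version is not reproduced here):

```latex
\mathrm{FPE} = \frac{N + p}{N - p}\; V
```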

Backpropagation Applied to Handwritten Zip Code Recognition

This paper demonstrates how constraints from the task domain can be integrated into a backpropagation network through the network architecture; the approach is successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service.

Introduction to the theory of neural computation

This book is a detailed, logically developed treatment that covers the theory and uses of collective computational networks, including associative memory, feedforward networks, and unsupervised learning.
...