Corpus ID: 6265598

Practical Characteristics of Neural Network and Conventional Pattern Classifiers on Artificial and Speech Problems

@inproceedings{Lee1989PracticalCO,
  title={Practical Characteristics of Neural Network and Conventional Pattern Classifiers on Artificial and Speech Problems},
  author={Yuchun Lee and Richard Lippmann},
  booktitle={NIPS},
  year={1989}
}
Eight neural net and conventional pattern classifiers (Bayesian-unimodal Gaussian, k-nearest neighbor, standard back-propagation, adaptive-stepsize back-propagation, hypersphere, feature-map, learning vector quantizer, and binary decision tree) were implemented on a serial computer and compared using two speech recognition and two artificial tasks. Error rates were statistically equivalent on almost all tasks, but classifiers differed by orders of magnitude in memory requirements, training time… 
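As a concrete illustration of that trade-off, here is a minimal k-nearest-neighbor classifier in Python/NumPy (a sketch for orientation, not the authors' implementation; the toy data and parameter choices are hypothetical). k-NN trains essentially instantly but must store every training example, whereas a trained back-propagation net is compact at classification time but slow to train.

import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    # Majority vote among the k nearest stored examples (Euclidean distance).
    # Memory grows with the training set -- the cost noted in the abstract.
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)   # distance to every stored example
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Toy usage: two well-separated 2-D Gaussian classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(knn_predict(X, y, np.array([[0.2, 0.1], [2.9, 3.2]]), k=5))  # expect [0 1]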

Citations

A Comparative Study of the Practical Characteristics of Neural Network and Conventional Pattern Classifiers

The results suggest that the selection of a classifier for a particular task should be guided not so much by small differences in error rate, but by practical considerations concerning memory usage, computational resources, ease of implementation, and restrictions on training and classification times.

HMM Speech Recognition with Neural Net Discrimination

Two approaches were explored which integrate neural net classifiers with Hidden Markov Model (HMM) speech recognizers. Both attempt to improve speech pattern discrimination while retaining the…

Pattern classification using neural networks

  • R. Lippmann
  • Computer Science
    IEEE Communications Magazine
  • 1989
The author extends a previous review and focuses on feed-forward neural-net classifiers for static patterns with continuous-valued inputs, examining probabilistic, hyperplane, kernel, and exemplar classifiers.

A critical overview of neural network pattern classifiers

  • R. Lippmann
  • Computer Science
    Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop
  • 1991
Results of experiments are presented which demonstrate that neural network classifiers provide error rates which are equivalent to, and sometimes lower than, those of more conventional Gaussian classifiers…

Handwritten Digit Recognition Using K Nearest-Neighbor, Radial-Basis Function, and Backpropagation Neural Networks

These results on a large problem with high-dimensional inputs demonstrate that practical considerations, including training time, memory usage, and classification time, often constrain classifier selection more strongly than small differences in overall error rate.

Comparison of kernel estimators, perceptrons and radial-basis functions for OCR and speech classification

It is found that perceptrons, when the architecture is suitable, generalise better than local, memory-based kernel estimators, but require longer training and more precise computation.

Comparison of Statistical and Neural Classifiers and Their Applications to Optical Character Recognition and Speech Classification, in Neural Network Systems Techniques and Applications (in print)

Well-known statistical and neural classification techniques are implemented on two datasets from these applications and compared in terms of generalization accuracy, memory requirements, and learning time.

On pattern classification using linear-output neural network classifiers

  • H. Osman, M. Fahmy
  • Computer Science
    Proceedings of 36th Midwest Symposium on Circuits and Systems
  • 1993
It is proved that, under reasonable assumptions, once training is complete, the minimized sample mean-square error equals the difference between the value of a familiar discriminant criterion evaluated in the network's target subspace and its value evaluated either in the network's hidden space or in the network's output subspace.

Classifying Process Behavior with Neural Networks: Strategies for Improved Training and Generalization

A network is proposed which uses Radial Basis Functions instead of sigmoid threshold units and is compared to the standard backpropagation network on a prototypical example problem.
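For contrast with sigmoid units, a Gaussian radial basis unit responds only near a stored center rather than on one side of a hyperplane. A minimal sketch of such a hidden layer (a generic formulation; the centers and width are assumptions, not values from the paper):

import numpy as np

def rbf_layer(X, centers, width):
    # Gaussian RBF activations: exp(-||x - c||^2 / (2 * width^2)) per unit.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    return np.exp(-d2 / (2.0 * width ** 2))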
...

References

Showing 1-10 of 17 references

Neural Net and Traditional Classifiers

It is demonstrated that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions.
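The XOR problem makes the disjoint-region claim concrete: the positive class occupies two separated corners of the unit square. The sketch below (a generic NumPy back-propagation loop, not the paper's code; the hidden-layer size and learning rate are arbitrary choices) trains a two-layer perceptron that separates them.

import numpy as np

# XOR targets: the positive class forms two disjoint regions.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # hidden layer, 4 sigmoid units
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(5000):                    # plain gradient descent on squared error
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward pass (chain rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

print(out.round(3).ravel())              # should approach [0 1 1 0]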

Pattern classification using neural networks

  • R. Lippmann
  • Computer Science
    IEEE Communications Magazine
  • 1989
The author extends a previous review and focuses on feed-forward neural-net classifiers for static patterns with continuous-valued inputs, examining probabilistic, hyperplane, kernel, and exemplar classifiers.

An adaptive training algorithm for back propagation networks

Classification and Data Analysis in Vector Spaces

Here, as in Chapters 3 and 5, we shall primarily be concerned with methods for making decisions. We shall assume that the primary pattern has already been coded to yield a vector containing numeric…

Increased rates of convergence through learning rate adaptation

Classification and Regression Trees

This chapter discusses tree classification in the context of medicine, where Right Sized Trees and Honest Estimates are considered and Bayes Rules and Partitions are used as guides to optimal pruning.

An Algorithm for Finding Best Matches in Logarithmic Expected Time

An algorithm and data structure are presented for searching a file containing N records, each described by k real valued keys, for the m closest matches or nearest neighbors to a given query record.
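This is the classic k-d tree search of Friedman, Bentley, and Finkel; SciPy's cKDTree implements the same idea, so the query pattern can be sketched as follows (the data here are random placeholders):

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
records = rng.random((10000, 4))      # N records, each with k = 4 real-valued keys
tree = cKDTree(records)               # build the tree once

query = rng.random(4)
dists, idx = tree.query(query, k=3)   # m = 3 nearest neighbors, O(log N) expected
print(idx, dists)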

Cross-Validatory Choice and Assessment of Statistical Predictions

A generalized form of the cross-validation criterion is applied to the choice and assessment of prediction using the data-analytic concept of a prescription. The examples used to illustrate…
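Reduced to its common k-fold form, the cross-validatory criterion can be sketched as follows (a generic outline, not the paper's notation; fit and predict are placeholder callables):

import numpy as np

def kfold_error(X, y, fit, predict, k=5, seed=0):
    # Hold out each of k folds in turn: fit on the rest, score on the
    # held-out fold, and average the resulting error rates.
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errs.append(np.mean(predict(model, X[test]) != y[test]))
    return float(np.mean(errs))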

Numerical Recipes