Corpus ID: 9664198

The Condensed Nearest Neighbor Rule

@inproceedings{Hilborn1967TheCN,
  title={The Condensed Nearest Neighbor Rule},
  author={Charles G. Hilborn and Demetrios G. Lainiotis},
  year={1967}
}
We briefly review the nearest neighbor decision rule (NN rule) and then describe the condensed nearest neighbor rule (CNN rule). The NN rule [1]-[3] assigns an unclassified sample to the same class as the nearest of n stored, correctly classified samples. In other words, given a collection of n reference points, each classified by some external source, a new point is assigned to the same class as its nearest neighbor. The most interesting theoretical property of the NN rule is that… 
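
The snippet cuts off before the condensing procedure itself, so the following is a hedged sketch of the NN rule together with the commonly described condensing loop: seed a store with one sample, absorb every sample the current store misclassifies, and repeat passes until nothing is added. The function names, the seeding choice, and the Euclidean metric are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def nn_classify(x, ref_points, ref_labels):
    """NN rule: assign x to the same class as its nearest reference point."""
    dists = np.linalg.norm(ref_points - x, axis=1)
    return ref_labels[np.argmin(dists)]

def cnn_condense(X, y):
    """Condensing pass in the spirit of the CNN rule: grow a 'store' of
    samples until the store alone classifies every training sample
    correctly under the NN rule."""
    store = [0]                      # assumption: seed with the first sample
    changed = True
    while changed:                   # repeat passes until nothing is added
        changed = False
        for i in range(len(X)):
            if i in store:
                continue
            if nn_classify(X[i], X[store], y[store]) != y[i]:
                store.append(i)      # absorb every misclassified sample
                changed = True
    return np.array(store)

# Example: two well-separated pairs condense to a smaller reference set.
X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
kept = cnn_condense(X, y)            # indices of the condensed reference set
```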

Citations

Fast Nearest Neighbor Condensation for Large Data Sets Classification

  • F. Angiulli
  • Computer Science
    IEEE Transactions on Knowledge and Data Engineering
  • 2007
The fast condensed nearest neighbor (FCNN) rule was three orders of magnitude faster than hybrid instance-based learning algorithms on the MNIST and Massachusetts Institute of Technology Face databases, and computed a model whose accuracy is comparable to that of methods incorporating a noise-filtering pass.

On Optimizing Locally Linear Nearest Neighbour Reconstructions Using Prototype Reduction Schemes

By completely discarding the points not included by the PRS, a reduced set of sample points is obtained, with which the quadratic optimization problem can be solved far more efficiently; the values of the corresponding indices are comparable to those obtained with the original training set.

Application of Proximity Graphs to Editing Nearest Neighbor Decision Rules

Several geometric methods based on proximity graphs are proposed for editing the training data for use in the nearest neighbor (NN) rule; one of the methods yields a decision-boundary-consistent edited set, and therefore a decision rule that preserves all the desirable convergence properties of the NN rule based on the original, entire training data.
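
The paper proposes several graph-based editors; as a hedged illustration of the general flavor (not the authors' exact procedure), the sketch below keeps only samples that share a Gabriel-graph edge with an opposite-class sample, discarding interior points whose graph neighbors all agree with them. The brute-force O(n³) edge test is for clarity only.

```python
import numpy as np
from itertools import combinations

def gabriel_edges(X):
    """Gabriel graph: (i, j) is an edge iff no third point lies inside the
    ball whose diameter is the segment from X[i] to X[j]."""
    diffs = X[:, None, :] - X[None, :, :]
    d2 = np.einsum('ijk,ijk->ij', diffs, diffs)  # pairwise squared distances
    n = len(X)
    return [(i, j) for i, j in combinations(range(n), 2)
            if all(d2[i, k] + d2[j, k] >= d2[i, j]
                   for k in range(n) if k != i and k != j)]

def gabriel_edit(X, y):
    """Keep only points with at least one opposite-class Gabriel neighbor;
    points whose neighbors all share their class lie in the interior of a
    class region and are discarded."""
    keep = set()
    for i, j in gabriel_edges(X):
        if y[i] != y[j]:          # this edge crosses the class boundary
            keep.update((i, j))
    return sorted(keep)

# Example on collinear points: interior points of each cluster are dropped.
X = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 0.0], [1.2, 0.0]])
y = np.array([0, 0, 1, 1])
reduced = gabriel_edit(X, y)      # -> [1, 2], the two boundary samples
```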

Bayesian instance selection for the nearest neighbor rule

The study shows that Eva outputs smaller and more reliable sets of instances, in competitive time, while preserving the predictive accuracy of the related classifier.

Condensed Nearest Neighbor Data Domain Description

  • F. Angiulli
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2007
This work investigates the effect of using a subset of the original data set as the reference set of the classifier, introduces the concept of a reference-consistent subset, and shows that finding the minimum-cardinality reference-consistent subset is intractable.
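
The paper adapts consistency to the data-domain-description setting; as a hedged illustration, here is the classical classification form of the check (a subset is consistent when the NN rule over the subset alone still labels every original sample correctly). Names and metric are this sketch's assumptions.

```python
import numpy as np

def is_consistent(X, y, subset):
    """True iff the NN rule using only X[subset] assigns every sample in X
    its correct label; subset members are trivially correct, since each is
    its own nearest neighbor at distance zero."""
    S = np.asarray(subset)
    for i in range(len(X)):
        dists = np.linalg.norm(X[S] - X[i], axis=1)
        if y[S[np.argmin(dists)]] != y[i]:
            return False
    return True
```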

Inducing NNC-trees with the R⁴-rule

  • Qiangfu Zhao
  • Computer Science
    IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)
  • 2005
This paper proposes an algorithm for inducing NNC-trees based on the R⁴-rule, which was proposed by the author for finding the smallest nearest-neighbor-based multilayer perceptrons (NN-MLPs).

Pruning Nearest Neighbor Competence Preservation Learners

  • F. Angiulli, Estela Narvaez
  • Computer Science
    2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI)
  • 2015
This study investigates the application of the Pessimistic Error Estimate (PEE) principle in the context of the nearest neighbor classification rule and shows that a PEE-like selection strategy guarantees preservation of the accuracy of the consistent subset with a far larger reduction factor, and that appreciable generalization improvements can be obtained by using a reduced subset of intermediate size.

An instance selection algorithm for fuzzy K-nearest neighbor

A condensed fuzzy K-nearest neighbor (CFKNN) algorithm is proposed that starts from an initial instance set S and iteratively selects informative instances from the training set T, moving them from T to S.
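
Only the shape of the loop is given above, so the sketch below captures that skeleton with a pluggable `informative` criterion. The example criterion (misclassified by a plain K-NN vote over the current S) is a stand-in assumption; the paper's fuzzy-membership criterion is not described here and is not modeled.

```python
import numpy as np

def select_instances(X, y, seed, informative):
    """Skeleton of the S/T loop described above: start S from a seed set,
    then repeatedly move instances judged informative from T into S until
    a full pass moves nothing."""
    S = list(seed)
    T = [i for i in range(len(X)) if i not in S]
    moved = True
    while moved:
        moved = False
        for i in list(T):
            if informative(X[i], y[i], X[S], y[S]):
                S.append(i)
                T.remove(i)
                moved = True
    return S

def misclassified_by_knn(x, label, S_X, S_y, k=3):
    """Stand-in 'informative' criterion: the current selection misclassifies
    the instance under a plain K-NN majority vote (assumes integer class
    labels 0..C-1)."""
    nearest = np.argsort(np.linalg.norm(S_X - x, axis=1))[:k]
    return np.bincount(S_y[nearest]).argmax() != label

# Usage: S = select_instances(X, y, seed=[0, 1], informative=misclassified_by_knn)
```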

Boosting k-NN for Categorization of Natural Scenes

A novel boosting approach for generalizing the k-NN rule is presented: a new k-NN boosting algorithm, called UNN (Universal Nearest Neighbors), for the induction of leveraged k-NN. UNN is shown to compete with or beat the other contenders while achieving comparatively small training and testing times.

A Novel Template Reduction Approach for the K-Nearest Neighbor Method

Experiments show that the proposed approach effectively reduces the number of prototypes while maintaining the same level of classification accuracy as the traditional KNN, and is a simple and fast condensing algorithm.
...

References

SHOWING 1-6 OF 6 REFERENCES

An asymptotic analysis of the nearest-neighbor decision rule

  • 1966

Estimation by the nearest-neighbor rule; Performance and implementation of the k-nearest neighbor decision rule with incorrectly identified training samples

  • Proc. 4th Allerton Conf. Circuit and System Theory
  • 1966

An experimental investigation of a mixed-font print recognition system

  • IEEE Trans. Electronic Computers, vol. EC-15; N. J. Nilsson, Learning Machines: Foundations of Trainable Pattern-Classifying Systems
  • 1965

An experimental comparison of several design algorithms used in pattern recognition; Multifont character-recognition experiments using trainable classifiers

  • IBM Corp., Contract AF
  • 1965

Uncertainty and the probability of error

  • Let X and Y be discrete random variables which can be thought of as the input and output, respectively, of a communication channel

Nearest-neighbor pattern classification

  • IEEE Trans. Information Theory
  • 1967