Nearest neighbor pattern classification

@article{Cover1967NearestNP,
  title={Nearest neighbor pattern classification},
  author={Thomas M. Cover and Peter E. Hart},
  journal={IEEE Trans. Inf. Theory},
  year={1967},
  volume={13},
  pages={21-27}
}
The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points. This rule is independent of the underlying joint distribution on the sample points and their classifications, and hence the probability of error R of such a rule must be at least as great as the Bayes probability of error R*, the minimum probability of error over all decision rules taking the underlying probability structure into account…
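The rule described in the abstract is short enough to sketch directly. Below is a minimal pure-Python illustration; the function and sample data are hypothetical, not from the paper:

```python
import math

def nearest_neighbor_classify(train, query):
    """Assign `query` the label of the nearest previously classified point.

    `train` is a list of (point, label) pairs, where each point is a tuple
    of floats. Euclidean distance is used here, though the rule itself
    works with any metric.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Two classes on the real line: the query at 4.0 is closest to 5.0 ("b").
samples = [((0.0,), "a"), ((1.0,), "a"), ((5.0,), "b")]
print(nearest_neighbor_classify(samples, (4.0,)))  # prints b
```

Note that no distributional assumption enters the code at all, which is exactly the distribution-independence the abstract emphasizes.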

Citations

Convergence of the nearest neighbor rule
  • T. Wagner
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1971
It is shown that when the samples lie in n-dimensional Euclidean space, the probability of error for the NNR conditioned on the n known samples converges to R with probability 1 under mild continuity and moment assumptions on the class densities.
k_n-Nearest Neighbor Classification
  • M. Goldstein
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1972
Some asymptotic properties of the k_n-nearest neighbor classification rule are studied, including an expression for a consistent upper bound on the probability of misclassification.
Bounds on the Classification Error of the Nearest Neighbor Rule
The classification error P_S of the nearest neighbor rule given a sample set S is shown to be bounded from below by the Bayes error and from above by a bound that is equal to that derived for P plus a term that depends on S and the continuity of the likelihood functions.
The Nearest Neighbor Classification Rule with a Reject Option
  • M. Hellman
  • Mathematics, Computer Science
  • IEEE Trans. Syst. Sci. Cybern.
  • 1970
The (k, k') nearest neighbor rule with a reject option is examined: it looks at the k nearest neighbors and rejects if fewer than k' of these are from the same class; if k' or more are from one class, a decision is made in favor of that class.
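The reject-option rule described above lends itself to a compact sketch. The following pure-Python version is a hypothetical illustration (only k and k' come from the paper's notation); it returns None to signal a reject:

```python
import math
from collections import Counter

def knn_with_reject(train, query, k, k_prime):
    """(k, k')-style rule: inspect the k nearest neighbors of `query` and
    reject unless at least k_prime of them belong to a single class."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    label, count = Counter(lbl for _, lbl in neighbors).most_common(1)[0]
    return label if count >= k_prime else None  # None means "reject"
```

With k = 3 and k_prime = 3, a 2-to-1 neighborhood vote is rejected rather than decided; lowering k_prime to 2 recovers the ordinary majority rule.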
K Nearest Neighbor Equality: Giving equal chance to all existing classes
The suitability of the k-NNE algorithm is shown empirically, and its effectiveness suggests that it could be added to the current list of distance-based classifiers.
Choice of neighbor order in nearest-neighbor classification
The kth-nearest neighbor rule is arguably the simplest and most intuitively appealing nonparametric classification procedure. However, application of this method is inhibited by lack of knowledge…
CHOICE OF NEIGHBOUR ORDER FOR NEAREST-NEIGHBOUR CLASSIFICATION RULE
The kth-nearest neighbour rule is arguably the simplest and most intuitively appealing nonparametric classification procedure. However, application of this method is inhibited by lack of knowledge…
Supervised Learning: Practice and Theory of Classification with the k-NN Rule
The k-NN classification rule labels a new observation query q from a test set by choosing the dominant class among the k nearest neighbors of q in the training set. One evaluates the performance of a…
Error and Reject Tradeoff for Nearest Neighbor Decision Rules
In a highly influential paper [1], Cover and Hart pointed out that, in the classification problem, there are two extremes of knowledge which the statistician may possess. In the first place, he may…
Nearest Local Hyperplane Rules for Pattern Classification
This paper introduces a new way of NLH classification that has two advantages over the original NLH algorithm: first, it preserves the zero asymptotic risk property of NN classifiers in the separable case, and second, it usually provides better finite-sample performance.

References

The magical number seven, plus or minus two: some limits on our capacity for processing information.
The theory provides a yardstick for calibrating stimulus materials and for measuring the performance of subjects, and its concepts and measures provide a quantitative way of getting at some of these questions.
The Information of Elementary Auditory Displays
Whereas the ear's sensitivity for detecting a difference in frequency between two tones is remarkably acute, the ability of listeners to identify (and name) tones presented in isolation is relatively…
Army Electronics Command under Contract DA28-043-AMC-01764(E) and by USAF under Contract AF49(638)1517; and at the Stanford Research Institute
  • 1966
Approximate formulas for the information transmitted by a discrete communication channel.
  • M.I.T. Electronics Research Lab., Cambridge, Mass., Quart. Progress Rept.
  • 1965
Army Electronics Research and Development Lab. under Contract DA-36-039-SC90742
  • March 29, 1963
Statistical methods for pattern classification,
  • Philco Rept.,
  • 1963
Discriminatory analysis, nonparametric discrimination, USAF School of Aviation Medicine, Randolph Field, Tex.
  • Project 21-49-004,
  • 1951