On the Rate of Convergence of the Bagged Nearest Neighbor Estimate

@article{Biau2010OnTR,
  title={On the Rate of Convergence of the Bagged Nearest Neighbor Estimate},
  author={G{\'e}rard Biau and Fr{\'e}d{\'e}ric C{\'e}rou and Arnaud Guyader},
  journal={J. Mach. Learn. Res.},
  year={2010},
  volume={11},
  pages={687--712}
}
Bagging is a simple way to combine estimates in order to improve their performance. This method, suggested by Breiman in 1996, proceeds by resampling from the original data set, constructing a predictor from each subsample, and then combining the resulting predictors. By bagging an n-sample, the crude nearest neighbor regression estimate is turned into a consistent weighted nearest neighbor regression estimate, which is amenable to statistical analysis. Letting the resampling size k_n grow appropriately with n, it …
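The procedure sketched in the abstract can be made concrete: draw subsamples of size k_n from the n-sample, apply the crude 1-nearest neighbor estimate to each subsample, and average the resulting predictions. Below is a minimal Python sketch of that idea under stated assumptions; the function name bagged_1nn_predict, the Monte Carlo approximation with a finite number of resamples (n_bags), and subsampling without replacement are illustrative choices, not the paper's exact construction (which analyzes the equivalent weighted nearest neighbor representation obtained by averaging over all resamples).

```python
import numpy as np

def bagged_1nn_predict(X, y, x_query, subsample_size, n_bags=100, rng=None):
    """Bagged 1-nearest neighbor regression at a single query point.

    Repeatedly draws a subsample of size `subsample_size` (the k_n of the
    abstract) from the n-sample (X, y), evaluates the crude 1-NN regression
    estimate on that subsample, and averages the predictions over resamples.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    preds = np.empty(n_bags)
    for b in range(n_bags):
        # Subsample of size k_n drawn without replacement (an assumption here).
        idx = rng.choice(n, size=subsample_size, replace=False)
        # Crude 1-NN prediction within this subsample.
        dists = np.linalg.norm(X[idx] - x_query, axis=1)
        preds[b] = y[idx[np.argmin(dists)]]
    # Averaging the resampled 1-NN estimates yields a weighted NN estimate.
    return float(preds.mean())

# Usage sketch on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
print(bagged_1nn_predict(X, y, np.array([0.25]), subsample_size=20))
```

Averaging over many small-subsample 1-NN predictors is what turns the (inconsistent) plain 1-NN estimate into a weighted nearest neighbor estimate whose weights decay with the rank of the neighbor, which is the object the paper studies.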
