A Probabilistic Theory of Pattern Recognition
The Bayes error and Vapnik-Chervonenkis theory are applied as guides for empirical classifier selection, based on explicit specification and explicit enforcement of the maximum likelihood principle.
Non-Uniform Random Variate Generation
  • L. Devroye
  • Computer Science, Mathematics
  • 16 April 1986
This chapter reviews the main methods for generating non-uniform random variables, vectors, and processes, provides information on the expected time complexity of various algorithms, and then addresses modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
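A classical example of such a method is inversion: pushing a uniform variate through the inverse of the target distribution function. The sketch below (an illustrative example, not code from the book) draws exponential variates this way.

```python
import math
import random

def exponential_by_inversion(rate, rng=random.random):
    """Draw one Exponential(rate) variate by inversion.

    The CDF is F(x) = 1 - exp(-rate * x), whose inverse is
    F^{-1}(u) = -log(1 - u) / rate; applying F^{-1} to a
    Uniform(0, 1) variate U yields the target distribution.
    """
    u = rng()
    return -math.log(1.0 - u) / rate

random.seed(0)
samples = [exponential_by_inversion(2.0) for _ in range(100_000)]
# The sample mean should be close to 1/rate = 0.5.
print(sum(samples) / len(samples))
```

Inversion runs in constant expected time per variate whenever the inverse CDF is available in closed form, which is one reason the book treats it as the baseline method.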
Combinatorial methods in density estimation
A comparison of the kernel estimate with methods based on the Vapnik-Chervonenkis dimension and covering numbers shows that the former is significantly more accurate.
Consistency of Random Forests and Other Averaging Classifiers
A number of theorems are given that establish the universal consistency of averaging rules, and it is shown that some popular classifiers, including one suggested by Breiman, are not universally consistent.
Nonparametric Density Estimation
This chapter describes the background material for nonparametric density estimation, considering only the univariate case; extending the results to more than one variable, however, is often a straightforward task.
A note on the height of binary search trees
Let H be the height of a binary search tree constructed by standard insertions from a random permutation, and let S be the saturation level of the same tree, that is, the number of full levels in the tree; limit results are proved for both quantities.
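Both quantities are easy to observe by simulation. The sketch below (my own illustration, not the paper's code) builds a binary search tree from a random permutation and reads off the height and the number of full levels; Devroye's results give H/ln n → 4.311… and S/ln n → 0.373… as n grows.

```python
import random

def bst_depths(perm):
    """Insert a permutation into a binary search tree;
    return the depth (edge count from the root) of each node."""
    root, depths = None, []
    for key in perm:
        d, node, parent = 0, root, None
        while node is not None:            # descend to the insertion point
            parent = node
            node = node[1] if key < node[0] else node[2]
            d += 1
        new = [key, None, None]            # [key, left, right]
        if parent is None:
            root = new
        elif key < parent[0]:
            parent[1] = new
        else:
            parent[2] = new
        depths.append(d)
    return depths

random.seed(2)
n = 10_000
depths = bst_depths(random.sample(range(n), n))
H = max(depths)                            # height of the tree
counts = {}
for d in depths:
    counts[d] = counts.get(d, 0) + 1
S = 0
while counts.get(S, 0) == 2 ** S:          # level k is full with 2^k nodes
    S += 1
print(H, S)
```

For n = 10,000 the limit suggests H around 4.311·ln(10,000) ≈ 40, although convergence to the constant is slow.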
Lectures on the Nearest Neighbor Method
A wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning, is presented in one self-contained volume.
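The simplest member of that paradigm is the 1-nearest-neighbor rule: classify a query point by the label of its closest training point. A minimal sketch (Euclidean distance and the toy data are my illustrative choices):

```python
def nn_classify(train, query):
    """1-nearest-neighbor rule: return the label of the training
    point closest to `query` in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda p: dist2(p[0], query))
    return label

train = [((0.0, 0.0), "blue"), ((1.0, 1.0), "red"), ((0.9, 0.2), "red")]
print(nn_classify(train, (0.1, 0.1)))  # -> blue
```

Averaging over the k nearest labels instead of one gives the k-NN rule, whose consistency properties are a central theme of the lectures.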
On the Almost Everywhere Convergence of Nonparametric Regression Function Estimates
Let (X, Y), (X1, Y1), …, (Xn, Yn) be independent identically distributed random vectors from R^d × R, and let E(|Y|^p) < ∞ for some p > 1. We wish to estimate the regression function m(x) = E(Y | X = x).
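A standard nonparametric estimate of m(x) is a locally weighted average of the Y_i, with weights decaying in the distance from x to X_i. The sketch below uses Gaussian kernel weights (the Nadaraya-Watson form, shown here as an illustrative example in the univariate case):

```python
import math

def kernel_regression(data, x, h):
    """Kernel (Nadaraya-Watson) estimate of m(x) = E(Y | X = x):
    a weighted average of the Y_i with Gaussian weights
    w_i = K((x - X_i)/h)."""
    weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi, _ in data]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * yi for w, (_, yi) in zip(weights, data)) / total

# Noiseless m(x) = x^2 sampled on a grid; the estimate smooths toward it.
data = [(i / 10.0, (i / 10.0) ** 2) for i in range(-20, 21)]
print(kernel_regression(data, 1.0, h=0.1))
```

The almost-everywhere convergence results in the paper concern exactly such local-averaging estimates as n grows and h shrinks appropriately.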
Nonparametric density estimation: the L1 view
Contents: Differentiation of Integrals; Consistency; Lower Bounds for Rates of Convergence; Rates of Convergence in L1; The Automatic Kernel Estimate: L1 and Pointwise Convergence; Estimates Related to the Kernel