Corpus ID: 216641608

Asymptotic Properties of High-Dimensional Random Forests

@article{Chi2020AsymptoticPO,
  title={Asymptotic Properties of High-Dimensional Random Forests},
  author={Chien-Ming Chi and Patrick Vossler and Yingying Fan and Jinchi Lv},
  journal={arXiv: Statistics Theory},
  year={2020}
}
As a flexible nonparametric learning tool, the random forest has been widely applied to various real-world problems with appealing empirical performance, even in the presence of a high-dimensional feature space. Unveiling the underlying mechanisms has led to some important recent theoretical results on consistency, under the classical setting of fixed dimensionality or for some modified versions of the random forest algorithm. Yet the consistency rates of the original version of the random forest…
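
The abstract concerns theory rather than implementation, but the setting it describes is easy to simulate. Below is a minimal sketch, assuming scikit-learn's off-the-shelf RandomForestRegressor as a stand-in implementation (the paper analyzes the algorithm itself, not any particular library): many features, only a few of which carry signal.

```python
# Minimal sketch of the high-dimensional sparse setting: p features,
# only the first three of which affect the response.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 1000, 500  # sample size and (high) dimensionality
X = rng.uniform(size=(n, p))
y = 2 * X[:, 0] + np.sin(np.pi * X[:, 1]) + X[:, 2] ** 2 + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", forest.score(X_te, y_te))
```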

Citations

Consistency of The Oblique Decision Tree and Its Random Forest

It is shown that ODT is consistent for very general regression functions as long as they are continuous, and the consistency of ODT-based random forests (ODRF), which use either a fixed-size or a random-size subset of features in the feature bagging, is proved.

Ensemble Projection Pursuit for General Nonparametric Regression

An ensemble procedure, hereafter referred to as ePPR, is proposed by adopting the “feature bagging” of the random forest; its theoretical consistency can be proved under more general settings than those required for RF.
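
The “feature bagging” device is concrete enough to sketch. The snippet below is a minimal illustration, not ePPR itself: each ensemble member is trained on a random subset of the features and predictions are averaged, with a plain regression tree standing in for ePPR's projection pursuit base learner (an assumption made purely for brevity).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, p, mtry, n_learners = 300, 20, 5, 25
X = rng.uniform(size=(n, p))
y = X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)
x0 = rng.uniform(size=(1, p))

# Feature bagging: each base learner sees only a random feature subset.
preds = []
for _ in range(n_learners):
    cols = rng.choice(p, size=mtry, replace=False)
    learner = DecisionTreeRegressor(max_depth=4).fit(X[:, cols], y)
    preds.append(learner.predict(x0[:, cols])[0])
print(np.mean(preds))
```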

Convergence Rates of Oblique Regression Trees for Flexible Function Libraries

This work develops a theoretical framework for the analysis of oblique decision trees, where the splits at each decision node occur at linear combinations of the covariates (as opposed to conventional tree constructions that force axis-aligned splits involving only a single covariate), and shows that, under suitable conditions, oblique decision trees achieve predictive accuracy similar to that of neural networks for the same library of regression models.
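
To make the distinction concrete, here is a small sketch contrasting the two split rules; the direction w and the thresholds are illustrative placeholders, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(8, 3))  # 8 points, 3 covariates

# Axis-aligned split: thresholds a single covariate.
axis_left = X[:, 0] <= 0.5

# Oblique split: thresholds a linear combination of the covariates.
w = np.array([0.6, -0.3, 0.7])  # illustrative direction, not fitted
oblique_left = X @ w <= 0.4

print(axis_left)
print(oblique_left)
```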

Optimal Nonparametric Inference with Two-Scale Distributional Nearest Neighbors

An in-depth technical analysis of the DNN is provided and it is proved that, thanks to the use of negative weights, the two-scale DNN estimator enjoys the optimal nonparametric rate of convergence in estimating the regression function under the fourth-order smoothness condition.
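
The “two-scale” and “negative weights” ideas admit a short numerical sketch. Assuming (as a simplification not stated in the summary) that the leading bias of a one-scale estimator with subsampling scale s decays like s^(-2/d), choosing weights that sum to one while cancelling that leading term forces one weight to be negative:

```python
import numpy as np

def two_scale_weights(s1, s2, d):
    """Solve w1 + w2 = 1 and w1*s1**(-2/d) + w2*s2**(-2/d) = 0,
    cancelling a leading bias term assumed to scale like s**(-2/d)."""
    a1, a2 = s1 ** (-2 / d), s2 ** (-2 / d)
    w1 = a2 / (a2 - a1)  # negative whenever s1 < s2
    return w1, 1 - w1

w1, w2 = two_scale_weights(s1=50, s2=200, d=5)
print(w1, w2)  # w1 < 0: the "negative weight" the summary refers to
# The two-scale estimate is then w1 * DNN(s1) + w2 * DNN(s2).
```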

Minimax Rates for High-Dimensional Random Tessellation Forests

This work shows that a large class of random forests with general split directions also achieves minimax rates in arbitrary dimension; the class includes STIT forests, a generalization of Mondrian forests to arbitrary split directions, as well as random forests derived from Poisson hyperplane tessellations.

Universal Consistency of Decision Trees for High Dimensional Additive Models

This paper shows that decision trees constructed with the Classification and Regression Trees (CART) methodology are universally consistent for additive models, even when the dimensionality grows with the sample size.

Universal Consistency of Decision Trees in High Dimensions

This paper shows that decision trees constructed with the Classification and Regression Trees (CART) methodology are universally consistent in an additive model context, even when the number of predictor variables grows with the sample size.

References

Showing 1-10 of 34 references

Minimax optimal rates for Mondrian trees and forests

This paper studies Mondrian forests in a batch setting and proves their consistency assuming a proper tuning of the lifetime sequence, paving the way for a refined theoretical analysis and thus a deeper understanding of these black-box algorithms.
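
To give the “lifetime” parameter a concrete meaning, here is a minimal sketch of sampling a Mondrian partition of a box; larger lifetimes produce finer partitions, which is the quantity whose tuning drives the consistency result. This is an illustrative implementation of the standard Mondrian process, not code from the paper.

```python
import numpy as np

def mondrian_partition(lows, highs, t, lifetime, rng):
    """Recursively sample a Mondrian partition of the box [lows, highs]."""
    rate = np.sum(highs - lows)
    t = t + rng.exponential(1 / rate)      # time until the next split
    if t > lifetime:                       # lifetime exhausted: leaf cell
        return [(lows.copy(), highs.copy())]
    j = rng.choice(len(lows), p=(highs - lows) / rate)  # dim ∝ side length
    cut = rng.uniform(lows[j], highs[j])                # uniform split point
    left_hi, right_lo = highs.copy(), lows.copy()
    left_hi[j], right_lo[j] = cut, cut
    return (mondrian_partition(lows, left_hi, t, lifetime, rng)
            + mondrian_partition(right_lo, highs, t, lifetime, rng))

rng = np.random.default_rng(0)
cells = mondrian_partition(np.zeros(2), np.ones(2), 0.0, lifetime=3.0, rng=rng)
print(len(cells), "leaf cells")
```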

Consistency of Random Forests

This paper takes a step forward in forest exploration by proving a consistency result for Breiman's original algorithm in the context of additive regression models, shedding an interesting light on how random forests can adapt to sparsity.

Maxima in hypercubes

A Berry–Esseen bound, essentially of the order of the square of the standard deviation, is derived for the number of maxima in random samples from (0,1)^d; this is the first bound of its kind for the number of maxima in dimensions higher than two.

Random Forests

Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.

Sharp Analysis of a Simple Model for Random Forests

A historically important random forest model, in which a feature is selected at random and the split occurs at the midpoint of the node along the chosen feature, is revisited, and it is shown that its rate of convergence cannot be improved in general.
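
The model is simple enough to state in code. Below is a hypothetical minimal implementation for the unit cube, written only to make the mechanism concrete: each level picks a feature uniformly at random and cuts the current cell at its midpoint along that feature.

```python
import numpy as np

def midpoint_tree_predict(X_train, y_train, x, depth, rng):
    """Predict at x with one random midpoint-split tree on [0, 1]^d."""
    lo, hi = np.zeros(X_train.shape[1]), np.ones(X_train.shape[1])
    mask = np.ones(len(X_train), dtype=bool)
    for _ in range(depth):
        j = rng.integers(X_train.shape[1])  # feature chosen at random
        mid = (lo[j] + hi[j]) / 2           # split at the cell midpoint
        if x[j] <= mid:
            hi[j] = mid
            mask &= X_train[:, j] <= mid
        else:
            lo[j] = mid
            mask &= X_train[:, j] > mid
    return y_train[mask].mean() if mask.any() else y_train.mean()

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.1, size=500)
# Averaging many such random trees gives the forest's prediction.
preds = [midpoint_tree_predict(X, y, np.array([0.3, 0.7]), depth=4, rng=rng)
         for _ in range(100)]
print(np.mean(preds))
```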

Analysis of a Random Forests Model

G. Biau, J. Mach. Learn. Res., 2012
An in-depth analysis of a random forests model suggested by Breiman (2004), which is very close to the original algorithm, is provided; it shows in particular that the procedure is consistent and adapts to sparsity, in the sense that its rate of convergence depends only on the number of strong features and not on how many noise variables are present.

Analyzing CART.

This paper aims to study the statistical properties of regression trees constructed with CART and finds that the training error is governed by the Pearson correlation between the optimal decision stump and the response data in each node, which is bounded by solving a quadratic program.

Ensemble Trees and CLTs: Statistical Inference for Supervised Learning

This paper develops formal statistical inference procedures for machine learning ensemble methods by considering predictors formed by averaging over trees built on subsamples of the training set, and demonstrates that the resulting estimator takes the form of a U-statistic.
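
The estimator form the summary describes is easy to sketch: average predictions over trees built on subsamples drawn without replacement, which is what gives the U-statistic structure. scikit-learn's DecisionTreeRegressor is used here as an assumed stand-in base learner; the paper's inference procedures themselves are not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, subsample, n_trees = 400, 100, 50
X = rng.uniform(size=(n, 3))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=n)
x0 = np.array([[0.5, 0.5, 0.5]])

# Average over trees fit on size-`subsample` subsamples drawn without
# replacement: the (incomplete) U-statistic form of the ensemble.
preds = []
for _ in range(n_trees):
    idx = rng.choice(n, size=subsample, replace=False)
    tree = DecisionTreeRegressor().fit(X[idx], y[idx])
    preds.append(tree.predict(x0)[0])
print(np.mean(preds))
```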

Sure independence screening for ultrahigh dimensional feature space

Probability Inequalities for Sums of Bounded Random Variables

If S is a random variable with finite mean and variance, the Bienaymé–Chebyshev inequality states that for x > 0, $$\Pr\left[\,\lvert S - ES \rvert \geq x\,(\operatorname{var} S)^{1/2}\,\right] \leq x^{-2}.$$
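
For completeness, since the snippet above only quotes the classical Chebyshev-type bound, the paper's main result is the now-standard Hoeffding inequality (stated here from common knowledge, not from the truncated snippet): for independent random variables $X_1, \dots, X_n$ with $a_i \leq X_i \leq b_i$ and $S = \sum_{i=1}^n X_i$, for every $t > 0$, $$\Pr\left[ S - ES \geq t \right] \leq \exp\left( -\frac{2 t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right).$$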