Corpus ID: 17920804

PERT – Perfect Random Tree Ensembles

@inproceedings{Cutler2001PERTP,
  title={PERT – Perfect Random Tree Ensembles},
  author={Adele Cutler and Guohua Zhao},
  year={2001}
}
Ensemble classifiers originated in the machine learning community. They work by fitting many individual classifiers and combining them by weighted or unweighted voting. The ensemble classifier is often much more accurate than the individual classifiers from which it is built. In fact, ensemble classifiers are among the most accurate general-purpose classifiers available. We introduce a new ensemble method, PERT, in which each individual classifier is a perfectly-fit classification tree with…
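To make the voting scheme described in the abstract concrete, here is a minimal sketch, not the authors' PERT implementation: fully grown trees with randomized splits are fit to bootstrap samples, so each tree fits its own sample essentially perfectly, and their predictions are combined by an unweighted majority vote. The use of scikit-learn's DecisionTreeClassifier with splitter="random" is an assumption standing in for PERT's own splitting rule.

# Minimal sketch of an ensemble of fully grown randomized trees combined by
# unweighted majority vote. NOT the original PERT algorithm: scikit-learn's
# random-split rule is used here only as a stand-in for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_ensemble(X, y, n_trees=100, seed=0):
    # X, y are assumed to be NumPy arrays with integer class labels.
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))      # bootstrap sample
        tree = DecisionTreeClassifier(splitter="random", max_depth=None)
        tree.fit(X[idx], y[idx])                        # grown until it fits its sample
        trees.append(tree)
    return trees

def predict_ensemble(trees, X):
    votes = np.stack([t.predict(X) for t in trees])     # shape (n_trees, n_samples)
    # Unweighted vote: most frequent predicted class per test row.
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])

With integer class labels, predict_ensemble returns the majority-vote label for each row of X.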

Citations

Learning with Ensembles of Randomized Trees: New Insights
A connection with kernel target alignment, a measure of kernel quality, is pointed out, which suggests that randomization is a way to obtain high alignment and hence possibly low generalization error.
The Utility of Randomness in Decision Tree Ensembles
The use of randomness in constructing decision tree ensembles has drawn much attention in the machine learning community. In general, ensembles introduce randomness to generate diverse trees…
On the selection of decision trees in Random Forests
It is shown that better subsets of decision trees can be obtained even using a sub-optimal classifier selection method, which proves that the "classical" RF induction process, in which randomized trees are arbitrarily added to the ensemble, is not the best approach to produce accurate RF classifiers.
Extremely randomized trees
A new tree-based ensemble method for supervised classification and regression problems that consists of strongly randomizing both attribute and cut-point choice while splitting a tree node, and that in the extreme case builds totally randomized trees whose structures are independent of the output values of the learning sample.
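For readers who want to try the strongly randomized splitting summarized in this entry, the following sketch uses scikit-learn's ExtraTreesClassifier, which implements this style of method; the synthetic dataset and parameter values are illustrative assumptions, not taken from the cited paper.

# Illustrative use of scikit-learn's ExtraTreesClassifier: at each split both
# the candidate attributes and the cut-points are chosen at random.
# Dataset and hyperparameters here are arbitrary, for demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)
clf = ExtraTreesClassifier(n_estimators=200, max_features="sqrt", random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())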
Classifying Very-High-Dimensional Data with Random Forests of Oblique Decision Trees
This work investigates a new approach for supervised classification with a huge number of numerical attributes and proposes a random oblique decision trees method that performs significantly better on very-high-dimensional datasets and slightly better on lower-dimensional datasets.
Dynamic Nonparametric Random Forest Using Covariance
The proposed C-DRF algorithm improves the performance of the original RF algorithm by as much as 58.68% in learning time, 47.91% in test time, and 68.06% in memory usage on average, and is more efficient than state-of-the-art RF algorithms in the network intrusion detection area.
A Numerical Transform of Random Forest Regressors corrects Systematically-Biased Predictions
This study finds a systematic bias in predictions from random forest models that is recapitulated in simple synthetic datasets, regardless of whether or not they include irreducible error, whereas models employing boosting do not exhibit this bias.
A Very Simple Safe-Bayesian Random Forest
This work demonstrates empirically that the Safe-Bayesian random forest outperforms MCMC- or SMC-based Bayesian decision trees in terms of speed and accuracy, and achieves performance competitive with entropy- or Gini-optimised random forests, yet is very simple to construct.
Characterization of variable importance measures derived from decision trees
In the context of machine learning, tree-based ensemble methods are common techniques used for prediction and explanation purposes in many research fields, such as genetics. These methods…
Consistency of Random Forests and Other Averaging Classifiers
A number of theorems are given that establish the universal consistency of averaging rules, and it is shown that some popular classifiers, including one suggested by Breiman, are not universally consistent.

References

Showing 1-10 of 12 references
Random Forests--random Features
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest…
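As an illustration of the "random vector per tree" formulation above, the sketch below realizes that random vector as a bootstrap sample plus a per-split feature subset via scikit-learn's RandomForestClassifier; the data and settings are assumptions for demonstration, not from the cited report.

# Illustrative random forest in the sense described above: each tree is grown
# on its own bootstrap sample (an i.i.d. random vector) and considers a random
# subset of features at each split. Data and settings are arbitrary examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
forest = RandomForestClassifier(n_estimators=100, bootstrap=True,
                                max_features="sqrt", oob_score=True,
                                random_state=1)
forest.fit(X, y)
print("Out-of-bag accuracy:", forest.oob_score_)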
Arcing Classifiers
Recent work has shown that combining multiple versions of unstable classifiers such as trees or neural nets results in reduced test set error. One of the more effective methods is bagging (Breiman [1996a])…
Experiments with a New Boosting Algorithm
This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.
Bagging predictors
Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
An Empirical Comparison of Selection Measures for Decision-Tree Induction
One approach to induction is to develop a decision tree from a set of examples. When used with noisy rather than deterministic data, the method involves three main stages: creating a complete tree…
UCI Repository of machine learning databases
A New Perspective on Classification
Fast Classification Using Perfect Random Trees (1999)
C4.5: Programs for Empirical Learning (1993)
A new perspective. (H. Farmer, The Journal of the Florida Medical Association, 1988)