Random Forests
  • L. Breiman
  • Mathematics, Computer Science
  • Machine Learning
  • 1 October 2001
TLDR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and they also apply to regression.
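The internal estimates referred to are the out-of-bag (OOB) estimates: each tree is scored on the training rows left out of its bootstrap sample, giving an error estimate without a held-out set. A minimal sketch of how the OOB error responds to the number of features tried per split, using scikit-learn's RandomForestClassifier (an implementation of Breiman's method); the dataset and parameter values are illustrative choices, not the paper's:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# oob_score=True turns on the out-of-bag estimate: each tree is tested
# on the bootstrap rows it never saw, so no separate test set is needed.
for m in (1, 4, 16):  # number of features considered at each split
    forest = RandomForestClassifier(
        n_estimators=200, max_features=m, oob_score=True, random_state=0
    )
    forest.fit(X, y)
    print(f"max_features={m}: OOB accuracy = {forest.oob_score_:.3f}")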
Classification and Regression Trees
TLDR: This chapter discusses tree classification in the context of medicine, where right-sized trees and honest estimates are considered, and Bayes rules and partitions are used as guides to optimal pruning.
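A brief sketch of cost-complexity pruning, the CART mechanism behind "right-sized trees", using scikit-learn's DecisionTreeClassifier; choosing the pruning level by cross-validation stands in for the book's "honest estimates". The dataset and fold count are illustrative assumptions:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Grow a full tree, then compute its cost-complexity pruning path:
# each alpha corresponds to one candidate "right-sized" subtree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)

# Pick the alpha whose pruned tree cross-validates best -- an "honest"
# estimate, since each fold is scored on data the tree never saw.
best = max(
    path.ccp_alphas,
    key=lambda a: cross_val_score(
        DecisionTreeClassifier(ccp_alpha=a, random_state=0), X, y, cv=5
    ).mean(),
)
print(f"selected ccp_alpha = {best:.5f}")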
Statistical modeling: The two cultures
TLDR: If the field's goal is to use data to solve problems, then the statistical community needs to move away from exclusive dependence on data models and adopt a more diverse set of tools.
Bagging Predictors
  • L. Breiman
  • Computer Science
  • Machine Learning
  • 1 August 1996
TLDR: Tests on real and simulated data sets, using classification and regression trees and subset selection in linear regression, show that bagging can give substantial gains in accuracy.
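A compact from-scratch sketch of the bagging procedure for regression trees: fit each tree on a bootstrap resample of the training set, then average the trees' predictions. The synthetic dataset, tree settings, and 50-member ensemble size are illustrative assumptions, not the paper's experimental setup:

import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=500, noise=1.0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

# Bagging: fit each tree on a bootstrap resample of the training set
# (rows drawn with replacement), then average the predictions.
preds = []
for _ in range(50):
    idx = rng.integers(0, len(Xtr), size=len(Xtr))
    tree = DecisionTreeRegressor(random_state=0).fit(Xtr[idx], ytr[idx])
    preds.append(tree.predict(Xte))

bagged_mse = mean_squared_error(yte, np.mean(preds, axis=0))
single_mse = mean_squared_error(
    yte, DecisionTreeRegressor(random_state=0).fit(Xtr, ytr).predict(Xte)
)
print(f"single tree MSE: {single_mse:.2f}, bagged MSE: {bagged_mse:.2f}")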
Estimating Optimal Transformations for Multiple Regression and Correlation.
Abstract: In regression analysis the response variable Y and the predictor variables X1, …, Xp are often replaced by functions θ(Y) and φ1(X1), …, φp(Xp). We discuss a procedure for estimating …
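A heavily simplified sketch of the alternating-conditional-expectations (ACE) idea behind this paper, for a single predictor: repeatedly replace each transformation by a smoothed conditional expectation of the other, renormalizing θ(Y) each pass. The binned-mean smoother and the toy model below are assumptions for illustration; the paper uses a proper scatterplot smoother:

import numpy as np

def bin_mean(by, values, bins=20):
    """Crude conditional expectation of `values` given `by`, via equal-count bins."""
    order = np.argsort(by)
    out = np.empty_like(values)
    for chunk in np.array_split(order, bins):
        out[chunk] = values[chunk].mean()
    return out

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1000)
y = np.exp(x**2 + rng.normal(0, 0.1, 1000))  # optimal: theta = log, phi = x^2

theta, phi = y.copy(), x.copy()
for _ in range(20):
    phi = bin_mean(x, theta)                      # phi(x) <- E[theta(Y) | X = x]
    theta = bin_mean(y, phi)                      # theta(y) <- E[phi(X) | Y = y]
    theta = (theta - theta.mean()) / theta.std()  # normalize to unit variance

print(f"correlation of theta(Y) and phi(X): {np.corrcoef(theta, phi)[0, 1]:.3f}")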
Better subset regression using the nonnegative garrote
A new method, called the nonnegative (nn) garrote, is proposed for doing subset regression. It both shrinks and zeroes coefficients. In tests on real and simulated data, it produces lower prediction error …
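A sketch of the garrote in its penalized form: starting from the OLS coefficients b_j, find nonnegative shrinkage factors c_j minimizing the residual sum of squares plus λ Σ c_j, solved here by projected coordinate descent. The simulated data, λ value, and solver are illustrative choices, not the paper's:

import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0, 1.5, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares
Z = X * b                                  # column j is b_j * x_j
lam = 5.0

c = np.ones(p)                             # start from the OLS fit (all c_j = 1)
for _ in range(200):
    for j in range(p):
        r = y - Z @ c + Z[:, j] * c[j]     # partial residual excluding column j
        # Unconstrained coordinate minimizer, then clip at zero: the garrote
        # both shrinks coefficients and zeroes some of them out entirely.
        c[j] = max(0.0, (Z[:, j] @ r - lam) / (Z[:, j] @ Z[:, j]))

print("shrinkage factors c:", np.round(c, 2))
print("garrote coefficients:", np.round(c * b, 2))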
Heuristics of instability and stabilization in model selection
In model selection, usually a best predictor is chosen from a collection {μ(·, s)} of predictors, where μ(·, s) is the minimum least-squares predictor in a collection U_s of predictors. Here s is a complexity parameter …
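A toy illustration of the instability at issue: best-subset selection by minimum least squares can choose different variable sets under small perturbations (bootstrap resamples) of the data. The data-generating model and subset size below are assumptions for the demo:

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 60
X = rng.normal(size=(n, 6))
y = X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(size=n)

def best_subset(X, y, k):
    """Size-k variable set minimizing the least-squares residual sum of squares."""
    def rss(cols):
        cols = list(cols)
        beta = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
        return ((y - X[:, cols] @ beta) ** 2).sum()
    return min(combinations(range(X.shape[1]), k), key=rss)

picks = set()
for _ in range(30):
    idx = rng.integers(0, n, size=n)  # a small perturbation: bootstrap resample
    picks.add(best_subset(X[idx], y[idx], 2))
print("distinct size-2 subsets chosen across 30 resamples:", len(picks))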