
The methodology used to construct tree-structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and…

- Leo Breiman
- Machine Learning
- 2001

Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree…
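The construction the abstract describes — each tree built from an independently sampled random vector (here, a bootstrap replicate plus a random feature choice) — can be sketched in a few lines of Python. The stump-based "trees", the toy data, and all names below are illustrative stand-ins, not Breiman's actual algorithm, which grows full CART trees with random feature selection at every node:

```python
import random
import statistics

def random_tree(data, rng):
    """One 'tree' (here just a one-split stump) whose randomness comes
    from the random vector the abstract describes: a bootstrap replicate
    of the data plus a randomly chosen feature to split on."""
    replicate = [rng.choice(data) for _ in data]      # bootstrap sample
    feat = rng.randrange(len(data[0][0]))             # random feature choice
    vals = sorted(x[feat] for x, _ in replicate)
    split = vals[len(vals) // 2]                      # crude median split
    left = [y for x, y in replicate if x[feat] <= split]
    right = [y for x, y in replicate if x[feat] > split] or left
    lm, rm = statistics.mean(left), statistics.mean(right)
    return lambda x: lm if x[feat] <= split else rm

def random_forest(data, n_trees=50, seed=1):
    """Combine the tree predictors by averaging (regression version)."""
    rng = random.Random(seed)
    trees = [random_tree(data, rng) for _ in range(n_trees)]
    return lambda x: statistics.mean(t(x) for t in trees)

# toy regression data: y depends on the first feature, the second is noise
data = [((i / 10, random.Random(i).random()), i / 10) for i in range(21)]
forest = random_forest(data)
```

Trees that happen to split on the noise feature contribute the same value to any two query points sharing that feature, so the averaged prediction is driven by the informative splits — a small instance of the variance cancellation the forest relies on.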

- Leo Breiman
- Machine Learning
- 1996

Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as…
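As a rough illustration of the procedure the abstract describes — bootstrap replicates of the learning set, aggregation by averaging for a numerical outcome — here is a minimal Python sketch. The stump learner and the toy data are invented for the example:

```python
import random
import statistics

def train_stump(data):
    """Hypothetical 'unstable' learner: a one-split regression stump.
    Stands in for the trees Breiman used; the median split rule is
    deliberately crude, just enough to illustrate bagging."""
    xs = sorted(x for x, _ in data)
    split = xs[len(xs) // 2]
    left = [y for x, y in data if x <= split]
    right = [y for x, y in data if x > split]
    left_mean = statistics.mean(left) if left else 0.0
    right_mean = statistics.mean(right) if right else left_mean
    return lambda x: left_mean if x <= split else right_mean

def bag(data, n_versions=25, seed=0):
    """Bagging: fit one version per bootstrap replicate of the learning
    set, then aggregate by averaging (numerical outcome)."""
    rng = random.Random(seed)
    versions = []
    for _ in range(n_versions):
        replicate = [rng.choice(data) for _ in data]  # sample with replacement
        versions.append(train_stump(replicate))
    return lambda x: statistics.mean(v(x) for v in versions)

# toy learning set: y = x**2 on [0, 2]
learning_set = [(i / 10, (i / 10) ** 2) for i in range(21)]
model = bag(learning_set)
```

For classification the same loop applies, with the final `statistics.mean` replaced by a plurality vote over the versions' predicted classes.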

- Leo Breiman
- 1996

Recent work has shown that combining multiple versions of unstable classifiers such as trees or neural nets results in reduced test set error. To study this, the concepts of bias and variance of a classifier are defined. Unstable classifiers can have universally low bias. Their problem is high variance. Combining multiple versions is a variance reducing…
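The variance-reduction mechanism can be checked numerically. This sketch uses numeric averaging rather than classifier voting for simplicity, and the "unstable predictor" is a made-up noisy estimator: combining B independent versions leaves the bias unchanged while dividing the variance by B.

```python
import random
import statistics

rng = random.Random(0)

def unstable_predict():
    """Stand-in for one unstable predictor's output at a fixed test
    point: unbiased around the true value 1.0 but high-variance."""
    return 1.0 + rng.gauss(0.0, 1.0)

def combined_predict(b=25):
    """Averaging B independent versions keeps the mean at 1.0 but
    shrinks the variance by a factor of B."""
    return statistics.mean(unstable_predict() for _ in range(b))

singles  = [unstable_predict() for _ in range(2000)]
combined = [combined_predict(25) for _ in range(2000)]
```

With 25 versions the empirical variance of `combined` should come out roughly 25 times smaller than that of `singles`, while both sample means stay near 1.0.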

- Leo Breiman
- Neural Computation
- 1999

The theory behind the success of adaptive reweighting and combining algorithms (arcing) such as Adaboost (Freund and Schapire [1995, 1996a]) and others in reducing generalization error has not been well understood. By formulating prediction as a game where one player makes a selection from instances in the training set and the other a convex linear…

- Leo Breiman
- 2001

There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to…

- Leo Breiman
- Machine Learning
- 1996

Stacking regressions is a method for forming linear combinations of different predictors to give improved prediction accuracy. The idea is to use cross-validation data and least squares under non-negativity constraints to determine the coefficients in the combination. Its effectiveness is demonstrated in stacking regression trees of different sizes and in a…
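A minimal sketch of the combination step, assuming cross-validated predictions from two predictors are already in hand. The projected-gradient solver and all names below are illustrative, not Breiman's implementation; his point is the non-negativity constraint, not the particular solver:

```python
def nnls2(a, b, y, steps=4000, lr=0.01):
    """Least-squares fit y ~ w1*a + w2*b subject to w1, w2 >= 0,
    via projected gradient descent on the squared error."""
    w1 = w2 = 0.0
    n = len(y)
    for _ in range(steps):
        r = [w1 * ai + w2 * bi - yi for ai, bi, yi in zip(a, b, y)]
        g1 = sum(ri * ai for ri, ai in zip(r, a)) / n
        g2 = sum(ri * bi for ri, bi in zip(r, b)) / n
        w1 = max(0.0, w1 - lr * g1)   # project back onto w >= 0
        w2 = max(0.0, w2 - lr * g2)
    return w1, w2

# Hypothetical cross-validated predictions on the same held-out points:
# `good` tracks y closely, `mediocre` just predicts the overall mean.
y        = [1.0, 2.0, 3.0, 4.0, 5.0]
good     = [1.0, 2.0, 3.0, 4.0, 5.0]
mediocre = [3.0, 3.0, 3.0, 3.0, 3.0]
w_good, w_mediocre = nnls2(good, mediocre, y)
```

The non-negative weights put essentially all mass on the accurate predictor, which is the behavior the constraint is meant to encourage.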

- Leo Breiman
- 1998


- Leo Breiman
- IEEE Trans. Information Theory
- 1993

A hinge function y = h(x) consists of two hyperplanes continuously joined together at a hinge. In regression (prediction), classification (pattern recognition), and noiseless function approximation, use of sums of hinge functions gives a powerful and efficient alternative to neural networks, with compute times several orders of magnitude less than fitting…
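A one-dimensional sketch of a hinge function and a small sum of hinges. The coefficients are hand-picked for illustration; the paper's actual fitting procedure (hinge finding in many dimensions) is not shown:

```python
def hinge(c1, d1, c2, d2):
    """One hinge function in one dimension: two lines c + d*x joined
    continuously where they cross, taking the upper (max) branch."""
    return lambda x: max(c1 + d1 * x, c2 + d2 * x)

# A sum of three hinges, each max(0, x - t) -- the flat line y = 0 hinged
# to the line y = x - t -- interpolating f(x) = x**2 at the knots
# -1, -0.5, 0, 0.5, 1. Coefficients chosen by hand for the example.
hinges = [hinge(0.0, 0.0, -t, 1.0) for t in (-0.5, 0.0, 0.5)]

def approx_square(x):
    # base line through (-1, 1) with slope -1.5; each hinge adds a
    # slope change of +1 at its knot
    return 1.0 - 1.5 * (x + 1.0) + sum(h(x) for h in hinges)
```

The piecewise-linear sum matches x**2 exactly at the knots and stays close in between; adding hinges refines the approximation, which is the sense in which sums of hinge functions rival a neural network's hidden units.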