Bagging predictors

@article{Breiman1996BaggingP,
  title={Bagging predictors},
  author={Leo Breiman},
  journal={Machine Learning},
  year={1996},
  volume={24},
  pages={123-140}
}
  • L. Breiman
  • Published 1996
  • Computer Science
  • Machine Learning
Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear… 
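As a rough illustration of the procedure just described, the sketch below builds bootstrap replicates of the learning set, fits a tree to each, and aggregates by plurality vote (averaging would be used for a numerical outcome). It assumes NumPy arrays and scikit-learn decision trees as the base predictor; the names `bagged_classify` and `n_replicates` are illustrative, not from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch (not the paper's code): bagging with trees as the base predictor.
def bagged_classify(X_train, y_train, X_test, n_replicates=50, seed=0):
    """Fit one tree per bootstrap replicate and aggregate by plurality vote."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = []
    for _ in range(n_replicates):
        idx = rng.integers(0, n, size=n)          # bootstrap replicate: n cases drawn with replacement
        tree = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        votes.append(tree.predict(X_test))
    votes = np.stack(votes)                        # shape (n_replicates, n_test)
    aggregated = []
    for column in votes.T:                         # plurality vote per test case
        labels, counts = np.unique(column, return_counts=True)
        aggregated.append(labels[counts.argmax()])
    return np.array(aggregated)                    # for a numerical outcome, use votes.mean(axis=0) instead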
An Empirical Study of Bagging Predictors for Different Learning Algorithms
TLDR
These studies assert that both stability and robustness are key requirements for building a high-performing bagging predictor, and demonstrate that bagging is statistically superior to most single learners, except for KNN and Naive Bayes (NB).
Bagged ensembles with tunable parameters
TLDR
The impact of tunable weighting on the votes of each learner in an ensemble is explored, and the results are compared with pure bagging and with the best-known bagged ensemble method, namely the random forest.
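The weighting scheme itself is not specified above, but a hypothetical helper like the one below shows where a tunable per-learner weight would enter a plurality vote; `weighted_vote` and its arguments are illustrative names, not the cited paper's API.

import numpy as np

# Illustrative sketch (assumed, not from the cited paper): per-learner weights in a vote.
def weighted_vote(member_preds, weights, classes):
    """Combine member predictions with tunable per-learner weights instead of equal votes."""
    member_preds = np.asarray(member_preds)        # (n_members, n_cases) predicted labels
    weights = np.asarray(weights, dtype=float)     # (n_members,) non-negative weights
    scores = np.zeros((len(classes), member_preds.shape[1]))
    for i, c in enumerate(classes):
        scores[i] = ((member_preds == c) * weights[:, None]).sum(axis=0)
    return np.asarray(classes)[scores.argmax(axis=0)]   # equal weights recover plain bagging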
A Robust Bagging Method Using Median as a Combination Rule
  • F. Zaman, H. Hirose
  • Computer Science
    2008 IEEE 8th International Conference on Computer and Information Technology Workshops
  • 2008
TLDR
This paper carries out experiments on several benchmark data sets and suggests that robust bagging performs similarly to standard bagging when applied to unstable base classifiers such as decision trees, but performs better when applied to classifiers such as Fisher linear discriminant analysis and the nearest mean classifier.
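A minimal sketch of a median combination rule, assuming numeric member outputs (a bagged regression tree is used here for concreteness); the cited work studies the idea across several base classifiers, and the names below are illustrative.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative sketch (assumed setup): median aggregation of bootstrap-replicate predictions.
def median_bagged_predict(X_train, y_train, X_test, n_replicates=50, seed=0):
    """Aggregate bootstrap-replicate predictions with the median rather than the mean."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_replicates):
        idx = rng.integers(0, n, size=n)          # bootstrap replicate of the learning set
        model = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_test))
    # The median is less sensitive to a few badly trained members than the mean.
    return np.median(np.stack(preds), axis=0)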
A Comparison of Model Aggregation Methods for Regression
TLDR
Experiments reveal that different types of AdaBoost algorithms require different complexities of base models and outperform Bagging at their best, but Bagging achieves a consistent level of success with all base models, providing a robust alternative.
VoB predictors: Voting on bagging classifications
TLDR
A random missing-value-corruption-based bootstrap sampling process is proposed, where the objective is to enhance the diversity of the learning sets through random missing value injection, so that the base classifiers can form an accurate classifier ensemble.
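As a hedged sketch of what such a sampling process might look like, the helper below draws a bootstrap replicate and then injects missing values at random; `corrupted_bootstrap` and `missing_rate` are illustrative names, and how the base learner handles the injected NaNs is left to the caller.

import numpy as np

# Illustrative sketch (assumed, not the cited paper's exact procedure).
def corrupted_bootstrap(X, y, missing_rate=0.1, rng=None):
    """Bootstrap replicate with random missing-value injection to increase learning-set diversity."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(X), size=len(X))     # ordinary bootstrap sample
    X_rep = np.asarray(X, dtype=float)[idx].copy()
    mask = rng.random(X_rep.shape) < missing_rate  # positions to corrupt
    X_rep[mask] = np.nan                           # injected missing values
    return X_rep, np.asarray(y)[idx]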
Solving regression problems with rule-based ensemble classifiers
TLDR
A lightweight learning method is presented that induces an ensemble of decision-rule solutions for regression problems by discretizing the target variable with k-means clustering and solving the resulting classification problem.
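A simplified sketch of the discretize-then-classify idea, assuming scikit-learn and a single decision tree in place of the paper's rule-based ensemble; names such as `regress_via_kmeans_classes` are illustrative.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch (assumed setup): k-means discretization of a continuous target.
def regress_via_kmeans_classes(X_train, y_train, X_test, k=8, seed=0):
    """Discretize the target with k-means, fit a classifier, map classes back to values."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(np.asarray(y_train).reshape(-1, 1))
    clf = DecisionTreeClassifier(random_state=seed).fit(X_train, km.labels_)
    # Each predicted class is mapped back to its cluster centre as the numeric prediction.
    return km.cluster_centers_[clf.predict(X_test), 0]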
Bagging Model Trees for Classification Problems
TLDR
A comparison with other well-known ensembles of decision trees on standard benchmark datasets shows that the performance of the proposed technique was greater in most cases.
Bagging Equalizes Influence
TLDR
Experimental evidence is provided supporting the hypothesis that bagging stabilizes prediction by equalizing the influence of training examples, and supporting the view that other resampling strategies, such as half-sampling, should provide qualitatively identical effects while being computationally less demanding than bootstrap sampling.
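To make the comparison concrete, a hypothetical helper contrasting the two resampling strategies might look like this; the `strategy` names are illustrative, not from the paper.

import numpy as np

# Illustrative sketch (assumed): the two resampling strategies being compared.
def resample_indices(n, strategy="bootstrap", rng=None):
    """Indices for one replicate: bootstrap (n with replacement) or half-sampling (n/2 without)."""
    rng = rng or np.random.default_rng()
    if strategy == "bootstrap":
        return rng.integers(0, n, size=n)          # n draws with replacement
    if strategy == "half":
        return rng.permutation(n)[: n // 2]        # half the cases, without replacement
    raise ValueError(f"unknown strategy: {strategy}")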
Bagging down-weights leverage points
  • Yves Grandvalet
  • Mathematics, Computer Science
    Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium
  • 2000
TLDR
The focus here is on the local effects on leverage points, i.e., observations whose fitted values are largely determined by the corresponding response values; these points are shown experimentally to be down-weighted by bagging.
Pruning in Ordered Regression Bagging Ensembles
TLDR
An efficient procedure for pruning regression ensembles is introduced, giving an approximate solution to the problem of extracting the minimum-error subensemble from the original ensemble, a problem that is proved to be NP-hard.
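A minimal sketch of a greedy ordered-pruning heuristic for a bagged regression ensemble; this is one common approximation rather than necessarily the cited paper's exact criterion, and the names are illustrative.

import numpy as np

# Illustrative sketch (assumed heuristic): greedy ordering of members by subensemble error.
def greedy_order_members(member_preds, y_val, keep):
    """Greedily order ensemble members by the validation error of the growing subensemble."""
    member_preds = np.asarray(member_preds, dtype=float)   # (n_members, n_val) predictions on a validation set
    chosen, remaining = [], list(range(len(member_preds)))
    running_sum = np.zeros_like(y_val, dtype=float)
    while remaining and len(chosen) < keep:
        errors = [np.mean(((running_sum + member_preds[m]) / (len(chosen) + 1) - y_val) ** 2)
                  for m in remaining]
        best = remaining[int(np.argmin(errors))]   # member that most reduces subensemble error
        chosen.append(best)
        running_sum += member_preds[best]
        remaining.remove(best)
    return chosen                                  # indices of the pruned subensemble, in order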

References

SHOWING 1-10 OF 15 REFERENCES
Learning classification trees
TLDR
This paper introduces Bayesian techniques for splitting, smoothing, and tree averaging; the splitting criterion is similar to Quinlan's information gain, while smoothing and averaging replace pruning.
Multiple decision trees
TLDR
This paper describes experiments, on two domains, to investigate the effect of averaging over predictions of multiple decision trees, instead of using a single tree, finding that it is best to average across sets of trees with different structure; this usually gives better performance than any of the constituent trees, including the ID3 tree.
Submodel selection and evaluation in regression. The X-random case
Often, in a regression situation with many variables, a sequence of submodels containing fewer variables is generated by using such methods as stepwise addition or deletion of variables, or…
Using the ADAP Learning Algorithm to Forecast the Onset of Diabetes Mellitus
TLDR
Testing the ability of an early neural network model, ADAP, to forecast the onset of diabetes mellitus in a high risk population of Pima Indians and comparing the results with those obtained from logistic regression and linear perceptron models using precisely the same training and forecasting sets.
Error-Correcting Output Codes: A General Method for Improving Multiclass Inductive Learning Programs
TLDR
It is demonstrated that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
Classification of radar returns from the ionosphere using neural networks
TLDR
The multilayer feedforward networks (MLFNs) outperformed the single-layer networks, achieving 100% accuracy on the training set and up to 98% accuracy on the testing set.
Machine Learning, Neural and Statistical Classification
Survey of previous comparisons and theoretical work, descriptions of methods, dataset descriptions, criteria for comparison and methodology (including validation), empirical results, machine learning on…
Multisurface method of pattern separation for medical diagnosis applied to breast cytology.
TLDR
The diagnosis of breast cytology is used to demonstrate the applicability of multisurface pattern separation to medical diagnosis and decision making, and it is found that this mathematical method is applicable to other medical diagnostic and decision-making problems.
Heuristics of instability in model selection
  • Annals of Statistics
  • 1994
k-dt: a multitree learning method
  • Proceedings of the Second International Workshop on Multistrategy Learning
  • 1993