Corpus ID: 14669208

Arcing Classifiers

  • L. Breiman
  • Published 1998
  • Recent work has shown that combining multiple versions of unstable classifiers such as trees or neural nets reduces test set error. One of the more effective methods is bagging (Breiman [1996a]): modified training sets are formed by resampling from the original training set, classifiers are constructed using these training sets, and their predictions are combined by voting. Freund and Schapire [1995, 1996] propose an algorithm whose basis is to adaptively resample and combine (hence the acronym arcing)…
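As a minimal sketch of the bagging procedure the abstract describes (bootstrap resampling plus unweighted voting), the following toy example is illustrative only: the one-dimensional decision-stump base learner and all names are assumptions, not from the paper.

```python
import random
from collections import Counter

def train_stump(data):
    """Toy base learner: pick the 1-D threshold and sign minimizing training error."""
    best = None
    for t in sorted(x for x, _ in data):
        for sign in (1, -1):
            err = sum(1 for x, y in data if (sign if x >= t else -sign) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x: sign if x >= t else -sign

def bagging(data, n_classifiers=25, seed=0):
    """Bagging: draw bootstrap resamples of the training set, train one
    classifier per resample, and combine their predictions by plain voting."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_classifiers):
        resample = [rng.choice(data) for _ in data]  # sample with replacement
        members.append(train_stump(resample))
    def vote(x):
        # majority vote over the ensemble members
        return Counter(h(x) for h in members).most_common(1)[0][0]
    return vote

# toy 1-D data: label +1 when x > 0.5, -1 otherwise
data = [(i / 20, 1 if i / 20 > 0.5 else -1) for i in range(21)]
clf = bagging(data)
```

Arcing differs from this sketch in how the resampling is done: instead of uniform bootstrap draws, the resampling weights are adapted between rounds, and the paper compares such adaptive schemes against the uniform scheme shown here.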
    705 Citations


    • An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants (2,335 citations; highly influenced)
    • A new approach of Boosting using decision tree classifier for classifying noisy data (5 citations)
    • A variant of Rotation Forest for constructing ensemble classifiers (26 citations)
    • An empirical comparison of ensemble methods based on classification trees (70 citations)
    • A new ensemble classifier creation method by creating new training set for each base classifier (4 citations)
    • Boosting Neural Networks (250 citations)
    • A Diversity Production Approach in Ensemble of Base Classifiers (1 citation)
    • Improving the Predictive Power of AdaBoost: A Case Study in Classifying Borrowers (6 citations)


    References

    • Experiments with a New Boosting Algorithm (8,005 citations)
    • Boosting Decision Trees (250 citations)
    • The strength of weak learnability (1,369 citations)
    • Bias Plus Variance Decomposition for Zero-One Loss Functions (674 citations)
    • Efficient Pattern Recognition Using a New Transformation Distance (576 citations)
    • Neural Networks and the Bias/Variance Dilemma (3,251 citations)
    • Cryptographic Limitations on Learning Boolean Formulae and Finite Automata (598 citations)
    • Classification and Regression Trees (17,688 citations)