Corpus ID: 1266014

Boosting Decision Trees

@inproceedings{Drucker1995BoostingDT,
  title={Boosting Decision Trees},
  author={Harris Drucker and Corinna Cortes},
  booktitle={NIPS},
  year={1995}
}
We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. [...]
Key Method
Each expert is trained by minimizing a penalized local cross-validation error using second-order methods.
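The snippet above describes the mechanics only loosely, so here is a minimal sketch of what fitting one locally linear expert could look like: a linear model fit by distance-weighted least squares around a centre c. This is an illustration, not the paper's method; the Gaussian weighting, the ridge penalty lam (a stand-in for the penalized local cross-validation term), and the name fit_local_expert are all assumptions.

```python
# Hypothetical sketch of one locally linear expert, NOT the paper's algorithm.
import numpy as np

def fit_local_expert(X, y, c, bandwidth=1.0, lam=1e-3):
    """Fit a linear model around centre c by penalized weighted least squares."""
    # Gaussian locality weights: points near c dominate the fit.
    w = np.exp(-np.sum((X - c) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    # Ridge term lam stands in for the penalty mentioned in the abstract.
    A = Xb.T @ (w[:, None] * Xb) + lam * np.eye(Xb.shape[1])
    beta = np.linalg.solve(A, Xb.T @ (w * y))
    return beta  # local linear coefficients; last entry is the bias
```

Citations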
Improved Boosting Algorithms Using Confidence-rated Predictions
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions.
A local boosting algorithm for solving classification problems
TLDR
A local boosting algorithm for classification tasks is proposed: in each iteration, a local error is calculated for every training instance, and a function of this local error is used to update the probability that the instance is selected for the next classifier's training set.
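Read literally, the summary suggests a per-instance weight update driven by errors in each instance's neighbourhood. The sketch below is one plausible reading, not the paper's exact procedure; the k-nearest-neighbour definition of "local", the exponential update, and the parameter beta are assumptions.

```python
# One plausible reading of the local update described above (assumptions
# noted in the text): neighbourhood error drives each instance's weight.
import numpy as np

def update_sampling_weights(X, y, predict, weights, k=5, beta=0.5):
    preds = predict(X)  # current classifier's predictions on the training set
    # k nearest neighbours of every instance (position 0 is the point itself)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nn = np.argsort(dist, axis=1)[:, 1:k + 1]
    local_err = (preds[nn] != y[nn]).mean(axis=1)  # fraction wrong nearby
    weights = weights * np.exp(beta * local_err)   # assumed update form
    return weights / weights.sum()                 # renormalize to a distribution
```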
An empirical evaluation of bagging and boosting for artificial neural networks
  • D. Opitz, R. Maclin
  • Computer Science
  • Proceedings of International Conference on Neural Networks (ICNN'97)
  • 1997
TLDR
The results indicate that the ensemble methods can indeed produce very accurate classifiers for some datasets, but that these gains may depend on aspects of the dataset.
Online Ensemble Learning
  • N. Oza
  • Computer Science
  • AAAI/IAAI
  • 2000
TLDR
Online versions of the popular bagging and boosting algorithms are developed; it is shown empirically that both online algorithms converge to the same prediction performance as the batch versions, and this convergence is proved for online bagging (Oza 2000).
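Oza's online bagging replaces the bootstrap with a per-example Poisson draw: each incoming example is shown to each base learner k times, with k ~ Poisson(1), which approximates sampling with replacement as the stream grows. A minimal sketch follows; the update/predict learner interface is an assumed stand-in, not a fixed API.

```python
# Minimal sketch of online bagging via Poisson(1) resampling.
# `learner.update(x, y)` is an assumed incremental-update interface.
import numpy as np

rng = np.random.default_rng(0)

def online_bagging_update(learners, x, y):
    for learner in learners:
        k = rng.poisson(1.0)      # how many copies of (x, y) this learner sees
        for _ in range(k):
            learner.update(x, y)

def ensemble_predict(learners, x):
    votes = [learner.predict(x) for learner in learners]
    return max(set(votes), key=votes.count)  # simple majority vote
```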
The Sources of Increased Accuracy for Two Proposed Boosting Algorithms
TLDR
This study provides evidence that it may be useful to investigate families of boosting algorithms that incorporate varying levels of accuracy and diversity, so as to achieve an appropriate mix for a given task and domain.
An Empirical Evaluation of Bagging and Boosting
TLDR
The results clearly show that even though Bagging almost always produces a better classifier than any of its individual component classifiers and is relatively impervious to overfitting, it does not generalize any better than a baseline neural-network ensemble method.
Hybrid committee machine for incremental learning
  • Jian Yang, Siwei Luo
  • Computer Science
  • 2005 International Conference on Neural Networks and Brain
  • 2005
In this paper we make four modifications to the incremental ensemble learning algorithm Learn++, including (1) the use of a self-growing dynamic committee machine generated by error correlation partition (ECP) …
Advances in Large Margin Classifiers
TLDR
This book provides an overview of recent developments in large margin classifiers, examines connections with other methods, and identifies strengths and weaknesses of the method, as well as directions for future research.

References

A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting
TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting; it is shown that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
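The multiplicative weight update at the heart of AdaBoost is short enough to sketch. The version below assumes binary labels in {-1, +1} and a hypothetical helper weak_learner(X, y, w) that returns a hypothesis h with h(X) in {-1, +1}; everything else follows the standard algorithm.

```python
# Sketch of AdaBoost's multiplicative weight update (labels in {-1, +1}).
# `weak_learner` is a hypothetical helper returning a callable hypothesis.
import numpy as np

def adaboost(X, y, weak_learner, rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)             # start from the uniform distribution
    hypotheses, alphas = [], []
    for _ in range(rounds):
        h = weak_learner(X, y, w)
        pred = h(X)
        err = w[pred != y].sum()        # weighted training error
        if err == 0 or err >= 0.5:      # no usable weak hypothesis this round
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(-alpha * y * pred)  # down-weight hits, up-weight misses
        w /= w.sum()                    # renormalize to a distribution
        hypotheses.append(h)
        alphas.append(alpha)
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, hypotheses)))
```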
Boosting and Other Ensemble Methods
TLDR
A surprising result is shown for the original boosting algorithm: namely, that as the training set size increases, the training error decreases until it asymptotes to the test error rate.
Bagging predictors
TLDR
Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
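Bagging itself is simple enough to sketch: train each tree on a bootstrap resample of the data and aggregate by majority vote. The sketch below uses scikit-learn's DecisionTreeClassifier purely for illustration and assumes integer-coded class labels.

```python
# Sketch of bagging with decision trees (assumes integer-coded labels).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_estimators=25, seed=0):
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return trees

def bagging_predict(trees, X):
    votes = np.stack([t.predict(X) for t in trees]).astype(int)
    # majority vote down each column (one column per sample)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```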
Boosting a weak learning algorithm by majority
TLDR
An algorithm for improving the accuracy of algorithms for learning binary concepts is presented, which combines a large number of hypotheses, each generated by training the given learning algorithm on a different set of examples.
Boosting Performance in Neural Networks
TLDR
The boosting algorithm is used to construct an ensemble of neural networks that significantly improves performance, compared to a single network, on optical character recognition (OCR) problems, in some cases dramatically.
Multiple decision trees
TLDR
This paper describes experiments on two domains investigating the effect of averaging the predictions of multiple decision trees instead of using a single tree. It is best to average across sets of trees with different structure; this usually gives better performance than any of the constituent trees, including the ID3 tree.
Comparison of classifier methods: a case study in handwritten digit recognition
  • L. Bottou, Corinna Cortes, +8 authors V. Vapnik
  • Computer Science
  • Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 3 - Conference C: Signal Processing (Cat. No.94CH3440-5)
  • 1994
This paper compares the performance of several classifier algorithms on a standard database of handwritten digits. We consider not only raw accuracy, but also training time, recognition time, and …
C4.5: Programs for Machine Learning
  • J. R. Quinlan
  • 1993
A decision-theoretic generalization of on-line learning and an application to boosting
  • Y. Freund, R. Schapire
  • Proceedings of the Second European Conference on Computational Learning Theory
  • 1995