Corpus ID: 16242966

Improving Regressors using Boosting Techniques

@inproceedings{Drucker1997ImprovingRU,
  title={Improving Regressors using Boosting Techniques},
  author={Harris Drucker},
  booktitle={ICML},
  year={1997}
}
In the regression context, boosting and bagging are techniques to build a committee of regressors that may be superior to a single regressor. [...] Key result: in all cases, boosting is at least equivalent to, and in most cases better than, bagging in terms of prediction error.
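
The boosting-versus-bagging comparison summarized above can be sketched with scikit-learn, whose AdaBoostRegressor follows the AdaBoost.R2 procedure described in this paper. The snippet below is only an illustrative sketch, not the paper's experimental setup: the synthetic dataset, tree depth, and ensemble size are arbitrary assumptions, and the estimator keyword assumes scikit-learn 1.2 or later (older releases call it base_estimator).

    # Illustrative sketch: boosting (AdaBoost.R2) vs. bagging with regression-tree
    # base learners. Dataset and hyperparameters are assumptions, not the paper's setup.
    from sklearn.datasets import make_friedman1
    from sklearn.ensemble import AdaBoostRegressor, BaggingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_friedman1(n_samples=1000, noise=1.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    base = DecisionTreeRegressor(max_depth=4)

    # Boosting reweights the training sample toward points with large residuals.
    boosted = AdaBoostRegressor(estimator=base, n_estimators=100,
                                loss="linear", random_state=0).fit(X_train, y_train)

    # Bagging averages trees fit to independent bootstrap resamples.
    bagged = BaggingRegressor(estimator=base, n_estimators=100,
                              random_state=0).fit(X_train, y_train)

    print("boosting MAE:", mean_absolute_error(y_test, boosted.predict(X_test)))
    print("bagging  MAE:", mean_absolute_error(y_test, bagged.predict(X_test)))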

Citations

Using Boosting to prune Double-Bagging ensembles
In this paper, Boosting is used to determine the order in which base predictors are aggregated into a Double-Bagging ensemble, and a subensemble is constructed by early stopping the aggregation.
Boosting and instability for regression trees
An AdaBoost-like algorithm for boosting CART regression trees is considered; the ability of boosting to track outliers and to concentrate on hard observations is used to explore a non-standard regression context.
Boosting Using Neural Networks
A method to construct a committee of weak learners that lowers the error rate in classification and the prediction error in regression is presented; the importance of using separate training, validation, and test sets to obtain good generalisation is stressed.
Boosting Methods for Regression
This paper examines ensemble methods for regression that leverage or “boost” base regressors by iteratively calling them on modified samples, and bounds the complexity of the regression functions produced in order to derive PAC-style bounds on their generalization errors.
Boosting for Regression Transfer
This work introduces the first boosting-based algorithms for transfer learning that apply to regression tasks; it describes two existing classification transfer algorithms, ExpBoost and TrAdaBoost, and shows how they can be modified for regression.
The State of Boosting
In many problem domains, combining the predictions of several models often results in a model with improved predictive performance. Boosting is one such method that has shown great promise. [...]
Combining Bagging, Boosting and Random Subspace Ensembles for Regression Problems
Bagging, boosting and random subspace methods are well-known resampling ensemble methods that generate and combine a diversity of learners using the same learning algorithm for the base regressor.
A Comparison of Model Aggregation Methods for Regression
Experiments reveal that different types of AdaBoost algorithms require different complexities of base models; they outperform Bagging at their best, but Bagging achieves a consistent level of success with all base models, providing a robust alternative.
Boosting methodology for regression problems
This paper develops a new boosting method for regression problems that casts the regression problem as a classification problem and applies an interpretable form of the boosted naive Bayes classifier, which induces a regression model that is shown to be expressible as an additive model.
Random Subspacing for Regression Ensembles
A novel approach to ensemble learning for regression models is presented, combining the ensemble generation technique of the random subspace method with the ensemble integration methods of Stacked Regression and Dynamic Selection; the approach is shown to be more effective than the popular ensemble methods of Bagging and Boosting.

References

Showing 1-10 of 26 references

Bagging predictors
Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
Boosting Decision Trees
A constructive, incremental learning system for regression problems that models data by means of locally linear experts that do not compete for data during learning; asymptotic results for this method are derived.
Boosting Performance in Neural Networks
The boosting algorithm is used to construct an ensemble of neural networks that significantly, and in some cases dramatically, improves performance (compared to a single network) on optical character recognition (OCR) problems.
Boosting and Other Ensemble Methods
A surprising result is shown for the original boosting algorithm: namely, that as the training set size increases, the training error decreases until it asymptotes to the test error rate.
Experiments with a New Boosting Algorithm
This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.
A decision-theoretic generalization of on-line learning and an application to boosting
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting; the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
Bias, Variance, and Arcing Classifiers
This work explores two arcing algorithms, compares them to each other and to bagging, and tries to understand how arcing works; arcing is more successful than bagging in variance reduction.
Classification and Regression Trees
This chapter discusses tree classification in the context of medicine, where Right Sized Trees and Honest Estimates are considered, and Bayes Rules and Partitions are used as guides to optimal pruning.
OC1: A Randomized Induction of Oblique Decision Trees
A new method that combines deterministic and randomized procedures to search for a good tree is explored; the accuracy of the trees found with this method matches or exceeds the best results of other machine learning methods.
Stacked generalization (D. Wolpert, Neural Networks, 1992)
The conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate.