Corpus ID: 14856214

A Gradient-Based Boosting Algorithm for Regression Problems

@inproceedings{Zemel2000AGB,
  title={A Gradient-Based Boosting Algorithm for Regression Problems},
  author={R. Zemel and T. Pitassi},
  booktitle={NIPS},
  year={2000}
}
In adaptive boosting, several weak learners trained sequentially are combined to boost the overall algorithm performance. Recently, adaptive boosting methods for classification problems have been derived as gradient descent algorithms. This formulation justifies key elements and parameters in the methods, all chosen to optimize a single common objective function. We propose an analogous formulation for adaptive boosting of regression problems, utilizing a novel objective function that leads to a…
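
The abstract above casts boosting as gradient descent on a single objective function. As a point of reference only, and not the paper's specific objective or example-weighting scheme, the sketch below shows the generic gradient-boosting loop for regression with squared loss, in which each weak learner is fit to the negative gradient of the loss (the residuals) of the current ensemble; the choice of decision-tree base learner, learning rate, and number of rounds is purely illustrative.

# Minimal sketch of gradient boosting for regression with squared loss.
# This illustrates the generic "boosting as gradient descent" recipe only;
# it is NOT the specific objective or example-weighting scheme of the paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # assumed base ("weak") learner

def fit_gradient_boosting(X, y, n_rounds=50, learning_rate=0.1, max_depth=2):
    """Return the initial constant prediction and the list of fitted weak learners."""
    f0 = np.mean(y)                          # initial model: constant prediction
    pred = np.full_like(y, f0, dtype=float)
    stages = []
    for _ in range(n_rounds):
        residuals = y - pred                 # negative gradient of 1/2*(y - f)^2 w.r.t. f
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        pred += learning_rate * h.predict(X) # small step in function space
        stages.append(h)
    return f0, stages

def predict(f0, stages, X, learning_rate=0.1):
    pred = np.full(X.shape[0], f0, dtype=float)
    for h in stages:
        pred += learning_rate * h.predict(X)
    return pred

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    f0, stages = fit_gradient_boosting(X, y)
    print("train MSE:", np.mean((predict(f0, stages, X) - y) ** 2))
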
Citations

A geometric conversion approach for boosting regression problem
  • Peng Kou, F. Gao, Lin Gao
  • Computer Science
  • 2010 2nd International Conference on Computer Engineering and Technology
  • 2010
This paper presents a boosting algorithm for regression based on a geometric conversion approach, proves that the algorithm decreases the training error exponentially fast, and validates that the method is effective.
A family of online boosting algorithms
This paper develops a boosting framework that can be used to derive online boosting algorithms for various cost functions and presents promising results on a wide range of data sets.
Experiments with AdaBoost.RT, an Improved Boosting Scheme for Regression
A new boosting algorithm for regression problems, AdaBoost.RT, is described; it requires selecting a suboptimal value of the error threshold used to demarcate examples as poorly or well predicted.
Scale-Space Based Weak Regressors for Boosting
A novel scale-space based boosting framework that applies scale-space theory to choose the optimal regressors during the various iterations of the boosting algorithm, with results on different real-world regression datasets.
Multi-resolution Boosting for Classification and Regression Problems
This paper proposes a novel multi-resolution approach for choosing the weak learners during additive modeling, applying insights from multi-resolution analysis to choose the optimal learners at multiple resolutions during different iterations of the boosting algorithm.
Boosting and instability for regression trees
An AdaBoost-like algorithm for boosting CART regression trees is considered, and the ability of boosting to track outliers and to concentrate on hard observations is used to explore a non-standard regression context.
Multi-resolution boosting for classification and regression problems
This paper proposes a novel multi-resolution approach for choosing the weak learners during additive modeling: it applies insights from multi-resolution analysis and chooses the optimal learners at multiple resolutions during different iterations of boosting algorithms, which are simple yet powerful additive modeling methods.
Boosting regression methods based on a geometric conversion approach: Using SVMs base learners
A new approach to extending boosting to regression is proposed that converts a regression sample to a binary classification sample from a geometric point of view and then performs AdaBoost with a support vector machine base learner on the converted classification sample.
Robust Regression by Boosting the Median
  • B. Kégl
  • Mathematics, Computer Science
  • COLT
  • 2003
This paper analyzes the choice of the weighted median of the base regressors and proposes a general boosting algorithm based on this approach, proving boosting-type convergence of the algorithm and giving clear conditions for the convergence of the robust training error.
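
The weighted median used above to combine base regressors can be made concrete in a few lines. The following is a minimal illustrative sketch, not Kégl's full algorithm: it simply takes the weighted median of the base regressors' predictions for one input, with per-regressor weights that, in an actual boosting run, would be produced by the boosting procedure.

# Illustrative weighted median of base-regressor outputs (not Kégl's exact algorithm).
# Given predictions p_1..p_T for one input and non-negative weights w_1..w_T,
# the weighted median is the smallest prediction at which the cumulative weight
# reaches half of the total weight.
import numpy as np

def weighted_median(predictions, weights):
    predictions = np.asarray(predictions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(predictions)
    cum = np.cumsum(weights[order])
    idx = np.searchsorted(cum, 0.5 * weights.sum())
    return predictions[order][min(idx, len(predictions) - 1)]

# Example: three base regressors predict 1.0, 5.0, 2.0 with weights 0.2, 0.3, 0.5.
print(weighted_median([1.0, 5.0, 2.0], [0.2, 0.3, 0.5]))  # -> 2.0
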
AdaBoost.RT: a boosting algorithm for regression problems
  • D. Solomatine, D. Shrestha
  • Computer Science
  • 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
  • 2004
A boosting algorithm, AdaBoost.RT, is proposed for regression problems; it requires selecting a suboptimal value of the relative error threshold used to demarcate predictions from the predictor as correct or incorrect.
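
To illustrate the relative-error demarcation mentioned in both AdaBoost.RT entries, here is a minimal sketch of that single step, with made-up variable names; the choice of threshold, the weight update, and the final combination rule of the full algorithm are omitted.

# Illustration of AdaBoost.RT-style demarcation only (not the full algorithm):
# a prediction counts as "incorrect" when its absolute relative error exceeds
# a user-chosen threshold phi, and the weighted error rate sums the weights
# of the incorrect examples.
import numpy as np

def weighted_error_rate(y_true, y_pred, sample_weights, phi=0.1):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    w = np.asarray(sample_weights, dtype=float)
    relative_error = np.abs(y_pred - y_true) / np.abs(y_true)  # assumes nonzero targets
    incorrect = relative_error > phi                           # demarcation step
    return w[incorrect].sum() / w.sum()
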

References

Showing 1-10 of 21 references
Barrier Boosting
It is shown that convergence of Boosting-type algorithms becomes simpler to prove, and directions to develop further Boosting schemes are outlined; in particular, a new Boosting technique for regression, ε-Boost, is proposed.
Greedy function approximation: A gradient boosting machine.
Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions…
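
To make the function-space view concrete, the following identities are standard gradient-boosting facts (stated for illustration, not quoted from this reference): the model is grown by stagewise additive updates, each weak learner is fit to the negative functional gradient of the loss at the training points, and for squared loss that negative gradient is simply the current residual.

\[
  F_m(x) = F_{m-1}(x) + \rho_m\, h_m(x), \qquad
  \tilde{y}_i = -\left.\frac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)}\right|_{F = F_{m-1}},
\]
and for squared loss $L(y, F) = \tfrac{1}{2}(y - F)^2$ the pseudo-targets reduce to the residuals:
\[
  \tilde{y}_i = y_i - F_{m-1}(x_i).
\]
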
Special Invited Paper-Additive logistic regression: A statistical view of boosting
Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data…
Improved Boosting Algorithms using Confidence-Rated Predictions
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a…
Leveraging for Regression
This paper examines master regression algorithms that leverage base regressors by iteratively calling them on modified samples; it presents three gradient descent leveraging algorithms for regression and proves AdaBoost-style bounds on their sample error using intuitive assumptions on the base learners.
A decision-theoretic generalization of on-line learning and an application to boosting
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
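
For reference, the multiplicative weight update at the core of AdaBoost can be stated as follows (standard modern notation, which may differ from the notation used in this paper):
\[
  \alpha_t = \tfrac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}, \qquad
  w_{t+1}(i) = \frac{w_t(i)\,\exp\!\big(-\alpha_t\, y_i\, h_t(x_i)\big)}{Z_t},
\]
where $\epsilon_t$ is the weighted error of the weak hypothesis $h_t$, labels satisfy $y_i \in \{-1,+1\}$, and $Z_t$ normalizes the weights so they sum to one.
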
Neural Network Ensembles, Cross Validation, and Active Learning
It is shown how to estimate the optimal weights of the ensemble members using unlabeled data, and how the ambiguity can be used to select new training data to be labeled in an active learning scheme.
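
The "ambiguity" referred to here obeys a well-known quadratic-loss identity (stated for illustration): the ensemble's squared error equals the weighted average error of its members minus their weighted average disagreement with the ensemble, and since the disagreement term does not involve the target, it can be estimated from unlabeled data.
\[
  \big(\bar{f}(x) - y\big)^2 = \sum_{a} w_a \big(f_a(x) - y\big)^2 - \sum_{a} w_a \big(f_a(x) - \bar{f}(x)\big)^2,
  \qquad \bar{f}(x) = \sum_a w_a f_a(x), \quad \sum_a w_a = 1,\ w_a \ge 0 .
\]
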
Adaptive Mixtures of Local Experts
A new supervised learning procedure for systems composed of many separate expert networks, each of which learns to handle a subset of the complete set of training cases; the procedure is demonstrated on a task whose subtasks can each be solved by a very simple expert network.
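
The combination rule for such a mixture of local experts is commonly written as a gating-weighted sum of expert outputs (a standard formulation, included for illustration; the paper also studies error functions that encourage the experts to specialize):
\[
  \hat{y}(x) = \sum_{i} g_i(x)\, \hat{y}_i(x), \qquad
  g_i(x) = \frac{\exp\!\big(s_i(x)\big)}{\sum_{j} \exp\!\big(s_j(x)\big)},
\]
where $\hat{y}_i(x)$ is the output of expert $i$ and the gating network produces scores $s_i(x)$ that are normalized by a softmax.
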
Training Products of Experts by Minimizing Contrastive Divergence
A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary; training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule.
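
In standard PoE notation (given here for illustration), the combination rule multiplies the experts' distributions and renormalizes, and contrastive divergence avoids the intractable renormalization term by approximately following the gradient of a difference of KL divergences:
\[
  p(d \mid \theta_1, \ldots, \theta_n) = \frac{\prod_m p_m(d \mid \theta_m)}{\sum_{c} \prod_m p_m(c \mid \theta_m)},
\]
where the sum in the denominator (the renormalization term) runs over all possible data vectors $c$, and training approximately follows the gradient of
\[
  \mathrm{CD} = \mathrm{KL}\big(P^{0} \,\|\, P^{\infty}_\theta\big) - \mathrm{KL}\big(P^{1} \,\|\, P^{\infty}_\theta\big),
\]
with $P^{0}$ the data distribution, $P^{1}$ the distribution after one step of Gibbs sampling from the data, and $P^{\infty}_\theta$ the model's equilibrium distribution.
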