# A Gradient-Based Boosting Algorithm for Regression Problems

```bibtex
@inproceedings{Zemel2000AGB,
  title     = {A Gradient-Based Boosting Algorithm for Regression Problems},
  author    = {R. Zemel and T. Pitassi},
  booktitle = {NIPS},
  year      = {2000}
}
```

In adaptive boosting, several weak learners trained sequentially are combined to boost the overall performance of the algorithm. Recently, adaptive boosting methods for classification problems have been derived as gradient descent algorithms. This formulation justifies key elements and parameters in the methods, all chosen to optimize a single common objective function. We propose an analogous formulation for adaptive boosting of regression problems, utilizing a novel objective function that leads to a…
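The gradient-descent view of boosting that the abstract describes can be made concrete with a minimal squared-loss sketch (illustrative only — the paper itself uses a novel objective, and all names below are this sketch's own): each round fits a regression stump to the current residuals, which are exactly the negative gradient of the squared loss with respect to the ensemble's predictions.

```python
import numpy as np

def fit_stump(x, r):
    """Least-squares regression stump: one threshold split, constant leaves."""
    best_err, best = np.inf, None
    for t in np.unique(x)[:-1]:          # exclude max so the right leaf is nonempty
        left, right = r[x <= t], r[x > t]
        lm, rm = left.mean(), right.mean()
        err = ((left - lm) ** 2).sum() + ((right - rm) ** 2).sum()
        if err < best_err:
            best_err, best = err, (t, lm, rm)
    t, lm, rm = best
    return lambda z: np.where(z <= t, lm, rm)

def gradient_boost(x, y, rounds=100, lr=0.1):
    """Each stump is fit to the residuals y - F, i.e. the negative gradient
    of the squared loss 0.5 * (y - F)**2 with respect to the prediction F."""
    pred = np.full(len(y), y.mean())
    for _ in range(rounds):
        h = fit_stump(x, y - pred)       # fit the negative gradient
        pred = pred + lr * h(x)          # small step in function space
    return pred

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)
pred = gradient_boost(x, y)
print(np.mean((y - pred) ** 2))          # training MSE, well below the variance of y
```

Swapping the squared loss for another differentiable loss only changes what the stumps are fit to, which is the sense in which a single objective function determines the whole method.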


#### 87 Citations

A geometric conversion approach for boosting regression problem

- Computer Science
- 2010 2nd International Conference on Computer Engineering and Technology
- 2010

This paper presents a boosting algorithm for regression based on a geometric conversion approach, proves that the algorithm decreases the training error exponentially fast, and validates that the method is effective.

A family of online boosting algorithms

- Computer Science
- 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops
- 2009

This paper develops a boosting framework that can be used to derive online boosting algorithms for various cost functions and presents promising results on a wide range of data sets.

Experiments with AdaBoost.RT, an Improved Boosting Scheme for Regression

- Computer Science, Medicine
- Neural Computation
- 2006

A new boosting algorithm, AdaBoost.RT, is described for regression problems; it requires selecting the sub-optimal value of the error threshold that demarcates examples as poorly or well predicted.

Scale-Space Based Weak Regressors for Boosting

- Mathematics, Computer Science
- ECML
- 2007

A novel scale-space based boosting framework is presented which applies scale-space theory to choose the optimal regressors during the various iterations of the boosting algorithm; results are shown on different real-world regression datasets.

Multi-resolution Boosting for Classification and Regression Problems

- Mathematics, Computer Science
- PAKDD
- 2009

This paper proposes a novel multi-resolution approach for choosing the weak learners during additive modeling, applying insights from multi-resolution analysis to choose the optimal learners at multiple resolutions during different iterations of the boosting algorithm.

Boosting and instability for regression trees

- Computer Science, Mathematics
- Comput. Stat. Data Anal.
- 2006

An AdaBoost-like algorithm for boosting CART regression trees is considered; the ability of boosting to track outliers and to concentrate on hard observations is used to explore a non-standard regression context.

Multi-resolution boosting for classification and regression problems

- Mathematics, Computer Science
- Knowledge and Information Systems
- 2010

This paper proposes a novel multi-resolution approach for choosing the weak learners during additive modeling, applying insights from multi-resolution analysis to choose the optimal learners at multiple resolutions during different iterations of the boosting algorithms, which are simple yet powerful additive modeling methods.

Boosting regression methods based on a geometric conversion approach: Using SVMs base learners

- Mathematics, Computer Science
- Neurocomputing
- 2013

A new approach to extending boosting to regression is proposed that converts a regression sample to a binary classification sample from a geometric point of view, and runs AdaBoost with support vector machine base learners on the converted classification sample.

Robust Regression by Boosting the Median

- Mathematics, Computer Science
- COLT
- 2003

This paper analyzes the choice of the weighted median of base regressors and proposes a general boosting algorithm based on this approach; it proves boosting-type convergence of the algorithm and gives clear conditions for the convergence of the robust training error.
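The robustness argument above rests on combining base regressors by a weighted median rather than a weighted mean; a minimal sketch of the combination step (illustrative, not the paper's full algorithm):

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value v such that the total weight of predictions <= v
    reaches half of the overall weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

preds = np.array([2.0, 2.1, 2.2, 50.0])   # one wildly-off base regressor
wts = np.array([1.0, 1.0, 1.0, 1.0])
print(weighted_median(preds, wts))        # prints 2.1 — the outlier is ignored
```

Unlike a weighted mean (here 14.075), the weighted median is unaffected by a single arbitrarily bad base regressor, which is the source of the robustness.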

AdaBoost.RT: a boosting algorithm for regression problems

- Computer Science
- 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
- 2004

A boosting algorithm, AdaBoost.RT, is proposed for regression problems; it requires selecting the sub-optimal value of the relative error threshold that demarcates predictions from the predictor as correct or incorrect.
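The thresholding mechanism the two AdaBoost.RT entries describe can be sketched as a single weight-update round. This is a hypothetical helper, not the authors' code: it assumes the commonly described update in which the weighted error rate ε sums the weights of examples whose absolute relative error exceeds the threshold φ, and correctly predicted examples have their weights shrunk by β = ε^n before renormalization.

```python
import numpy as np

def adaboost_rt_round(y_true, y_pred, weights, phi=0.1, power=2):
    """One AdaBoost.RT-style weight update (hypothetical sketch).

    Predictions whose absolute relative error exceeds phi count as
    'incorrect'; their weights grow relative to the rest."""
    are = np.abs((y_pred - y_true) / y_true)   # absolute relative error
    incorrect = are > phi
    eps = weights[incorrect].sum()             # weighted error rate
    beta = eps ** power
    new_w = np.where(incorrect, weights, weights * beta)
    return new_w / new_w.sum(), np.log(1.0 / beta)

w = np.full(4, 0.25)
y = np.array([1.0, 2.0, 4.0, 10.0])
pred = np.array([1.02, 2.5, 4.05, 9.9])        # second prediction is off by 25%
w2, alpha = adaboost_rt_round(y, pred, w)
print(w2)                                      # weight mass shifts to the bad example
```

Because φ is a relative-error threshold, the "correct/incorrect" demarcation turns the regression round into a classification-style reweighting, which is what lets the AdaBoost machinery carry over.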

#### References

Showing 1-10 of 21 references.

Barrier Boosting

- Computer Science
- COLT
- 2000

It is shown that convergence of Boosting-type algorithms becomes simpler to prove, and directions to develop further Boosting schemes are outlined; in particular, a new Boosting technique for regression, ε-Boost, is proposed.

Greedy function approximation: A gradient boosting machine.

- Mathematics
- 2001

Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions…

Special Invited Paper-Additive logistic regression: A statistical view of boosting

- Mathematics
- 2000

Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data…

Improved Boosting Algorithms using Confidence-Rated Predictions

- Mathematics, Computer Science
- COLT
- 1998

We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a…

Leveraging for Regression

- Computer Science
- COLT
- 2000

This paper examines master regression algorithms that leverage base regressors by iteratively calling them on modified samples, presents three gradient descent leveraging algorithms for regression, and proves AdaBoost-style bounds on their sample error using intuitive assumptions on the base learners.

A decision-theoretic generalization of on-line learning and an application to boosting

- Computer Science
- EuroCOLT
- 1995

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting; the multiplicative weight-update rule of Littlestone and Warmuth can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.

Neural Network Ensembles, Cross Validation, and Active Learning

- Computer Science
- NIPS
- 1994

It is shown how to estimate the optimal weights of the ensemble members using unlabeled data, and how the ambiguity can be used to select new training data to be labeled in an active learning scheme.

Adaptive Mixtures of Local Experts

- Medicine, Computer Science
- Neural Computation
- 1991

A new supervised learning procedure is presented for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases; each such subtask is demonstrated to be solvable by a very simple expert network.

Training Products of Experts by Minimizing Contrastive Divergence

- Mathematics, Computer Science
- Neural Computation
- 2002

A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary; fitting a PoE by maximum likelihood is difficult, however, because it is hard even to approximate the derivatives of the renormalization term in the combination rule.