# Accelerating Gradient Boosting Machine

```bibtex
@article{Lu2019AcceleratingGB,
  title   = {Accelerating Gradient Boosting Machine},
  author  = {Haihao Lu and Sai Praneeth Karimireddy and Natalia Ponomareva and Vahab S. Mirrokni},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1903.08708}
}
```

Gradient Boosting Machine (GBM) is an extremely powerful supervised learning algorithm that is widely used in practice. GBM routinely features as a leading algorithm in machine learning competitions such as those hosted on Kaggle and the KDDCup. In this work, we propose Accelerated Gradient Boosting Machine (AGBM), which incorporates Nesterov's acceleration techniques into the design of GBM. The difficulty in accelerating GBM lies in the fact that weak (inexact) learners are commonly used, and therefore the…
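To make the idea concrete, here is a minimal, self-contained sketch of gradient boosting for squared loss with a Nesterov-style momentum sequence. The stump learner, the weight schedule, and all names are simplifying assumptions for illustration, not the paper's exact AGBM algorithm.

```python
# Illustrative sketch: boosting with two coupled sequences, as in
# Nesterov acceleration -- a main model f and a momentum model g.
# Not the paper's exact AGBM; details are simplified assumptions.

def fit_stump(x, r):
    """Fit a one-split regression stump to residuals r over 1-D inputs x."""
    best = None
    for s in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= s]
        right = [ri for xi, ri in zip(x, r) if xi > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((ri - (lm if xi <= s else rm)) ** 2 for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    _, s, lm, rm = best
    return lambda xi, s=s, lm=lm, rm=rm: lm if xi <= s else rm

def agbm_sketch(x, y, rounds=20, eta=0.3):
    f = [0.0] * len(x)  # main-sequence predictions on the training set
    g = [0.0] * len(x)  # momentum-sequence predictions
    for m in range(rounds):
        theta = 2.0 / (m + 2)  # standard Nesterov weight schedule
        mix = [(1 - theta) * fi + theta * gi for fi, gi in zip(f, g)]
        resid = [yi - mi for yi, mi in zip(y, mix)]  # pseudo-residuals
        stump = fit_stump(x, resid)
        f = [mi + eta * stump(xi) for mi, xi in zip(mix, x)]
        g = [gi + (eta / theta) * stump(xi) for gi, xi in zip(g, x)]
    return f  # in-sample predictions of the main sequence
```

The key structural point is that each round fits one weak learner to residuals of a convex combination of the two sequences, then applies the same learner to both with different step sizes.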

## 9 Citations

Soft Gradient Boosting Machine

- Computer Science, ArXiv
- 2020

This work proposes the soft Gradient Boosting Machine (sGBM), which wires multiple differentiable base learners together; by injecting both local and global objectives inspired by gradient boosting, all base learners can then be jointly optimized with linear speed-up.

Gradient Boosting Machine with Partially Randomized Decision Trees

- Computer Science, 2021 28th Conference of Open Innovations Association (FRUCT)
- 2021

This work proposes applying partially randomized trees, which can be regarded as a special case of extremely randomized trees, to the gradient boosting machine in order to reduce its computational complexity.

Accelerating boosting via accelerated greedy coordinate descent

- Computer Science
- 2019

We exploit the connection between boosting and greedy coordinate optimization to produce new accelerated boosting methods. Specifically, we look at increasing block sizes, better selection rules, and…

Benchmarking Machine Learning Models to Assist in the Prognosis of Tuberculosis

- Medicine, Informatics
- 2021

This work benchmarks machine learning models for TB prognosis using a Brazilian health database of confirmed cases and deaths related to TB in the State of Amazonas, predicting the probability of death by TB and thereby aiding prognosis and the associated treatment decision-making process.

Multimodal Predictive Modeling of Endovascular Treatment Outcome for Acute Ischemic Stroke Using Machine-Learning

- Medicine, Stroke
- 2020

Integrative assessment of clinical, multimodal imaging, and angiographic characteristics with machine learning allowed accurate prediction of the clinical outcome following endovascular treatment for acute ischemic stroke.

Automated Residential Energy Audits Using a Smart WiFi Thermostat-Enabled Data Mining Approach

- Engineering
- 2021

This research develops a machine learning model to predict attic and wall R-values, furnace efficiency, and air-conditioning seasonal energy efficiency ratio (SEER), demonstrating promise for low-cost, data-driven energy auditing of residences equipped with smart WiFi thermostats.

Latent Gaussian Model Boosting

- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022

This article introduces a novel approach that combines boosting and latent Gaussian models in order to remedy the above-mentioned drawbacks and to leverage the advantages of both techniques.

Prediction of Heart Disease with Different Attributes Combination by Data Mining Algorithms

- Computer Science
- 2021

Gaussian Process Boosting

- Computer Science, ArXiv
- 2020

An extension that scales to large data using a Vecchia approximation for the Gaussian process model is presented, relying on novel results for covariance parameter inference and achieving increased predictive performance compared to existing approaches.

## References

Showing 1–10 of 46 references.

Randomized Gradient Boosting Machine

- Computer Science, SIAM J. Optim.
- 2020

This work proposes the Randomized Gradient Boosting Machine (RGBM), which achieves substantial computational gains over GBM by using a randomization scheme to reduce the search over the space of weak learners.

Accelerated proximal boosting

- Computer Science, ArXiv
- 2018

This paper proposes to build upon the proximal point algorithm when the empirical risk to minimize is not differentiable, and exhibits a favorable comparison over gradient boosting regarding convergence rate and prediction accuracy.

Accelerated gradient boosting

- Computer Science, Machine Learning
- 2019

It is empirically shown that AGB is less sensitive to the shrinkage parameter and outputs predictors that are considerably more sparse in the number of trees, while retaining the exceptional performance of gradient boosting.

Gradient and Newton Boosting for Classification and Regression

- Computer Science, Expert Syst. Appl.
- 2021

Boosting with early stopping: Convergence and consistency

- Computer Science
- 2005

This paper studies numerical convergence, consistency, and statistical rates of convergence of boosting with early stopping when carried out over the linear span of a family of basis functions, and gives a rigorous proof that, for a linearly separable problem, AdaBoost becomes an L1-margin maximizer when left to run to convergence.
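Independently of the theory in that reference, the early-stopping mechanism itself is simple to state. Below is a generic, hedged sketch of an early-stopping loop for boosting; `fit_round` and `val_loss` are assumed callbacks (hypothetical names), and the patience rule is one common convention, not the paper's criterion.

```python
def boost_with_early_stopping(fit_round, val_loss, max_rounds=100, patience=5):
    """Run boosting rounds, tracking validation loss; stop once the
    loss has failed to improve for `patience` consecutive rounds.
    fit_round(m): performs boosting round m (assumed callback).
    val_loss():   returns the current validation loss (assumed callback).
    Returns the best round index and its validation loss."""
    best, best_m, bad = float("inf"), 0, 0
    for m in range(max_rounds):
        fit_round(m)
        loss = val_loss()
        if loss < best - 1e-12:
            best, best_m, bad = loss, m, 0  # improvement: reset patience
        else:
            bad += 1
            if bad >= patience:
                break  # early stop: validation loss has plateaued
    return best_m, best
```

In practice the model is then truncated to the first `best_m + 1` rounds, which is the capacity-control effect the convergence and consistency results analyze.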

LightGBM: A Highly Efficient Gradient Boosting Decision Tree

- Computer Science, NIPS
- 2017

It is proved that, since data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain a quite accurate estimate of the information gain from a much smaller data size; the resulting implementation is called LightGBM.
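The sampling step described above can be sketched as follows. This is an illustrative reconstruction of the GOSS idea, not LightGBM's implementation; parameter names (`a`, `b`) are assumptions.

```python
import random

def goss_sample(grads, a=0.2, b=0.1, seed=0):
    """Sketch of Gradient-based One-Side Sampling (GOSS): keep the
    top-a fraction of instances by |gradient|, randomly sample a
    b fraction of the remainder, and up-weight the sampled
    small-gradient instances by (1 - a) / b so that gradient sums
    stay approximately unbiased."""
    rng = random.Random(seed)
    n = len(grads)
    order = sorted(range(n), key=lambda i: abs(grads[i]), reverse=True)
    top_n, rest_n = int(a * n), int(b * n)
    top = order[:top_n]                       # large-gradient instances, all kept
    rest = rng.sample(order[top_n:], rest_n)  # random subset of the rest
    # instance index -> weight used when computing information gain
    weights = {i: 1.0 for i in top}
    weights.update({i: (1 - a) / b for i in rest})
    return weights
```

Tree splits are then evaluated only on the weighted subset, which is where the reduction in data size comes from.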

XGBoost: A Scalable Tree Boosting System

- Computer Science, KDD
- 2016

This paper proposes a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning and provides insights on cache access patterns, data compression and sharding to build a scalable tree boosting system called XGBoost.

Special Invited Paper-Additive logistic regression: A statistical view of boosting

- Computer Science
- 2000

This work shows that this seemingly mysterious phenomenon of boosting can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood, and develops more direct approximations and shows that they exhibit nearly identical results to boosting.

A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives

- Computer Science, ArXiv
- 2015

This paper derives novel, comprehensive computational guarantees for several boosting algorithms in linear regression by using techniques of modern first-order methods in convex optimization, and provides a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm with a prespecified learning rate for a fixed but arbitrary number of iterations, for any dataset.
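The correspondence that reference builds on is that least-squares boosting over a fixed dictionary of features is greedy coordinate descent: each round picks the column most correlated with the residual and takes a shrunken step on that coefficient. A minimal sketch of this view, with made-up names and a toy step rule (not the paper's algorithms):

```python
def ls_boost(X, y, steps=50, eta=0.5):
    """Least-squares boosting over the columns of X, written as
    greedy coordinate descent with shrinkage (learning rate) eta.
    X is a list of rows; each column plays the role of a weak learner."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    r = list(y)  # current residual, y - X @ beta
    for _ in range(steps):
        # pick the coordinate (weak learner) most correlated with the residual
        corr = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]
        j = max(range(p), key=lambda k: abs(corr[k]))
        # shrunken least-squares step along that single coordinate
        denom = sum(X[i][j] ** 2 for i in range(n))
        step = eta * corr[j] / denom
        beta[j] += step
        for i in range(n):
            r[i] -= step * X[i][j]
    return beta
```

With `eta = 1` this is pure greedy coordinate descent; smaller `eta` is the boosting "learning rate", which is exactly the regularization effect the computational guarantees quantify.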

Boosting Algorithms as Gradient Descent

- Computer Science, NIPS
- 1999

Following previous theoretical results bounding the generalization performance of convex combinations of classifiers in terms of general cost functions of the margin, a new algorithm (DOOM II) is presented for performing a gradient descent optimization of such cost functions.