Optimal and Adaptive Algorithms for Online Boosting
@inproceedings{Beygelzimer2015OptimalAA,
  title     = {Optimal and Adaptive Algorithms for Online Boosting},
  author    = {Alina Beygelzimer and Satyen Kale and Haipeng Luo},
  booktitle = {ICML},
  year      = {2015}
}
We study online boosting, the task of converting any weak online learner into a strong online learner. Based on a novel and natural definition of weak online learnability, we develop two online boosting algorithms. The first algorithm is an online version of boost-by-majority. By proving a matching lower bound, we show that this algorithm is essentially optimal in terms of the number of weak learners and the sample complexity needed to achieve a specified accuracy. The second algorithm is…
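As a rough illustration of the setting (not the paper's exact Online BBM, whose example weights come from boost-by-majority potentials), the sketch below shows the shape of such an online booster: N weak online learners vote on each example, and each learner is then updated with an importance weight. The `WeakOnlineLearner` stand-in and the weighting rule are hypothetical placeholders.

```python
# Minimal online-boosting skeleton: weak learners vote; each is updated
# per example with an importance weight (here a crude placeholder; the
# paper derives weights from boost-by-majority potentials).
import random

class WeakOnlineLearner:
    """Trivial stand-in weak learner: tracks the weighted majority label."""
    def __init__(self):
        self.score = 0  # positive -> predict +1, negative -> predict -1

    def predict(self, x):
        return 1 if self.score >= 0 else -1

    def update(self, x, y, weight=1.0):
        # Rejection sampling lets a weight-oblivious learner honor weights.
        if random.random() < weight:
            self.score += y

class OnlineBooster:
    def __init__(self, n_learners=10):
        self.learners = [WeakOnlineLearner() for _ in range(n_learners)]

    def predict(self, x):
        votes = sum(wl.predict(x) for wl in self.learners)
        return 1 if votes >= 0 else -1

    def update(self, x, y):
        margin = 0  # running vote of the learners processed so far
        for wl in self.learners:
            # Placeholder rule: emphasize the example while the running
            # vote is still wrong (Online BBM computes this via potentials).
            weight = 1.0 if margin * y <= 0 else 0.5
            wl.update(x, y, weight)
            margin += wl.predict(x)
```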
67 Citations
Online multiclass boosting
- Computer Science · NIPS · 2017
This work defines, and justifies, a weak learning condition for online multiclass boosting that leads to an optimal boosting algorithm that requires the minimal number of weak learners to achieve a certain accuracy.
Online Gradient Boosting
- Computer Science · NIPS · 2015
This work gives a simpler boosting algorithm that converts a weak online learning algorithm into a strong one whose comparator class is the convex hull of the base class, and proves its optimality.
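As a caricature of the idea (assuming squared loss and a hypothetical `OnlineRegressor` base learner; the actual algorithm chooses step sizes and shrinkage more carefully), each base learner fits online the residual left by the partial sum of the learners before it:

```python
# Simplified online gradient boosting for squared loss: learner i is
# trained on the residual of the partial ensemble before it, so the sum
# competes with (a scaling of) the convex hull of the base class.

class OnlineRegressor:
    """Stand-in base learner: running mean of its targets."""
    def __init__(self):
        self.mean, self.n = 0.0, 0

    def predict(self, x):
        return self.mean

    def update(self, x, target):
        self.n += 1
        self.mean += (target - self.mean) / self.n

class OnlineGradientBooster:
    def __init__(self, n_learners=10, eta=0.5):
        self.learners = [OnlineRegressor() for _ in range(n_learners)]
        self.eta = eta  # shrinkage on each learner's contribution

    def predict(self, x):
        return sum(self.eta * wl.predict(x) for wl in self.learners)

    def update(self, x, y):
        partial = 0.0
        for wl in self.learners:
            # For squared loss, the negative functional gradient at the
            # current partial sum is exactly the residual y - partial.
            wl.update(x, y - partial)
            partial += self.eta * wl.predict(x)
```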
Online Agnostic Boosting via Regret Minimization
- Computer Science · NeurIPS · 2020
This work provides the first agnostic online boosting algorithm, which efficiently converts an arbitrary online convex optimizer to an online booster, thus unifying the four settings of statistical/online and agnostic/realizable boosting.
Online Boosting with Bandit Feedback
- Computer Science · ALT · 2021
An efficient regret minimization method is given that has two implications: an online boosting algorithm with noisy multi-point bandit feedback, and a new projection-free online convex optimization algorithm with stochastic gradient that improves state-of-the-art guarantees in terms of efficiency.
Online Boosting Algorithms for Multi-label Ranking
- Computer Science · AISTATS · 2018
This work designs online boosting algorithms with provable loss bounds for multi-label ranking, including an adaptive algorithm that does not require knowledge of the edge of the weak learners and is hence more practical.
Online Non-linear Gradient Boosting in Multi-latent Spaces
- Computer Science · IDA · 2018
This work proposes a new Online Non-Linear gradient Boosting algorithm in which different combinations of the same set of weak classifiers are learned in order to capture the idiosyncrasies of the target concept and expand the expressiveness of the final model.
A Boosting-like Online Learning Ensemble
- Computer Science · 2016 International Joint Conference on Neural Networks (IJCNN)
BOLE was tested against the original and other modified versions of both boosting methods, as well as three renowned ensembles, on well-known artificial and real-world datasets, and statistically surpassed the accuracies of both Boosting-like and Adaptable Diversity-based Online Boosting.
Online Multiclass Boosting with Bandit Feedback
- Computer Science · AISTATS · 2019
An unbiased estimate of the loss using a randomized prediction is proposed, allowing the model to update its weak learners with limited information, and it is proved that the asymptotic error bounds of the bandit algorithms exactly match their full-information counterparts.
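The underlying importance-weighting construction is standard and can be sketched as follows (a generic illustration, not the paper's exact estimator or booster update):

```python
# Importance-weighting trick behind unbiased loss estimates under bandit
# feedback: predict a random label, observe only that label's 0/1 loss,
# and divide by its sampling probability.
import random

def bandit_loss_estimate(probs, true_label):
    """probs: sampling distribution over k labels (sums to 1)."""
    k = len(probs)
    pred = random.choices(range(k), weights=probs)[0]
    observed_loss = 0.0 if pred == true_label else 1.0  # bandit feedback
    estimate = [0.0] * k
    # E[estimate[i]] = probs[i] * loss(i) / probs[i] = loss(i), so the
    # estimate is unbiased for the full (unobserved) loss vector.
    estimate[pred] = observed_loss / probs[pred]
    return pred, estimate
```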
Gradient Boosting on Stochastic Data Streams
- Computer Science · AISTATS · 2017
This work investigates the problem of adapting batch gradient boosting for minimizing convex loss functions to the online setting, and presents an algorithm, Streaming Gradient Boosting (SGB), with exponential shrinkage guarantees in the number of weak learners, along with an adaptation of SGB to optimize non-smooth loss functions.
Boosting for Online Convex Optimization
- Computer Science · ICML · 2021
This work considers the decision-making framework of online convex optimization with a very large number of experts, and gives an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class.
References
Showing 1–10 of 57 references
An Online Boosting Algorithm with Theoretical Justifications
- Computer Science · ICML · 2012
A novel and reasonable assumption for the online weak learner is proposed, and an online boosting algorithm with a strong theoretical guarantee is designed by adapting the offline SmoothBoost algorithm, which matches the assumption closely.
An improved boosting algorithm and its implications on learning complexity
- Computer Science · COLT '92 · 1992
The main result is an improvement of the boosting-by-majority algorithm, which shows that the majority rule is the optimal rule for combining general weak learners, and extends the boosting algorithm to concept classes that give multi-valued and real-valued labels.
Boosting with Online Binary Learners for the Multiclass Bandit Problem
- Computer Science · ICML · 2014
An approach is proposed that systematically converts existing online binary classifiers to promising bandit learners with strong theoretical guarantees, and matches the idea of boosting, which has been shown to be powerful for batch learning as well as online learning.
Boosting a weak learning algorithm by majority
- Computer Science · COLT '90 · 1990
An algorithm is presented for improving the accuracy of algorithms for learning binary concepts by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples.
Online bagging and boosting
- Computer Science · 2005 IEEE International Conference on Systems, Man and Cybernetics
This paper presents online versions of bagging and boosting that require only one pass through the training data and compares the online and batch algorithms experimentally in terms of accuracy and running time.
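The key device in the online bagging half is easy to state: present each incoming example to every base model k ~ Poisson(1) times, which mimics bootstrap resampling in the limit. A minimal sketch (assuming base models exposing an update(x, y) method; the online boosting variant instead adapts the Poisson parameter per learner from tracked errors):

```python
# Online bagging via Poisson(1) replication of each incoming example.
import math
import random

def poisson(lam=1.0):
    # Knuth's method; adequate for small lambda.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def online_bagging_update(models, x, y):
    # Each model sees the example k ~ Poisson(1) times, approximating
    # the bootstrap sampling used by batch bagging.
    for model in models:
        for _ in range(poisson(1.0)):
            model.update(x, y)
```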
FilterBoost: Regression and Classification on Large Datasets
- Computer Science · NIPS · 2007
This work gives the first proof that the algorithm of Collins et al. is a strong PAC learner, albeit within the filtering setting, and shows that it is more robust to noise and overfitting than batch boosters in conditional probability estimation and competitive in classification.
On Boosting with Polynomially Bounded Distributions
- Computer Science · J. Mach. Learn. Res. · 2002
A framework is constructed which allows an algorithm to turn the distributions produced by some boosting algorithms into polynomially smooth distributions, with minimal performance loss, and demonstrates AdaBoost's application to the task of DNF learning using membership queries.
Online Learning and Online Convex Optimization
- Computer Science · Found. Trends Mach. Learn. · 2012
A modern overview of online learning is provided to give the reader a sense of some of the interesting ideas and in particular to underscore the centrality of convexity in deriving efficient online learning algorithms.
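Online gradient descent is the canonical instance of that centrality: against convex losses, a single projected gradient step per round already guarantees O(√T) regret. A minimal sketch with a box constraint (the survey develops the general projection and step-size theory):

```python
# One step of online gradient descent with projection onto the box
# [lo, hi]^d; iterating this over convex loss gradients gives sublinear
# regret with an appropriately decaying step size eta.
def ogd_step(x, grad, eta, lo=-1.0, hi=1.0):
    return [max(lo, min(hi, xi - eta * gi)) for xi, gi in zip(x, grad)]
```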
A decision-theoretic generalization of on-line learning and an application to boosting
- Computer Science · EuroCOLT · 1995
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update rule of Littlestone and Warmuth can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
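The weight update at the heart of this model (Hedge) is simple to state: each expert's weight is multiplied by β raised to its loss, for β ∈ (0, 1), and decisions follow the resulting distribution. A minimal sketch:

```python
# One round of the Hedge multiplicative weight update: experts with low
# cumulative loss come to dominate the weighted decision.
def hedge_update(weights, losses, beta=0.9):
    """weights: current expert weights; losses: this round's losses."""
    new_w = [w * beta ** l for w, l in zip(weights, losses)]
    total = sum(new_w)
    return [w / total for w in new_w]  # renormalize to a distribution
```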
Smooth Boosting and Learning with Malicious Noise
- Computer Science · J. Mach. Learn. Res. · 2003
A new smooth boosting algorithm is described that generates only smooth distributions, which never assign too much weight to any single example, and can be used to construct efficient PAC learning algorithms that tolerate relatively high rates of malicious noise.
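Smoothness here means no example's weight exceeds a small multiple of the uniform weight 1/m. As a loose illustration only (smooth boosters build this property into the weighting rule itself rather than clipping after the fact; the cap-and-renormalize loop below is a hypothetical stand-in):

```python
# Crudely enforce the smoothness property max weight <= kappa/m by
# iterated capping and renormalization (a fixed point exists for kappa >= 1).
def cap_to_smooth(weights, kappa=2.0):
    m = len(weights)
    w = list(weights)
    for _ in range(100):
        w = [min(x, kappa / m) for x in w]
        total = sum(w)
        w = [x / total for x in w]
        if max(w) <= kappa / m + 1e-12:
            break
    return w
```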