Corpus ID: 6618074

Optimal and Adaptive Algorithms for Online Boosting

@inproceedings{Beygelzimer2015OptimalAA,
  title={Optimal and Adaptive Algorithms for Online Boosting},
  author={Alina Beygelzimer and Satyen Kale and Haipeng Luo},
  booktitle={ICML},
  year={2015}
}
We study online boosting, the task of converting any weak online learner into a strong online learner. Based on a novel and natural definition of weak online learnability, we develop two online boosting algorithms. The first algorithm is an online version of boost-by-majority. By proving a matching lower bound, we show that this algorithm is essentially optimal in terms of the number of weak learners and the sample complexity needed to achieve a specified accuracy. The second algorithm is… 
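
The abstract describes a generic online-boosting protocol: a pool of weak online learners receives each example in turn, the booster predicts by combining their votes, and every weak learner is then updated once the label is revealed. The following Python sketch only illustrates that interface under stated assumptions; the DecisionStump weak learner, the uniform combination weights, and the unit importance weights are hypothetical placeholders and do not reproduce the paper's online boost-by-majority or adaptive algorithm.

import numpy as np

class DecisionStump:
    """Toy weak online learner (hypothetical, for illustration only):
    thresholds a single feature and nudges the threshold on mistakes."""

    def __init__(self, feature=0, threshold=0.0, lr=0.1):
        self.feature = feature
        self.threshold = threshold
        self.lr = lr

    def predict(self, x):
        return 1.0 if x[self.feature] >= self.threshold else -1.0

    def update(self, x, y, weight=1.0):
        # On a mistake, move the threshold toward the misclassified point.
        if self.predict(x) != y:
            self.threshold += self.lr * weight * (x[self.feature] - self.threshold)

def run_online_booster(stream, n_weak=5):
    """Generic online-boosting loop sketched from the abstract: n_weak weak
    online learners see each example, the booster predicts by a weighted
    majority vote, and every weak learner is updated after the label arrives.
    Uniform alphas and unit importance weights are placeholders."""
    learners = [DecisionStump(feature=i % 2) for i in range(n_weak)]
    alphas = np.ones(n_weak) / n_weak  # placeholder combination weights
    mistakes = 0
    for x, y in stream:
        votes = np.array([wl.predict(x) for wl in learners])
        y_hat = 1.0 if float(alphas @ votes) >= 0 else -1.0
        mistakes += int(y_hat != y)
        for wl in learners:
            wl.update(x, y, weight=1.0)  # placeholder importance weight
    return mistakes

# Usage: a synthetic stream labeled by the sign of the first feature.
rng = np.random.default_rng(0)
stream = [(x, 1.0 if x[0] >= 0 else -1.0) for x in rng.normal(size=(200, 2))]
print("online mistakes:", run_online_booster(stream))

In the paper's algorithms, the per-learner importance weights and combination weights are derived from the boosting analysis rather than fixed uniformly as in this sketch.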


Online multiclass boosting
TLDR
This work defines, and justifies, a weak learning condition for online multiclass boosting that leads to an optimal boosting algorithm that requires the minimal number of weak learners to achieve a certain accuracy.
Online Gradient Boosting
TLDR
This work gives a simpler boosting algorithm that converts a weak online learning algorithm into a strong one where the larger class of functions is the convex hull of the base class, and proves its optimality.
Online Agnostic Boosting via Regret Minimization
TLDR
This work provides the first agnostic online boosting algorithm, which efficiently converts an arbitrary online convex optimizer to an online booster, thus unifying the 4 cases of statistical/online and agnostic/realizable boosting.
Online Boosting with Bandit Feedback
TLDR
An efficient regret minimization method is given that has two implications: an online boosting algorithm with noisy multi-point bandit feedback, and a new projection-free online convex optimization algorithm with stochastic gradient that improves state-of-the-art guarantees in terms of efficiency.
Online Boosting Algorithms for Multi-label Ranking
TLDR
This work designs online boosting algorithms with provable loss bounds for multi-label ranking, including an adaptive algorithm that does not require knowledge of the edge of the weak learners and is hence more practical.
Online Non-linear Gradient Boosting in Multi-latent Spaces
TLDR
This work proposes a new Online Non-Linear gradient Boosting algorithm that learns different combinations of the same set of weak classifiers in order to capture the idiosyncrasies of the target concept and expand the expressiveness of the final model.
A Boosting-like Online Learning Ensemble
TLDR
BOLE was tested against the original and other modified versions of both boosting methods, as well as three renowned ensembles, on well-known artificial and real-world datasets, and it statistically surpassed the accuracies of both the Boosting-like and the Adaptable Diversity-based Online Boosting methods.
Online Multiclass Boosting with Bandit Feedback
TLDR
An unbiased estimate of the loss using a randomized prediction is proposed, allowing the model to update its weak learners with limited information, and it is proved that the asymptotic error bounds of the bandit algorithms exactly match their full information counterparts.
Gradient Boosting on Stochastic Data Streams
TLDR
This work investigates the problem of adapting batch gradient boosting for minimizing convex loss functions to the online setting, and presents an algorithm, Streaming Gradient Boosting (SGB), with exponential shrinkage guarantees in the number of weak learners, along with an adaptation of SGB for optimizing non-smooth loss functions.
Boosting for Online Convex Optimization
TLDR
This work considers the decision-making framework of online convex optimization with a very large number of experts, and gives an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class.

References

SHOWING 1-10 OF 57 REFERENCES
An Online Boosting Algorithm with Theoretical Justifications
TLDR
A novel and reasonable assumption for the online weak learner is proposed, and an online boosting algorithm with a strong theoretical guarantee is designed by adapting from the offline SmoothBoost algorithm that matches the assumption closely.
An improved boosting algorithm and its implications on learning complexity
TLDR
The main result is an improvement of the boosting-by-majority algorithm, which shows that the majority rule is the optimal rule for combining general weak learners and extends the boosting algorithm to concept classes with multi-valued and real-valued labels.
Boosting with Online Binary Learners for the Multiclass Bandit Problem
TLDR
An approach is proposed that systematically converts existing online binary classifiers into promising bandit learners with strong theoretical guarantees, and that matches the idea of boosting, which has been shown to be powerful for batch learning as well as online learning.
Boosting a weak learning algorithm by majority
TLDR
An algorithm for improving the accuracy of algorithms for learning binary concepts by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples, is presented.
Online bagging and boosting
TLDR
This paper presents online versions of bagging and boosting that require only one pass through the training data and compares the online and batch algorithms experimentally in terms of accuracy and running time.
FilterBoost: Regression and Classification on Large Datasets
TLDR
This work gives the first proof that the algorithm of Collins et al. is a strong PAC learner, albeit within the filtering setting; the algorithm proves more robust to noise and overfitting than batch boosters in conditional probability estimation and competitive in classification.
On Boosting with Polynomially Bounded Distributions
TLDR
A framework is constructed which allows an algorithm to turn the distributions produced by some boosting algorithms into polynomially smooth distributions, with minimal performance loss, and demonstrates AdaBoost's application to the task of DNF learning using membership queries.
Online Learning and Online Convex Optimization
TLDR
A modern overview of online learning is provided to give the reader a sense of some of the interesting ideas and in particular to underscore the centrality of convexity in deriving efficient online learning algorithms.
A decision-theoretic generalization of on-line learning and an application to boosting
TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting; the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
Smooth Boosting and Learning with Malicious Noise
TLDR
A new smooth boosting algorithm is described which generates only smooth distributions which do not assign too much weight to any single example and can be used to construct efficient PAC learning algorithms which tolerate relatively high rates of malicious noise.