Boosting a weak learning algorithm by majority

@article{Freund1990BoostingAW,
  title={Boosting a weak learning algorithm by majority},
  author={Yoav Freund},
  journal={Inf. Comput.},
  year={1995},
  volume={121},
  pages={256--285}
}
  • Y. Freund
  • Published 1 July 1990
  • Computer Science
  • Inf. Comput.
Abstract We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial…
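
To make the recipe in the abstract concrete, here is a minimal Python sketch of the general idea: train many weak hypotheses, each on a differently drawn sample, and combine them by a plain majority vote. The weak_learner interface, the resampling step, and the reweighting constant are illustrative assumptions, not Freund's actual boosting-by-majority weighting scheme.

```python
# Sketch only: many hypotheses, each trained on a differently weighted sample,
# combined by an unweighted majority vote.  The reweighting rule below is an
# illustrative placeholder, NOT Freund's binomial weighting scheme.
import random

def boost_by_majority_sketch(examples, weak_learner, n_rounds, sample_size):
    """examples: list of (x, y) pairs with y in {-1, +1}."""
    weights = [1.0] * len(examples)        # importance of each training example
    hypotheses = []
    for _ in range(n_rounds):
        # Draw a training sample that favors the currently hard examples.
        sample = random.choices(examples, weights=weights, k=sample_size)
        h = weak_learner(sample)           # assumed to return a function x -> {-1, +1}
        hypotheses.append(h)
        # Up-weight examples the current majority vote still gets wrong.
        for i, (x, y) in enumerate(examples):
            vote = sum(g(x) for g in hypotheses)
            if vote * y <= 0:
                weights[i] *= 1.5          # illustrative constant only
    # Final hypothesis: unweighted majority vote over all rounds.
    return lambda x: 1 if sum(g(x) for g in hypotheses) > 0 else -1
```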

Noise tolerant algorithms for learning and searching

A general technique is developed that allows nearly all PAC learning algorithms to be converted into highly efficient PAC learning algorithms that tolerate classification noise and malicious errors; highly efficient algorithms for searching in the presence of linearly bounded errors are also developed.

On Weak Learning

This paper presents relationships between weak learning, weak prediction, and consistency oracles and uses an algorithm to show that a concept class is polynomially learnable if and only if there is a polynomial probabilistic consistency oracle for the class.

Experiments with a New Boosting Algorithm

This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.
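
For context on the algorithm those experiments evaluate, the following is a minimal sketch of binary AdaBoost; the weak_learner(examples, dist) interface is an assumed placeholder. Bagging, by contrast, would train each classifier on a uniform bootstrap resample and combine them by an unweighted vote.

```python
# Minimal sketch of binary AdaBoost; not tuned, and the weak-learner
# interface is an assumption made for illustration.
import math

def adaboost_sketch(examples, weak_learner, n_rounds):
    """examples: list of (x, y) pairs with y in {-1, +1}."""
    n = len(examples)
    dist = [1.0 / n] * n                    # distribution over training examples
    ensemble = []                           # list of (alpha, hypothesis) pairs
    for _ in range(n_rounds):
        h = weak_learner(examples, dist)    # weak learner sees the current weights
        err = sum(d for d, (x, y) in zip(dist, examples) if h(x) != y)
        err = min(max(err, 1e-12), 1 - 1e-12)    # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this hypothesis
        ensemble.append((alpha, h))
        # Multiplicative reweighting: mistakes gain weight, correct examples lose it.
        dist = [d * math.exp(-alpha * y * h(x)) for d, (x, y) in zip(dist, examples)]
        z = sum(dist)
        dist = [d / z for d in dist]
    return lambda x: 1 if sum(a * g(x) for a, g in ensemble) > 0 else -1
```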

The learning of a class of n-dimensional Boolean functions with a two input perceptron using the boosting algorithm

In this paper we present a particular implementation of the Boosting algorithm: its application to the learning of a class of Boolean functions. The Boosting algorithm is that proposed by Schapire…

On Boosting with Optimal Poly-Bounded Distributions

A framework is constructed that allows the distributions produced by certain boosting algorithms to be polynomially bounded without significant performance loss, and that turns AdaBoost into an on-line boosting algorithm (boosting "by filtering") applicable to a wider range of learning problems.

An improved boosting algorithm and its implications on learning complexity

The main result is an improvement of the boosting-by-majority algorithm, which shows that the majority rule is the optimal rule for combining general weak learners and extends the boosting algorithm to concept classes with multi-valued and real-valued labels.

A decision-theoretic generalization of on-line learning and an application to boosting

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
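
To make that weight-update rule concrete, the following LaTeX sketch states the Hedge-style multiplicative update and the form of its loss bound as usually presented in this decision-theoretic setting; the exact constants should be checked against the paper.

```latex
% Hedge-style multiplicative weight update for N strategies with per-round
% losses \ell_i^t \in [0,1] and a fixed parameter \beta \in (0,1):
w_i^{t+1} = w_i^{t}\,\beta^{\ell_i^{t}},
\qquad
p_i^{t} = \frac{w_i^{t}}{\sum_{j=1}^{N} w_j^{t}},
% with cumulative loss bounded relative to the best single strategy:
\sum_{t}\sum_{i} p_i^{t}\,\ell_i^{t}
  \;\le\;
  \frac{\ln N + \bigl(\min_i \sum_t \ell_i^{t}\bigr)\ln(1/\beta)}{1-\beta}.
```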

On Boosting with Polynomially Bounded Distributions

A framework is constructed which allows an algorithm to turn the distributions produced by some boosting algorithms into polynomially smooth distributions, with minimal performance loss, and demonstrates AdaBoost's application to the task of DNF learning using membership queries.

PAC Analogues of Perceptron and Winnow Via Boosting the Margin

We describe a novel family of PAC model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit sample complexity bounds remarkably…

Data filtering and distribution modeling algorithms for machine learning

This thesis is concerned with the analysis of algorithms for machine learning and describes and analyses an algorithm for improving the performance of a general concept learning algorithm by selecting those labeled instances that are most informative.
...

References

The strength of weak learnability

  • R. Schapire
  • Computer Science
    30th Annual Symposium on Foundations of Computer Science
  • 1989
In this paper, it is shown that the two notions of learnability are equivalent, and a method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy.
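
As a reminder of what that conversion looks like quantitatively, the sketch below states the error-amplification step commonly attributed to Schapire's construction; it is a summary from memory, not a quotation from the paper.

```latex
% Error amplification in the construction summarized above: a majority vote
% over three hypotheses, each with error at most \beta < 1/2 on suitably
% filtered distributions, has error at most
g(\beta) \;=\; 3\beta^{2} - 2\beta^{3} \;<\; \beta
\qquad \text{for } 0 < \beta < \tfrac{1}{2},
% and recursing on this construction drives the error below any target \epsilon.
```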

On the necessity of Occam algorithms

It is shown for many natural concept classes that the PAC-learnability of the class implies the existence of an Occam algorithm for the class, and an interpretation of these results is that for many classes, PAC-learnability is equivalent to data compression.

Learnability and the Vapnik-Chervonenkis dimension

This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
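
For the quantitative flavor of that result, the following sketch states the usual form of the sample-complexity upper bound in terms of the VC dimension, with constants omitted; it is a standard statement of the bound, not a verbatim excerpt.

```latex
% For a concept class of VC dimension d, a sample of size
m(\epsilon,\delta)
  \;=\;
  O\!\left(\frac{1}{\epsilon}\left(d\,\log\frac{1}{\epsilon}
        + \log\frac{1}{\delta}\right)\right)
% suffices for PAC learning with error \epsilon and confidence 1-\delta.
```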