The strength of weak learnability

@article{schapire1990strength,
  title={The strength of weak learnability},
  author={R. Schapire},
  journal={Machine Learning},
  year={1990}
}
This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability is able to output an hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce an hypothesis that…
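The equivalence this paper establishes (weak learnability implies strong learnability) is the theoretical foundation of boosting. As an illustration only, here is a minimal AdaBoost-style sketch, the later algorithm of Freund and Schapire rather than this paper's recursive majority-of-three construction, that combines weak decision stumps into a strong classifier; all function names and the toy data are assumptions for the example.

```python
import numpy as np

def stump_learner(X, y, w):
    """Weak learner: pick the single-feature threshold stump with
    the lowest weighted error on labels y in {-1, +1}."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, (j, thr, sign))
    return best[1], best[0]

def stump_predict(h, X):
    j, thr, sign = h
    return sign * np.where(X[:, j] <= thr, 1, -1)

def adaboost(X, y, rounds=10):
    """Boost weak stumps: reweight examples toward the mistakes
    of each round's hypothesis, then vote with weights alpha."""
    n = len(y)
    w = np.ones(n) / n
    ensemble = []
    for _ in range(rounds):
        h, err = stump_learner(X, y, w)
        err = max(err, 1e-10)        # guard division by zero
        if err >= 0.5:               # no longer a weak learner: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(h, X)
        w *= np.exp(-alpha * y * pred)  # upweight misclassified points
        w /= w.sum()
        ensemble.append((alpha, h))
    return ensemble

def predict(ensemble, X):
    """Weighted-majority vote of the weak hypotheses."""
    score = sum(a * stump_predict(h, X) for a, h in ensemble)
    return np.where(score >= 0, 1, -1)
```

On a 1D toy set with positive labels in the middle of the line (where any single stump must err on at least one point, i.e. each weak hypothesis is only slightly better than chance), three boosted stumps already classify the training set perfectly, which is the "weak implies strong" phenomenon in miniature.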

Papers citing this work

On the Sample Complexity of Weakly Learning
Lower Bounds for Learning Discrete
Learning with Limited Visibility
On the Learnability of Discrete Distributions Extended Abstract
Partial Occam's Razor and its Applications
Learning by Refuting
On Restricted-Focus-of-Attention Learnability of Boolean Functions
Agnostic Learning by Refuting
A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting
On the Difficulty of Approximately Maximizing Agreements