Corpus ID: 14780429

Predicting with Distributions

@article{Kearns2016PredictingWD,
  title={Predicting with Distributions},
  author={Michael Kearns and Zhiwei Steven Wu},
  journal={ArXiv},
  year={2016},
  volume={abs/1606.01275}
}
We consider a new learning model in which a joint distribution over vector pairs $(x, y)$ is determined by an unknown function $c(x)$ that maps input vectors $x$ not to individual outputs, but to entire *distributions* over output vectors $y$. Our main results take the form of rather general reductions from our model to algorithms for PAC learning the function class and the distribution class separately, and show that virtually every such combination yields an efficient algorithm in our…
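To make the sampling model concrete, here is a minimal sketch, assuming a hypothetical concept $c(x)$ that selects a product distribution over binary output vectors (one possible choice of distribution class; the matrix `W` and all dimensions below are illustrative, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 5, 3                  # input/output dimensions (arbitrary)
W = rng.normal(size=(n, m))  # stands in for the unknown concept's parameters

def c(x):
    """Hypothetical target: maps x to the mean vector of a product
    distribution over {0,1}^m (one possible 'distribution class')."""
    return 1.0 / (1.0 + np.exp(-(x @ W)))

def sample_pair():
    x = rng.normal(size=n)
    p = c(x)                             # an entire distribution over y
    y = (rng.random(m) < p).astype(int)  # one draw from that distribution
    return x, y

examples = [sample_pair() for _ in range(4)]
```

The learner observes only pairs like `examples`; it never sees the distribution $c(x)$ itself, which is what distinguishes the model from ordinary multi-label prediction.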
4 Citations


Using consensus mapping methods as an efficient way of depicting avian distributions in the Caatinga Dry Forest, a poorly known Neotropical biome

Mapping species distributions has become central for biodiversity research. Different mapping methods, however, may result in dramatically different spatial patterns. We used expert-drawn maps…

Patterns of livestock depredation by snow leopards and effects of intervention strategies: lessons from the Nepalese Himalaya

Context: Large carnivores are increasingly threatened by anthropogenic activities, and their protection is among the main goals of biodiversity conservation. The snow leopard (Panthera uncia)…

Cameroon’s adaptation to climate change and sorghum productivity

Cameroon’s semi-arid zone (northern and far northern regions) is an important part of the local ecosystem that is vulnerable to climate change. This vulnerability raises concerns about rural…

References

Showing 1-10 of 25 references

On the learnability of discrete distributions

A new model of learning probability distributions from independent draws is introduced. It is inspired by the popular Probably Approximately Correct (PAC) model for learning Boolean functions from labeled examples, in that it emphasizes efficient and approximate learning, and it studies the learnability of restricted classes of target distributions.
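As a toy instance of that setting (illustrative only, not one of the paper's algorithms), one can estimate a small discrete distribution from independent draws and measure the error in total variation distance:

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.5, 0.3, 0.2])    # unknown discrete target distribution
draws = rng.choice(len(target), size=5000, p=target)   # independent draws

# "Learn" by empirical frequencies; measure error in total variation.
estimate = np.bincount(draws, minlength=len(target)) / len(draws)
tv = 0.5 * np.abs(target - estimate).sum()
print(f"total variation distance after 5000 draws: {tv:.4f}")
```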

Efficient noise-tolerant learning from statistical queries

This paper formalizes a new but related model of learning from statistical queries, and demonstrates the generality of the statistical query model, showing that practically every class learnable in Valiant’s model and its variants can also be learned in the new model (and thus can be learned in the presence of noise).
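A sketch of the statistical query interface, assuming a simulated oracle that answers expectation queries $E[\chi(x, c(x))]$ to within an additive tolerance $\tau$ (the target function and query below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def sq_oracle(chi, xs, target, tau):
    """Answer E[chi(x, c(x))] up to additive error tau, simulated here
    by an empirical average plus a bounded perturbation."""
    empirical = np.mean([chi(x, target(x)) for x in xs])
    return empirical + rng.uniform(-tau, tau)

target = lambda x: int(x.sum() > 0)   # hypothetical Boolean target c
xs = rng.normal(size=(1000, 4))

# Example query: the probability that the label equals 1.
answer = sq_oracle(lambda x, y: y, xs, target, tau=0.01)
```

The point of the interface is that the learner never touches individual labeled examples, only noisy expectations, which is why SQ algorithms tolerate classification noise.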

Learning From Noisy Examples

This paper shows that when the teacher may make independent random errors in classifying the example data, the strategy of selecting the most consistent rule for the sample is sufficient, and usually requires a feasibly small number of examples, provided noise affects less than half the examples on average.
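A minimal sketch of that strategy, assuming a toy hypothesis class of single-coordinate sign thresholds and labels flipped independently at rate $\eta < 1/2$ (all names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
eta = 0.2                                  # noise rate, must be below 1/2

xs = rng.normal(size=(500, 2))
clean = xs[:, 0] > 0                       # labels from a true threshold rule
noisy = clean ^ (rng.random(len(clean)) < eta)   # teacher's random errors

# Hypothesis class: sign thresholds on a single coordinate (illustrative).
hypotheses = [(j, s) for j in range(2) for s in (1.0, -1.0)]

def predict(h, X):
    j, s = h
    return s * X[:, j] > 0

# Select the rule most consistent with the *noisy* sample.
best = min(hypotheses, key=lambda h: np.sum(predict(h, xs) != noisy))
```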

Learning mixtures of arbitrary gaussians

This paper presents the first algorithm that provably learns the component Gaussians in time polynomial in the dimension.

Learning mixtures of product distributions over discrete domains

This work gives a poly(n/ε)-time algorithm for learning a mixture of k arbitrary product distributions over the n-dimensional Boolean cube to accuracy ε, and shows that no polynomial-time algorithm can succeed when k is superconstant.

PAC Learning with Constant-Partition Classification Noise and Applications to Decision Tree Induction

A new model of noise called constant-partition classification noise (CPCN) is introduced which generalizes the standard model of classification noise to allow different examples to have different rates of random misclassification.
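A sketch of how CPCN noise might be generated, assuming a two-part partition of the example space with a different flip rate in each part (the partition, rates, and labeling rule below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(4)

xs = rng.normal(size=(1000, 2))
labels = xs[:, 0] + xs[:, 1] > 0            # clean labels (illustrative)

# Constant-size partition of the example space, each part with its own
# misclassification rate, generalizing uniform classification noise.
in_region_a = xs[:, 0] > 0
rates = np.where(in_region_a, 0.1, 0.3)     # eta_a = 0.1, eta_b = 0.3
noisy = labels ^ (rng.random(len(labels)) < rates)
```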

Combining labeled and unlabeled data with co-training

A PAC-style analysis is provided for a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views, allowing inexpensive unlabeled data to augment a much smaller set of labeled examples.
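A minimal co-training sketch, assuming two linear classifiers (scikit-learn's LogisticRegression) that alternately pseudo-label their most confident unlabeled examples; the data, views, and schedule below are illustrative, not the paper's analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # label depends on both views
view1, view2 = X[:, :2], X[:, 2:]         # two "views" of each example

train_idx, train_lab = list(range(20)), list(y[:20])  # small labeled pool
pool = list(range(20, 300))                           # unlabeled pool

for _ in range(5):                        # a few co-training rounds
    c1 = LogisticRegression().fit(view1[train_idx], train_lab)
    c2 = LogisticRegression().fit(view2[train_idx], train_lab)
    for clf, view in ((c1, view1), (c2, view2)):
        # Each classifier pseudo-labels its most confident pool example,
        # growing the training set the other classifier sees next round.
        probs = clf.predict_proba(view[pool])
        i = int(probs.max(axis=1).argmax())
        train_idx.append(pool[i])
        train_lab.append(int(clf.classes_[probs[i].argmax()]))
        pool.pop(i)
```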

PAC Learning Axis-Aligned Mixtures of Gaussians with No Separation Assumption

A new vantage point for the learning of mixtures of Gaussians is proposed: namely, the PAC-style model of learning probability distributions introduced by Kearns et al.

The Strength of Weak Learnability

In this paper, a method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy, and it is shown that the two notions of learnability (weak and strong) are equivalent.
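The weak-to-strong conversion can be illustrated with an AdaBoost-style reweighting loop over decision stumps (a later, now-standard instantiation of boosting, not this paper's construction; every parameter below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)   # labels in {-1, +1}

def best_stump(X, y, w):
    """Weighted-error-minimizing decision stump: a 'weak' hypothesis."""
    best = None
    for j in range(X.shape[1]):
        for s in (1.0, -1.0):
            pred = np.where(s * X[:, j] > 0, 1, -1)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, j, s)
    return best

w = np.full(len(y), 1.0 / len(y))          # start with uniform weights
ensemble = []
for _ in range(10):
    err, j, s = best_stump(X, y, w)
    alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
    pred = np.where(s * X[:, j] > 0, 1, -1)
    w *= np.exp(-alpha * y * pred)         # upweight the examples it missed
    w /= w.sum()
    ensemble.append((alpha, j, s))

votes = sum(a * np.where(s * X[:, j] > 0, 1, -1) for a, j, s in ensemble)
print("training accuracy:", (np.sign(votes) == y).mean())
```

Each round forces the next weak hypothesis to focus on previously misclassified examples, which is the mechanism behind the weak/strong equivalence.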