Corpus ID: 245424756

Learning with distributional inverters

@article{Binnendyk2021LearningWD,
  title={Learning with distributional inverters},
  author={Eric Binnendyk and Marco Carmosino and Antonina Kolokolova and Ramyaa and Manuel Sabin},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.12340}
}
We generalize the “indirect learning” technique of Furst et al. (1991) to reduce from learning a concept class over a samplable distribution µ to learning the same concept class over the uniform distribution. The reduction succeeds when the sampler for µ is both contained in the target concept class and efficiently invertible in the sense of Impagliazzo and Luby (1989). We give two applications. • We show that AC⁰[q] is learnable over any succinctly-described product distribution. AC⁰[q…
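To make the reduction concrete, here is a minimal sketch (not taken from the paper) of how indirect learning with a distributional inverter can be organized. The names `learn_over_mu`, `sampler`, `inverter`, and `uniform_learner` are illustrative placeholders, and the sketch assumes the inverter returns a near-uniform preimage of its input under the sampler.

```python
# Hedged sketch of the "indirect learning" reduction: learn a concept f over a
# samplable distribution mu = sampler(uniform) by (1) learning the composed
# concept f o sampler over uniform seeds, then (2) translating inputs x ~ mu
# back to seeds with a distributional inverter for the sampler.
# All names here are illustrative, not the paper's API.

def learn_over_mu(labeled_examples_over_mu, sampler, inverter, uniform_learner):
    """labeled_examples_over_mu: list of (x, f(x)) with x drawn from mu = sampler(uniform).
    sampler:  maps a uniform seed r to a sample x = sampler(r).
    inverter: maps x to a near-uniform seed r with sampler(r) = x.
    uniform_learner: PAC learner for the class over the uniform distribution."""
    # Pull each example back to seed space: (r, f(sampler(r))) is a labeled
    # example for the composed concept f o sampler under (near-)uniform r.
    seed_examples = [(inverter(x), label) for (x, label) in labeled_examples_over_mu]

    # Learn a hypothesis h for f o sampler over the uniform distribution.
    h = uniform_learner(seed_examples)

    # To predict f on a fresh x ~ mu, invert x to a seed and evaluate h there.
    def hypothesis_over_mu(x):
        return h(inverter(x))

    return hypothesis_over_mu
```

The requirement that the sampler itself lie in the target concept class is what keeps the composed concept (f composed with the sampler) inside a class the uniform-distribution learner can handle.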
1 Citation

Learning algorithms versus automatability of Frege systems

This work connects learning algorithms with algorithms that automate proof search in propositional proof systems, and proves that several statements are equivalent, including a notion of provable learning.

References

SHOWING 1-10 OF 37 REFERENCES

Learning Algorithms from Natural Proofs

It is argued that a natural proof of a circuit lower bound against any (sufficiently powerful) circuit class yields a learning algorithm for the same circuit class.

Agnostic Learning from Tolerant Natural Proofs

It is shown that if a natural property is useful not only against "easy" functions but also against functions that are close to the class of "easy" functions, then it can be used to obtain an agnostic learning algorithm over the uniform distribution with membership queries.

On the learnability of discrete distributions

A new model of learning probability distributions from independent draws is introduced. It is inspired by the popular Probably Approximately Correct (PAC) model for learning Boolean functions from labeled examples, in that it emphasizes efficient and approximate learning, and it studies the learnability of restricted classes of target distributions.

NP-hardness of circuit minimization for multi-output functions

This work establishes the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits, showing that computing the minimum circuit size of a given multi-output Boolean function f is NP-hard under many-one polynomial-time randomized reductions.

The Power of Natural Properties as Oracles

The results are interpreted as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions.

On the (Non) NP-Hardness of Computing Circuit Complexity

It is shown that MCSP is provably not NP-hard under O(n^(1/2−ε))-time projections, and it is proved that the Σ₂P-hardness of NMCSP, even under arbitrary polynomial-time reductions, would imply EXP ⊄ P/poly.

Sample compression schemes for VC classes

It is shown that every concept class C with VC dimension d has a sample compression scheme of size exponential in d; the proof uses an approximate minimax phenomenon for binary matrices of low VC dimension, which may be of interest in the context of game theory.

Number-theoretic constructions of efficient pseudo-random functions

A new construction of pseudo-random functions is given such that computing their value at any given point involves two multiple products, making it much more efficient than previous proposals.
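As a rough illustration, here is a sketch of the DDH-based Naor-Reingold function as it is commonly presented; the tiny parameters P, Q, G below are toy values chosen only to make the example runnable and provide no security.

```python
# Hedged sketch of a Naor-Reingold-style pseudo-random function:
#   f_a(x) = g^(a_0 * prod_{i : x_i = 1} a_i mod q) mod p,
# so evaluation reduces to one product in the exponent followed by a single
# modular exponentiation. Parameters are illustrative, not secure.

import random

P, Q, G = 23, 11, 2          # 2 has order 11 modulo 23; toy parameters only

def keygen(n):
    """Key a = (a_0, ..., a_n), each a_i uniform in {1, ..., Q-1}."""
    return [random.randrange(1, Q) for _ in range(n + 1)]

def nr_prf(key, x_bits):
    """Evaluate f_a at the bit string x_bits."""
    exponent = key[0]
    for a_i, bit in zip(key[1:], x_bits):
        if bit:
            exponent = (exponent * a_i) % Q
    return pow(G, exponent, P)

# Example: a length-4 key evaluated on the input 1011.
key = keygen(4)
print(nr_prf(key, [1, 0, 1, 1]))
```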

Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm

N. Littlestone, 28th Annual Symposium on Foundations of Computer Science (SFCS 1987), 1987
This work presents a linear-threshold algorithm that learns disjunctive Boolean functions, along with variants for learning other classes of Boolean functions.
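For illustration, a minimal sketch of a Winnow-style multiplicative-update learner in the spirit of this reference; the threshold and update factor below are textbook choices, and the code is a simplification rather than the paper's exact algorithm.

```python
# Hedged sketch of a Winnow-style learner for monotone disjunctions:
# multiplicative weight updates make the mistake bound scale with the
# number of relevant attributes rather than the total number n.

def winnow(examples, n, alpha=2.0):
    """examples: iterable of (x, y) with x a 0/1 list of length n, y in {0, 1}.
    Returns the learned weights; predict 1 iff the sum of active weights >= n."""
    w = [1.0] * n
    theta = float(n)                      # standard threshold choice
    for x, y in examples:
        pred = 1 if sum(w[i] for i in range(n) if x[i]) >= theta else 0
        if pred == 0 and y == 1:          # missed a positive: promote active weights
            for i in range(n):
                if x[i]:
                    w[i] *= alpha
        elif pred == 1 and y == 0:        # false positive: demote active weights
            for i in range(n):
                if x[i]:
                    w[i] /= alpha
    return w

# Toy usage: the target is the disjunction x0 OR x2 over n = 5 attributes.
data = [([1, 0, 0, 0, 0], 1), ([0, 1, 0, 1, 0], 0),
        ([0, 0, 1, 0, 1], 1), ([0, 1, 0, 0, 1], 0)]
print(winnow(data * 5, n=5))
```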

Algebraic methods in the theory of lower bounds for Boolean circuit complexity

It is proved that depth-k circuits with gates NOT, OR and MOD_p, where p is a prime, require exp(Ω(n^(1/2k))) gates to compute MOD_r functions for any r ≠ p^m.