A theory of the learnable

@inproceedings{Valiant1984ATO,
  title={A theory of the learnable},
  author={Leslie G. Valiant},
  booktitle={STOC '84},
  year={1984}
}
  • L. Valiant
  • Published in STOC '84, 5 November 1984
  • Computer Science
Humans appear to be able to learn new concepts without needing to be programmed explicitly in any conventional sense. […] The methodology and results suggest concrete principles for designing realistic learning systems.
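The learning model this paper introduces (now called PAC learning) can be illustrated with a minimal sketch, not taken from the paper itself: learning an unknown interval on [0, 1] from random labeled examples by outputting the tightest interval around the positives. The target interval [0.3, 0.7] and all names below are illustrative assumptions.

```python
import random

def pac_learn_interval(examples):
    """Tightest-fit hypothesis: the smallest interval covering all positives."""
    positives = [x for x, label in examples if label]
    if not positives:
        return (0.0, 0.0)  # degenerate empty hypothesis
    return (min(positives), max(positives))

# Unknown target concept: membership in [0.3, 0.7] (chosen for illustration).
def target(x):
    return 0.3 <= x <= 0.7

random.seed(0)
train = [(x, target(x)) for x in (random.random() for _ in range(1000))]
a, b = pac_learn_interval(train)

# Error on fresh random examples shrinks as the sample size grows,
# matching the PAC guarantee of error at most eps with probability 1 - delta.
test_points = [random.random() for _ in range(10000)]
err = sum(target(x) != (a <= x <= b) for x in test_points) / len(test_points)
print(round(a, 3), round(b, 3), round(err, 4))
```

The tightest-fit hypothesis never overshoots the target, so its only error comes from the thin gaps at the interval's ends, which vanish as more examples arrive.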

PAC-learning is Undecidable

TLDR
It is proved that testing for PAC-learnability is undecidable in the Turing sense, exposing a fundamental limitation in the decidability of learning.

Oblivious PAC Learning of Concept Hierarchies

TLDR
An extension of the Probably Approximately Correct (PAC) learning model is introduced to study the problem of learning inclusion hierarchies of concepts (sometimes called is-a hierarchies) from random examples with the property that each run is oblivious of all other runs.

On Exploiting Knowledge and Concept Use in Learning Theory

TLDR
This work discusses analogous results from the literature on human concept learning, and reviews current theories as to how people are able to more effectively learn in the presence of background knowledge and the discovery of information via execution of tasks related to the concept acquisition process.


Teaching with IMPACT

  • Carl Trimbach, M. Littman
  • Computer Science
    Developing Expert Learners: A Roadmap for Growing Confident and Competent Students
  • 2019
TLDR
A new learning framework that provides a role for a knowledgeable, benevolent teacher to guide the process of learning a target concept in a series of "curricular" phases or rounds is proposed, enabling simple, efficient learners to acquire very complex concepts from examples.

On Learnability with Computable Learners

TLDR
The notion of CPAC learnability is proposed, by adding some basic computability requirements into a PAC learning framework, and it is shown that in this framework learnability of a binary hypothesis class is not implied by finiteness of its VC-dimension anymore.
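The VC-dimension condition discussed above can be made concrete with a small shattering check; this sketch is illustrative and not from the paper. Threshold classifiers on the real line shatter any single point but no pair of points, so their VC dimension is 1.

```python
def shatters(points, hypotheses):
    """True if every +/- labeling of `points` is realized by some hypothesis."""
    realized = {tuple(h(x) for x in points) for h in hypotheses}
    return len(realized) == 2 ** len(points)

# Threshold classifiers h_t(x) = [x >= t], sampled over a grid of thresholds.
thresholds = [i / 10 for i in range(-5, 26)]
H = [(lambda x, t=t: x >= t) for t in thresholds]

print(shatters([1.0], H))        # a single point: both labelings occur
print(shatters([0.5, 1.5], H))   # a pair: the labeling (+, -) never occurs
```

For the pair, a threshold below both points labels them (+, +), one between them (-, +), and one above both (-, -); (+, -) is impossible, so no pair is shattered.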

On learning from exercises

Learning From a Monotonous, Ignorant Teacher

TLDR
A class of functions is described where a computer can efficiently learn when it is only permitted to pose membership queries, which has interesting applications in graph theory and data mining.

Learning fallible finite state automata

TLDR
A polynomial time algorithm using membership queries for correcting and learning fallible DFA’s under the uniform distribution is presented.
...

References

Showing 1–10 of 12 references

Deductive learning

  • L. Valiant
  • Education
    Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences
  • 1984
A non-technical discussion of a new approach to the problem of concept learning in the context of artificial devices is given. Learning is viewed as a process of acquiring a program for recognizing a …

The complexity of theorem-proving procedures

  • S. Cook
  • Mathematics, Computer Science
    STOC
  • 1971
It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be "reduced" to the problem of determining whether a given propositional formula is a tautology.

Inductive Inference: Theory and Methods

TLDR
This survey highlights and explains the main ideas that have been developed in the study of inductive inference, with special emphasis on the relations between the general theory and the specific algorithms and implementations.

Machine learning - an artificial intelligence approach

This book contains tutorial overviews and research papers on contemporary trends in the area of machine learning viewed from an AI perspective. Research directions covered include: learning from …

The Handbook of Artificial Intelligence

TLDR
This volume contains a collection of articles by acclaimed experts representing the leading edge of knowledge about the field of AI.

How to construct random functions

TLDR
A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented that has applications in cryptography, random constructions, and complexity theory.
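The generator summarized here is the GGM tree construction from a length-doubling pseudorandom generator. A minimal sketch follows, using SHA-256 as a stand-in for the PRG; the hash choice and function names are assumptions for illustration, not part of the original construction.

```python
import hashlib

def prg(seed: bytes) -> tuple:
    """Stand-in length-doubling PRG: one 32-byte seed -> two 32-byte halves."""
    return (hashlib.sha256(b"0" + seed).digest(),
            hashlib.sha256(b"1" + seed).digest())

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM pseudorandom function: walk the PRG tree along the bits of x."""
    node = key
    for bit in x:
        left, right = prg(node)
        node = right if bit == "1" else left
    return node

key = b"\x00" * 32
print(ggm_prf(key, "0110").hex()[:16])
```

Each input bit selects one of the two PRG outputs at the current tree node, so an n-bit input defines a leaf of a depth-n binary tree whose value serves as the function output.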

Probabilistic Methods in Combinatorics

In 1947 Paul Erdős [8] began what is now called the probabilistic method. He showed that if \(\binom{n}{k} 2^{1 - \binom{k}{2}} < 1\), then the edges of the complete graph \(K_n\) can be two-colored so that no \(K_k\) is monochromatic, and hence \(R(k, k) > n\).
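Erdős's counting bound can be checked numerically: when \(\binom{n}{k} 2^{1-\binom{k}{2}} < 1\), a random two-coloring of \(K_n\) avoids a monochromatic \(K_k\) with positive probability, so \(R(k,k) > n\). This is a sketch of that check, not code from the book.

```python
from math import comb

def erdos_bound_holds(n: int, k: int) -> bool:
    """C(n, k) * 2^(1 - C(k, 2)) < 1  implies  R(k, k) > n."""
    return comb(n, k) * 2 ** (1 - comb(k, 2)) < 1

# For k = 6, n = 8: 28 * 2**-14 < 1, so R(6, 6) > 8; the bound fails at n = 100.
# The bound roughly gives R(k, k) > 2**(k/2).
print(erdos_bound_holds(8, 6), erdos_bound_holds(100, 6))
```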

Pattern classification and scene analysis

TLDR
The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

A complexity theory based on Boolean algebra

  • Sven Skyum, L. Valiant
  • Computer Science
    22nd Annual Symposium on Foundations of Computer Science (sfcs 1981)
  • 1981
TLDR
It is shown that much of what is of everyday relevance in Turing-machine-based complexity theory can be replicated easily and naturally in this elementary framework.
