# A theory of the learnable

@inproceedings{Valiant1984ATO,
title={A theory of the learnable},
author={Leslie G. Valiant},
booktitle={STOC '84},
year={1984}
}
• L. Valiant
• Published in STOC '84, 5 November 1984
• Computer Science
Humans appear to be able to learn new concepts without needing to be programmed explicitly in any conventional sense. […] Key result: the methodology and results suggest concrete principles for designing realistic learning systems.
4,642 Citations
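As a toy illustration of the PAC setting the paper introduces (a sketch, not an algorithm from the paper; the threshold concept class, sample sizes, and seed below are chosen purely for this example), a learner can approximate a threshold concept on [0, 1] from random labeled examples and have its error shrink with the sample size:

```python
import random

def pac_learn_threshold(target_t, n_samples, rng):
    """Learn the concept c(x) = (x >= target_t) on [0, 1] from random
    labeled examples, by returning the smallest positive example seen."""
    samples = [(x, x >= target_t) for x in (rng.random() for _ in range(n_samples))]
    positives = [x for x, label in samples if label]
    # The hypothesis can only err on [target_t, t_hat): one-sided error
    # that shrinks as more random examples are drawn.
    return min(positives, default=1.0)

rng = random.Random(0)
t_hat = pac_learn_threshold(0.3, 1000, rng)
# Empirical error of the hypothesis under the same uniform distribution.
err = sum((x >= 0.3) != (x >= t_hat)
          for x in (rng.random() for _ in range(10000))) / 10000
```

With 1000 examples the hypothesis lands just above the true threshold 0.3, so the measured error is small — the "probably approximately correct" guarantee in miniature.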

### PAC-learning is Undecidable

• Computer Science
ArXiv
• 2018
It is proved that testing for PAC-learnability is undecidable in the Turing sense, exposing a fundamental limitation in the decidability of learning.

### Oblivious PAC Learning of Concept Hierarchies

An extension of the Probably Approximately Correct (PAC) learning model is introduced to study the problem of learning inclusion hierarchies of concepts (sometimes called is-a hierarchies) from random examples with the property that each run is oblivious of all other runs.

### On Exploiting Knowledge and Concept Use in Learning Theory

This work discusses analogous results from the literature on human concept learning, and reviews current theories as to how people are able to more effectively learn in the presence of background knowledge and the discovery of information via execution of tasks related to the concept acquisition process.

### Teaching with IMPACT

• Computer Science
Developing Expert Learners: A Roadmap for Growing Confident and Competent Students
• 2019
A new learning framework that provides a role for a knowledgeable, benevolent teacher to guide the process of learning a target concept in a series of "curricular" phases or rounds is proposed, enabling simple, efficient learners to acquire very complex concepts from examples.

### On Learnability with Computable Learners

• Computer Science
• 2020
The notion of CPAC learnability is proposed, by adding some basic computability requirements into a PAC learning framework, and it is shown that in this framework learnability of a binary hypothesis class is not implied by finiteness of its VC-dimension anymore.

### Learning From a Monotonous, Ignorant Teacher

A class of functions is described where a computer can efficiently learn when it is only permitted to pose membership queries, which has interesting applications in graph theory and data mining.

### Learning fallible finite state automata

• Computer Science
COLT '93
• 1993
A polynomial time algorithm using membership queries for correcting and learning fallible DFA’s under the uniform distribution is presented.

## References

SHOWING 1-10 OF 12 REFERENCES

### Deductive learning

• L. Valiant
• Education
Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences
• 1984
A non-technical discussion of a new approach to the problem of concept learning in the context of artificial devices is given. Learning is viewed as a process of acquiring a program for recognizing a concept.

### The complexity of theorem-proving procedures

• S. Cook
• Mathematics, Computer Science
STOC
• 1971
It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be "reduced" to the problem of determining whether a given propositional formula is a tautology.

### Inductive Inference: Theory and Methods

• Computer Science
CSUR
• 1983
This survey highlights and explains the main ideas that have been developed in the study of inductive inference, with special emphasis on the relations between the general theory and the specific algorithms and implementations.

### Machine learning - an artificial intelligence approach

• Computer Science
Symbolic computation
• 1984
This book contains tutorial overviews and research papers on contemporary trends in the area of machine learning viewed from an AI perspective.

### The Handbook of Artificial Intelligence

• Art
• 1982
This volume contains a collection of articles by acclaimed experts representing the leading edge of knowledge about the field of AI.

### How to construct random functions

• Computer Science, Mathematics
JACM
• 1986
A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented that has applications in cryptography, random constructions, and complexity theory.
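The pseudorandom function generator in this reference is the GGM construction: a function family built from any length-doubling generator by descending a binary tree, taking the left or right half of the generator's output at each input bit. A minimal sketch (SHA-256 is used here only as a stand-in for a genuine pseudorandom generator; it is not part of the paper):

```python
import hashlib

def prg(seed: bytes) -> tuple:
    """Stand-in length-doubling generator: one 32-byte seed expands to
    two 32-byte halves. (SHA-256 is only a placeholder PRG here.)"""
    left = hashlib.sha256(b"L" + seed).digest()
    right = hashlib.sha256(b"R" + seed).digest()
    return left, right

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM-style evaluation of F_key(x) for a bit-string x: walk the
    binary tree rooted at the key, branching left on '0', right on '1'."""
    node = key
    for bit in x:
        left, right = prg(node)
        node = left if bit == "0" else right
    return node

out = ggm_prf(b"\x00" * 32, "0110")
```

Evaluation costs one generator call per input bit, and distinct inputs land at distinct leaves of the tree, which is what makes the outputs look independent.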

### Probabilistic Methods in Combinatorics

In 1947 Paul Erdős [8] began what is now called the probabilistic method. He showed that if \(\binom{n}{k}2^{1-\binom{k}{2}} < 1\), then the edges of the complete graph \(K_n\) can be 2-colored so that no \(K_k\) is monochromatic, and hence the Ramsey number \(R(k,k)\) exceeds \(n\).
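The Erdős bound is easy to check numerically (a sketch; the case k = 5 below is chosen only for illustration): whenever the quantity is below 1, a uniformly random 2-coloring of K_n avoids a monochromatic K_k with positive probability, so such a coloring exists.

```python
from math import comb

def erdos_bound_holds(n, k):
    """Erdos (1947): if C(n, k) * 2**(1 - C(k, 2)) < 1, some 2-coloring
    of the edges of K_n has no monochromatic K_k, so R(k, k) > n."""
    return comb(n, k) * 2 ** (1 - comb(k, 2)) < 1

# Largest n for which the bound certifies R(5, 5) > n:
# C(n, 5) must stay below 2**9 = 512, which holds up to n = 11.
best_n = max(n for n in range(5, 200) if erdos_bound_holds(n, 5))
```

The argument only proves existence; it gives no efficient way to construct the coloring, which is the hallmark of the probabilistic method.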

### Pattern classification and scene analysis

• Computer Science
A Wiley-Interscience publication
• 1973
The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

### A complexity theory based on Boolean algebra

• Computer Science
22nd Annual Symposium on Foundations of Computer Science (sfcs 1981)
• 1981
It is shown that much of what is of everyday relevance in Turing-machine-based complexity theory can be replicated easily and naturally in this elementary framework.
