Quantifying Inductive Bias: AI Learning Algorithms and Valiant's Learning Framework

@article{Haussler1988QuantifyingIB,
  title={Quantifying Inductive Bias: AI Learning Algorithms and Valiant's Learning Framework},
  author={David Haussler},
  journal={Artif. Intell.},
  year={1988},
  volume={36},
  pages={177-221}
}
  • D. Haussler
  • Published 1988
  • Mathematics, Computer Science
  • Artif. Intell.
Abstract: We show that the notion of inductive bias in concept learning can be quantified in a way that directly relates to learning performance in the framework recently introduced by Valiant. Our measure of bias is based on the growth function introduced by Vapnik and Chervonenkis, and on the Vapnik-Chervonenkis dimension. We measure some common language biases, including restriction to conjunctive concepts, conjunctive concepts with internal disjunction, k-DNF and k-CNF concepts. We also…
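The abstract's central claim, that bias can be quantified so that it directly bounds learning performance, is usually made concrete through a VC-dimension sample-size bound. Below is a minimal Python sketch of that relationship; it uses the classic bound form with constants in the style of Blumer et al., and the function name and constants are illustrative assumptions, not taken from the paper.

```python
import math

def pac_sample_bound(vc_dim: int, epsilon: float, delta: float) -> int:
    """Sufficient sample size for PAC learning a hypothesis class of
    VC dimension `vc_dim` to error `epsilon` with confidence 1 - delta.
    Uses the classic (4/eps) * (d*log2(13/eps) + log2(2/delta)) form;
    bounds on specific language biases plug in via `vc_dim`."""
    m = (4.0 / epsilon) * (vc_dim * math.log2(13.0 / epsilon)
                           + math.log2(2.0 / delta))
    return math.ceil(m)

# Stronger bias -> smaller VC dimension -> fewer examples required.
# Pure conjunctions over n Boolean attributes have VC dimension Theta(n),
# so their sample cost grows only linearly in n (up to log factors):
print(pac_sample_bound(vc_dim=20, epsilon=0.1, delta=0.05))
```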
Robust k-DNF Learning via Inductive Belief Merging
TLDR
The paradigm of inductive belief merging is introduced, which handles the issue of inconsistency within a uniform framework, and a greedy algorithm is developed that approximates the optimal concept to within a logarithmic factor.
Learning hard concepts through constructive induction: framework and rationale
TLDR
This work argues for a specific approach to constructive induction that reduces variation by incorporating various kinds of domain knowledge, i.e., transformations that group together non-contiguous portions of feature space having similar class-membership values.
Extending the Valiant framework to detect incorrect bias
TLDR
It is shown how to convert an existing PAC-learning algorithm into one with reliable bias-evaluation: if the assumption that the target concept belongs to a given concept class holds, then the output of the learning algorithm is (1 − δ)-reliable.
Learning conjunctive concepts in structural domains
TLDR
This class of concepts is formally defined, and it is shown that for any fixed bound on the number of objects per scene, this class is polynomially learnable if, in addition to providing random examples, the learning algorithm is allowed to make subset queries.
Quantifying the Value of Constructive Induction, Knowledge, and Noise Filtering on Inductive Learning
  • C. Kadie
  • Mathematics, Computer Science
  • ML
  • 1991
TLDR
The effective dimension is defined: a new learning measure that empirically links problem properties to learning performance and is more widely applicable to machine- and human-learning research.
Learning Conjunctive Concepts in Structural Domains
TLDR
It is shown that heuristic methods for learning from larger scenes are likely to give an accurate hypothesis if they produce a simple hypothesis consistent with a large enough random sample, and that this class of concepts is polynomially learnable from random examples in the sense of Valiant.
Learning DNF Via Probabilistic Evidence Combination
TLDR
A learning algorithm is presented whose central idea is to model as representational noise the uncertainty as to whether a positive example should be treated as positive for a particular disjunct, in addition to whatever other noise may be imposed on the data by the environment.
How to Shift Bias: Lessons from the Baldwin Effect
An inductive learning algorithm takes a set of data as input and generates a hypothesis as output. A set of data is typically consistent with an infinite number of hypotheses; therefore, there must…
Partial Occam's Razor and Its Applications
TLDR
This work obtains a non-proper PAC learning algorithm for k-DNF with sample complexity similar to Littlestone's Winnow, but which produces a hypothesis of size polynomial in d and log k for a k-DNF target with n variables and d terms, and demonstrates with examples that this approach can in particular improve the hypothesis size.
Quantifying prior determination knowledge using the PAC learning model
TLDR
This paper demonstrates that PAC learning can be used to analyze semantic bias, such as a domain theory about the concept being learned, and presents an analysis of determinations, a type of relevance knowledge.

References

Showing 1-10 of 35 references
Learning Conjunctive Concepts in Structural Domains
TLDR
It is shown that heuristic methods for learning from larger scenes are likely to give an accurate hypothesis if they produce a simple hypothesis consistent with a large enough random sample, and that this class of concepts is polynomially learnable from random examples in the sense of Valiant.
Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm
  • N. Littlestone
  • Mathematics
  • 28th Annual Symposium on Foundations of Computer Science (sfcs 1987)
  • 1987
Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in…
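Since the entry above is truncated, a brief sketch may help: Winnow keeps one weight per Boolean attribute and updates multiplicatively, and only on mistakes, which is what yields a mistake bound of O(k log n) for k-literal monotone disjunctions and makes irrelevant attributes cheap. The following is a hedged reconstruction of the textbook Winnow2 variant, not code from Littlestone's paper; the class and parameter names are illustrative.

```python
class Winnow:
    """Winnow2-style online linear-threshold learner (after Littlestone,
    1987).  Weights change multiplicatively, so the mistake bound scales
    with log n rather than n."""

    def __init__(self, n: int, alpha: float = 2.0):
        self.w = [1.0] * n        # one positive weight per attribute
        self.theta = float(n)     # fixed threshold
        self.alpha = alpha        # promotion/demotion factor

    def predict(self, x: list[int]) -> int:
        total = sum(wi * xi for wi, xi in zip(self.w, x))
        return int(total >= self.theta)

    def update(self, x: list[int], y: int) -> int:
        """Process one example online; adjust weights only on a mistake."""
        y_hat = self.predict(x)
        if y_hat == 0 and y == 1:    # false negative: promote active weights
            self.w = [wi * self.alpha if xi else wi
                      for wi, xi in zip(self.w, x)]
        elif y_hat == 1 and y == 0:  # false positive: demote active weights
            self.w = [wi / self.alpha if xi else wi
                      for wi, xi in zip(self.w, x)]
        return y_hat

# Usage sketch: target concept x1 OR x3 over n = 6 attributes.
learner = Winnow(n=6)
for x, y in [([1, 0, 0, 0, 0, 0], 1), ([0, 0, 1, 0, 0, 0], 1),
             ([0, 1, 0, 1, 1, 1], 0)]:
    learner.update(x, y)
```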
Shift of bias for inductive concept learning
TLDR
It is shown that the search for an appropriate bias is itself a major part of the learning task, and that mechanical procedures for conducting a well-directed search for an appropriate bias can be created.
An Analytical Comparison of Some Rule-Learning Programs
TLDR
This work compares the rule-learning programs of Brazdil, Langley, Mitchell, Mitchell et al., Shapiro, Waterman, and Quinlan with the concept-learning programs of Quinlan and Young et al.
A theory of the learnable
  • L. Valiant
  • Mathematics, Computer Science
  • STOC '84
  • 1984
TLDR
This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
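Among Valiant's results is that pure conjunctive concepts are learnable from positive examples alone by eliminating literals, the same bias class the Haussler paper measures. The sketch below illustrates that core idea in Python under the usual textbook presentation; it is a simplified illustration, not the paper's exact procedure.

```python
def learn_conjunction(positives: list[list[bool]], n: int):
    """Elimination algorithm for pure conjunctive concepts over n Boolean
    attributes: start with all 2n candidate literals and drop any literal
    falsified by a positive example.  The survivors form the most specific
    conjunction consistent with the sample."""
    pos = set(range(n))   # candidate literals  x_i
    neg = set(range(n))   # candidate literals ~x_i
    for x in positives:
        pos = {i for i in pos if x[i]}        # keep x_i only if true here
        neg = {i for i in neg if not x[i]}    # keep ~x_i only if false here

    def hypothesis(x: list[bool]) -> bool:
        return all(x[i] for i in pos) and all(not x[i] for i in neg)

    return hypothesis

# Positive examples of the (hypothetical) target  x0 AND ~x2:
h = learn_conjunction([[True, False, False], [True, True, False]], n=3)
print(h([True, True, False]), h([False, True, False]))   # True False
```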
Learning decision trees from random examples
Abstract: We define the rank of a decision tree and show that for any fixed r, the class of all decision trees of rank at most r on n Boolean variables is learnable from random examples in time…
The Logic of Learning: A Basis for Pattern Recognition and for Improvement of Performance
TLDR
This chapter discusses the logic of learning and defines the phenomenon of pattern recognition, concluding that the study of learning has been directed to specific tasks and accordingly many basic problems have been clarified.
A Comparative Review of Selected Methods for Learning from Examples
TLDR
Methods for finding the maximally-specific conjunctive generalizations (MSC-generalizations) that cover all of the training examples of a given concept are examined.
Learning in the presence of malicious errors
TLDR
A practical extension of the Valiant model of machine learning from examples is studied in which the sample data may contain errors, possibly maliciously generated by an adversary, rather than assuming an error-free oracle for examples of the function being learned.
Two New Frameworks for Learning
TLDR
Two new formal frameworks for learning are presented, one exploring learning in the sense of improving computational efficiency as opposed to acquiring an unknown concept or function, and the other exploring the acquisition of heuristics over problem domains of special structure.