Resource-bounded Dimension in Computational Learning Theory
@article{Gavald2010ResourceboundedDI,
  title   = {Resource-bounded Dimension in Computational Learning Theory},
  author  = {Ricard Gavald{\`a} and Mar{\'i}a L{\'o}pez-Vald{\'e}s and Elvira Mayordomo and N. V. Vinodchandran},
  journal = {ArXiv},
  year    = {2010},
  volume  = {abs/1010.5470}
}
This paper focuses on the relation between computational learning theory and resource-bounded dimension. We intend to establish close connections between the learnability or nonlearnability of a concept class and its size in terms of effective dimension, which will allow the use of powerful dimension techniques in computational learning and, vice versa, the import of learning results into complexity theory via dimension. First, we obtain a tight result on the dimension of online mistake…
One Citation
Mutual dimension, data processing inequalities, and randomness
- Computer Science
- 2016
A framework for mutual dimension is developed, i.e., the density of algorithmic mutual information between two infinite objects, that has similar properties as those of classical Shannon mutual information.
References
Dimension, Halfspaces, and the Density of Hard Sets
- Mathematics, Computer Science
- Theory of Computing Systems
- 2010
We use the connection between resource-bounded dimension and the online mistake-bound model of learning to show that the following classes have polynomial-time dimension zero: 1. The class of problems…
Online Learning and Resource-Bounded Dimension: Winnow Yields New Lower Bounds for Hard Sets
- Computer Science
- SIAM J. Comput.
- 2005
A relationship is established between the online mistake-bound model of learning and resource-bounded dimension, and the Winnow algorithm is applied to obtain new results about the density of hard sets under adaptive reductions.
Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm
- Computer Science
- 28th Annual Symposium on Foundations of Computer Science (sfcs 1987)
- 1987
This work presents one such algorithm that learns disjunctive Boolean functions, along with variants for learning other classes of Boolean functions.
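The linear-threshold algorithm described in this entry is Littlestone's Winnow. A minimal sketch of its multiplicative-update scheme for monotone disjunctions follows; the function name, example format, and parameter defaults here are illustrative choices, not taken from the paper.

```python
def winnow(examples, n, threshold=None, alpha=2.0):
    """Run Winnow over a stream of (x, label) pairs, where x is a 0/1
    tuple of length n.  Returns the final weights and the mistake count.
    Winnow makes O(k log n) mistakes on a k-literal monotone disjunction."""
    theta = threshold if threshold is not None else n / 2
    w = [1.0] * n
    mistakes = 0
    for x, label in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != label:
            mistakes += 1
            if label == 1:
                # promotion: multiply weights of active attributes by alpha
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:
                # demotion: divide weights of active attributes by alpha
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes
```

For example, on four labeled examples of the disjunction x0 OR x2 over n = 4 attributes, the weights of the relevant attributes grow while an irrelevant attribute that caused a false positive is demoted.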
Equivalence of models for polynomial learnability
- Computer Science
- COLT '88
- 1988
Learnability and the Vapnik-Chervonenkis dimension
- Computer Science
- JACM
- 1989
This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
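The Vapnik-Chervonenkis dimension this entry refers to can be checked by brute force for small finite classes: it is the size of the largest set of points on which the class realizes all possible labelings. A sketch, with concepts represented simply as sets of points (this encoding is my own, for illustration):

```python
from itertools import combinations

def vc_dimension(points, concepts):
    """Return the largest d such that some d-subset of points is
    shattered by the concept class (every concept is a set of points)."""
    best = 0
    for d in range(1, len(points) + 1):
        for subset in combinations(points, d):
            # collect the distinct labelings the class induces on subset
            labelings = {tuple(p in c for p in subset) for c in concepts}
            if len(labelings) == 2 ** d:
                best = d
                break  # subset of size d is shattered; try d + 1
        else:
            return best  # no d-subset is shattered
    return best
```

For instance, threshold concepts on a line shatter single points but never pairs (the labeling "left out, right in" is unrealizable), so their VC dimension is 1, while intervals have VC dimension 2.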
Resource-Bounded Measure and Learnability
- Mathematics, Computer Science
- Proceedings. Thirteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat. No.98CB36247)
- 1998
A nonuniformly computable variant of resource-bounded measure is introduced, and it is shown that, for every fixed polynomial q, any polynomial-time learnable subclass of circuits of size q has measure zero with respect to P/poly.
Resource-bounded measure
- Mathematics, Computer Science
- Proceedings. Thirteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat. No.98CB36247)
- 1998
The theory presented here is based on resource-bounded martingale splitting operators, which are type-2 functionals, and the sets of ν-measure 0 or 1 in C are shown to be characterized by the success conditions for martingales (type-1 functions) that have been used in resource-bounded measure to date.
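The martingales underlying this entry are betting strategies d on binary sequences satisfying the fairness condition d(w) = (d(w0) + d(w1)) / 2; a martingale "succeeds" on a sequence if its capital is unbounded along it. A toy example, ignoring the resource bounds that the paper is actually about (the bias parameter and function name are illustrative):

```python
from fractions import Fraction

def martingale(w, bias=Fraction(3, 4)):
    """Capital after betting a fixed 'bias' fraction on each next bit
    being 0.  By construction d(w0) + d(w1) == 2 * d(w), the martingale
    fairness condition; capital grows exponentially on 0-heavy sequences."""
    d = Fraction(1)
    for b in w:
        d *= 2 * bias if b == 0 else 2 * (1 - bias)
    return d
```

On the all-zeros sequence this strategy multiplies its capital by 3/2 per bit, so it succeeds there, while on sequences with many 1s the capital decays.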
A theory of the learnable
- Computer Science
- STOC '84
- 1984
This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
Queries and concept learning
- Computer Science
- Machine Learning
- 1988
We consider the problem of using queries to learn an unknown concept. Several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness…
Learning decision lists
- Computer Science, Mathematics
- Machine Learning
- 2004
This paper introduces a new representation for Boolean functions, called decision lists, shows that they are efficiently learnable from examples, and thereby strictly increases the set of functions known to be polynomially learnable in the sense of Valiant (1984).
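The efficient learnability of decision lists rests on a greedy covering argument: repeatedly find a test on which all remaining examples agree in label, emit it as a rule, and discard the covered examples. A sketch for 1-decision lists (tests on single attributes); the encoding of tests as (index, value) pairs and the function names are my own:

```python
def learn_decision_list(examples, n):
    """Greedily learn a 1-decision list consistent with (x, label)
    examples, x being a 0/1 tuple of length n.  Raises if none exists."""
    # Candidate tests: (i, v) meaning "x[i] == v", plus a default (None).
    tests = [(i, v) for i in range(n) for v in (0, 1)] + [None]
    remaining = list(examples)
    rules = []
    while remaining:
        for t in tests:
            covered = [(x, y) for x, y in remaining
                       if t is None or x[t[0]] == t[1]]
            labels = {y for _, y in covered}
            if covered and len(labels) == 1:
                # every remaining example passing test t has the same label
                rules.append((t, labels.pop()))
                remaining = [e for e in remaining if e not in covered]
                break
        else:
            raise ValueError("no consistent 1-decision list exists")
    return rules

def predict(rules, x):
    """Evaluate the decision list: first matching test decides."""
    for t, b in rules:
        if t is None or x[t[0]] == t[1]:
            return b
    return 0  # unreachable if a default rule was learned
```

The learned list is consistent with the training sample by construction; Rivest's analysis shows the greedy loop always terminates when the sample is labeled by some decision list.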