On Exploiting Knowledge and Concept Use in Learning Theory

Abstract

In the past fifteen years, various formal models of concept learning have successfully been employed to answer the question of what types of concepts can be efficiently inferred from examples. The answer appears to be "only simple ones". Perhaps due to the ease of formal analysis, our investigations have focused on learning artificial, syntactically described concepts in "sterile", knowledge-free environments. We discuss analogous results from the literature on human concept learning (people don't do too well either), and review current theories as to how people are able to learn more effectively in the presence of background knowledge, and through information discovered by executing tasks related to the concept-acquisition process. We consider the formal modeling of such phenomena an important challenge for learning theory.

The learning theory community has had no dearth of problems to address, in part because of the interdisciplinary nature of the field. The call for papers for this conference specifically invited submissions dealing with any of a very wide range of topics: from artificial and biological neural nets, to case-based learning; from inductive inference, to Bayesian estimation; from computational logic, to data mining, among others. Notably absent from the list (and from similar lists for related conferences) are papers dealing with the interaction between algorithmic learning theory and the science of human (cognitive) learning. Does learning theory concern itself only with the discovery of models, capabilities, and limitations of machine learning, and the underlying mathematical principles, or is the modeling of human learning also a concern? While it is unlikely that there is any consensus on this issue, it would seem unwise for even those interested only in the former to completely ignore the latter, as humans are arguably the best learning agents yet invented.
The study of artificial neural networks provides insight into a primitive model of the hardware of our best-known biological learning agent. Our community has both made contributions to, and taken inspiration from, this area (e.g., the work of Baum and Haussler (1989) giving sample complexity bounds for valid generalization in neural nets, and the paper by Maass in this volume, which considers the relevance of time in neural computations and learning). While neural networks can be used to model some cognitive functions, the field of cognitive science deals with much broader issues such as those of cognition,

DOI: 10.1007/3-540-63577-7_36

Cite this paper

@inproceedings{Pitt1997OnEK,
  title     = {On Exploiting Knowledge and Concept Use in Learning Theory},
  author    = {Leonard Pitt},
  booktitle = {ALT},
  year      = {1997}
}