Corpus ID: 9761640

Bootstrap Learning via Modular Concept Discovery

@inproceedings{Dechter2013BootstrapLV,
  title={Bootstrap Learning via Modular Concept Discovery},
  author={Eyal Dechter and Jonathan Malmaud and Ryan P. Adams and Joshua B. Tenenbaum},
  booktitle={IJCAI},
  year={2013}
}
Suppose a learner is faced with a domain of problems about which it knows nearly nothing. It does not know the distribution of problems, the space of solutions is not smooth, and the reward signal is uninformative, providing perhaps a few bits of information but not enough to steer the learner effectively. How can such a learner ever get off the ground? A common intuition is that if the solutions to these problems share a common structure, and the learner can solve some simple problems by brute…
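The bootstrap loop the abstract hints at (brute-force the easy problems, mine the solutions for recurring fragments, promote those fragments to new primitives, then search again) can be sketched in a few lines of Python. Everything below, including the arithmetic tasks, the primitive set, and the bigram heuristic, is a hypothetical toy illustration, not the paper's actual algorithm:

```python
from collections import Counter
from itertools import product

# Primitive operations; a "program" is a sequence of names applied left to right.
PRIMITIVES = {"inc": lambda x: x + 1, "double": lambda x: x * 2}

def run(program, x, prims):
    for name in program:
        x = prims[name](x)
    return x

def solve(tasks, prims, max_len=3):
    """Brute-force search: map each (input, output) task to its shortest solving program."""
    solutions = {}
    for length in range(1, max_len + 1):
        for prog in product(prims, repeat=length):
            for inp, out in tasks:
                if (inp, out) not in solutions and run(prog, inp, prims) == out:
                    solutions[(inp, out)] = prog
    return solutions

def most_common_bigram(solutions):
    """Find the fragment (adjacent pair of primitives) shared by the most solutions."""
    counts = Counter()
    for prog in solutions.values():
        counts.update(zip(prog, prog[1:]))
    return counts.most_common(1)[0][0] if counts else None

# Easy tasks solvable within the length budget.
tasks = [(1, 3), (2, 5), (3, 7)]
sols = solve(tasks, PRIMITIVES)
bigram = most_common_bigram(sols)   # the recurring fragment, here ("double", "inc")

# Promote the recurring fragment to a named concept and search again:
prims2 = dict(PRIMITIVES)
prims2["double_inc"] = lambda x: PRIMITIVES["inc"](PRIMITIVES["double"](x))
harder = [(1, 7), (2, 11)]          # out of reach at length 2 without the new concept
print(solve(harder, prims2, max_len=2))
```

With the new `double_inc` concept, the harder tasks fall within the same length-2 search budget that previously could not reach them, which is the sense in which discovered modules let the learner bootstrap.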
Learning as program induction
This workshop will cover new work that casts human learning as program induction — i.e. learning of programs from data. The notion that the mind approximates rational (Bayesian) inference has had a…
Concept Learning for Regular Expressions
Evolutionary algorithms aim to harness ideas from biological evolution in order to explore search spaces that may be hard to understand at a glance. Instead of viewing search spaces from an…
Automatically Composing Representation Transformations as a Means for Generalization
The compositional problem graph is introduced as a broadly applicable formalism to relate tasks of different complexity in terms of problems with shared subproblems, along with the compositional recursive learner, a domain-general framework for learning algorithmic procedures for composing representation transformations.
Learning Graphical Concepts
How can machine learning techniques be used to solve problems whose solutions are best represented as computer programs? For example, suppose a researcher wants to design a probabilistic graphical…
Interactive learning of cognitive programs
Humans can identify and replicate high-level visual concepts easily from a few pairs of images. They can even learn from a single example, especially if they are able to ask a few clarifying…
Knowledge transfer in a probabilistic Language Of Thought
It is shown that participants’ ability to learn the second sequence is affected by the first sequence they saw, and two probabilistic models are tested to evaluate alternative theories of how algorithmic knowledge is transferred from the first sequence to the second.
Using the language of thought
In this thesis, I develop and explore two novel models of how humans might be able to acquire high-level conceptual knowledge by performing probabilistic inference over a language of thought (Fodor…
Learning abstract visual concepts via probabilistic program induction in a Language of Thought
This work formalizes the Hierarchical Language of Thought model of rule learning, and demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization.
Intelligence, physics and information - the tradeoff between accuracy and simplicity in machine learning
This thesis addresses several key questions in some aspects of intelligence and studies the phase transitions in the two-term tradeoff, using strategies and tools from physics and information.
Playgol: learning programs through play
The idea of playing (or, more verbosely, unsupervised bootstrapping for supervised program induction) is an important contribution to the problem of developing program induction approaches that self-discover BK.

References

SHOWING 1-10 OF 29 REFERENCES
Repeat Learning Using Predicate Invention
A new predicate invention mechanism implemented in Progol4.4 is used in repeat learning experiments within a chess domain, and the results indicate that significant performance increases can be achieved.
Substantial Constructive Induction Using Layered Information Compression: Tractable Feature Formation in Search
This paper addresses a problem of induction (generalization learning) which is more difficult than any comparable work in AI, and achieves considerable generality with superior noise management and low computational complexity.
Duce, An Oracle-based Approach to Constructive Induction
Duce is illustrated by way of its construction of a simple animal taxonomy and a hierarchical parity checker, which is compared to the structure interactively created by Duce.
Learning Programs: A Hierarchical Bayesian Approach
A nonparametric hierarchical Bayesian prior over programs which shares statistical strength across multiple tasks is introduced, and an MCMC algorithm is provided that can perform safe program transformations on this representation to reveal shared inter-program substructures.
Grammatical complexity and inference
The problem of inferring a grammar for a set of symbol strings is considered and a number of new decidability results obtained. Several notions of grammatical complexity and their properties are…
Identifying Hierarchical Structure in Sequences: A linear-time algorithm
SEQUITUR is an algorithm that infers a hierarchical structure from a sequence of discrete symbols by replacing repeated phrases with a grammatical rule that generates the phrase, and continuing this…
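The digram-replacement idea behind SEQUITUR can be illustrated with a simplified offline sketch; this greedy pass-based variant is closer in spirit to Re-Pair than to SEQUITUR's incremental linear-time algorithm, and the function and rule names are made up for illustration:

```python
from collections import Counter

def grammar_induce(seq):
    """Repeatedly replace the most frequent repeated adjacent pair with a
    fresh rule symbol, until no pair occurs twice (a simplified, offline
    take on SEQUITUR-style grammar induction)."""
    rules = {}
    next_id = 0
    seq = list(seq)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        pair, count = pairs.most_common(1)[0] if pairs else (None, 0)
        if count < 2:
            return seq, rules          # start symbol sequence + rule table
        sym = f"R{next_id}"
        next_id += 1
        rules[sym] = pair
        out, i = [], 0
        while i < len(seq):            # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out

start, rules = grammar_induce("abcabcabc")
print(start, rules)
```

Expanding every rule in `rules` back out from `start` reproduces the input string exactly, which is the defining property of such grammar-based compressors.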
Hierarchical Automatic Function Definition in Genetic Programming
Two extensions to genetic programming are described, called “automatic” function definition and “hierarchical automatic” function definition, wherein functions that might be useful in solving a problem are automatically and dynamically defined during a run in terms of dummy variables.
Functional Genetic Programming with Combinators
Prior program representations for genetic programming that incorporated features of modern programming languages solved harder problems than earlier representations, but required more complex genetic…
Principles of Artificial Intelligence
  • N. Nilsson
  • Computer Science
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 1981
This classic introduction to artificial intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval.
Genetic programming - on the programming of computers by means of natural selection
  • J. Koza
  • Computer Science
  • Complex adaptive systems
  • 1993
This book discusses the evolution of architecture, primitive functions, terminals, sufficiency, and closure, and the role of representation and the lens effect in genetic programming.