How to Grow a Mind: Statistics, Structure, and Abstraction

@article{Tenenbaum2011HowTG,
  title={How to Grow a Mind: Statistics, Structure, and Abstraction},
  author={Joshua B. Tenenbaum and Charles Kemp and Thomas L. Griffiths and Noah D. Goodman},
  journal={Science},
  year={2011},
  volume={331},
  pages={1279--1285}
}
In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address… 
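To make the review's central idea concrete, the sketch below illustrates probabilistic inference at two levels of abstraction: an "overhypothesis" about how uniform bags of marbles tend to be is learned from a few observed bags, and then drives strong generalization from a single draw out of a new bag. The scenario, the candidate concentration values, and the counts are all invented for illustration; this is not code from the paper.

import math

K = 3  # number of marble colors
alphas = [0.05, 0.1, 0.5, 1.0, 5.0]  # candidate Dirichlet concentrations (the abstract level)

def log_marginal(counts, alpha):
    # Dirichlet-multinomial marginal likelihood of one bag's color counts,
    # up to a constant that does not depend on alpha.
    n = sum(counts)
    out = math.lgamma(K * alpha) - math.lgamma(K * alpha + n)
    for c in counts:
        out += math.lgamma(alpha + c) - math.lgamma(alpha)
    return out

# Observed bags are nearly pure in color: evidence for a "bags are uniform" overhypothesis.
bags = [[9, 1, 0], [0, 10, 0], [8, 0, 2]]

# Posterior over alpha, assuming a uniform prior on the grid of candidates.
log_post = [sum(log_marginal(b, a) for b in bags) for a in alphas]
m = max(log_post)
post = [math.exp(lp - m) for lp in log_post]
z = sum(post)
post = [p / z for p in post]

# Generalization from one draw out of a brand-new bag: probability that the next
# marble matches it, averaged over the posterior on alpha.
p_same = sum(p * (a + 1) / (K * a + 1) for p, a in zip(post, alphas))
print({a: round(p, 3) for a, p in zip(alphas, post)})
print("P(next marble from a new bag matches the one seen):", round(p_same, 3))

Small concentration values (near-deterministic bags) dominate the posterior, so a single example from a new bag already licenses confident generalization, the kind of learning-to-learn effect the review emphasizes.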
Predicate learning in neural systems
TLDR
This work describes one way that structured, functionally-symbolic representations can be instantiated in an artificial neural network, and describes how such latent structures (viz., predicates) can be learned from experience with unstructured data.
Predicate learning in neural systems: using oscillations to discover latent structure
A Hierarchical Probabilistic Language-of-Thought Model of Human Visual Concept Learning
TLDR
A hierarchical model is described in which the rules are stochastic, generative processes and the rules themselves arise from a higher-level stochastic, generative process, yielding a probabilistic language-of-thought model.
Predicate learning in neural systems: Discovering latent generative structures
TLDR
The ability to learn predicates from experience, to represent structures compositionally, and to extrapolate to unseen data offers an inroad to understanding and modeling the most complex human behaviors.
Learning physics from dynamical scenes
TLDR
This work introduces a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction, and works with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time.
Building machines that learn and think like people
TLDR
It is argued that truly human-like learning and thinking machines should build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems, and harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations.
Toward the neural implementation of structure learning
Holistic Reinforcement Learning: The Role of Structure and Attention
Problem Solving as Probabilistic Inference with Subgoaling: Explaining Human Successes and Pitfalls in the Tower of Hanoi
TLDR
This study suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits.
Structure and Flexibility in Bayesian Models of Cognition
Probability theory forms a natural framework for explaining the impressive success of people at solving many difficult inductive problems, such as learning words and categories, inferring the…
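As a concrete instance of the inductive problems mentioned in the entry above (learning words and categories from sparse examples), here is a small, hypothetical sketch of Bayesian concept learning over structured hypotheses with a size-principle likelihood. The hypothesis space, prior, and data are invented for the example and are not taken from any of the papers listed.

# Structured hypotheses: each concept is a set of numbers with a rule-like description.
hypotheses = {
    "even numbers":    {n for n in range(1, 101) if n % 2 == 0},
    "multiples of 10": {n for n in range(1, 101) if n % 10 == 0},
    "powers of 2":     {2 ** k for k in range(1, 7)},  # 2 .. 64
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}
data = [16, 8, 2]

def likelihood(examples, extension):
    # Size principle: each example is sampled uniformly from the concept's extension.
    if any(x not in extension for x in examples):
        return 0.0
    return (1.0 / len(extension)) ** len(examples)

posterior = {h: prior[h] * likelihood(data, ext) for h, ext in hypotheses.items()}
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}
print(posterior)  # "powers of 2" dominates: the smallest hypothesis consistent with the data

The smallest consistent hypothesis wins because the size-principle likelihood penalizes large extensions; a structured prior over rules trades off against it, which is the interaction between structure and statistics these models examine.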

References

SHOWING 1-10 OF 105 REFERENCES
Learning a theory of causality.
TLDR
It is suggested that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports, a conclusion that has implications for cognitive development.
Probabilistic inference in human semantic memory
Structured statistical models of inductive reasoning.
TLDR
A Bayesian framework is presented that shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.
Optimal Predictions in Everyday Cognition
TLDR
This work examined the optimality of human cognition in a more realistic context than typical laboratory studies, asking people to make predictions about the duration or extent of everyday phenomena such as human life spans and the box-office take of movies. (A minimal worked sketch of this prediction rule, under an assumed prior, follows the reference list.)
A theory of causal learning in children: causal maps and Bayes nets.
TLDR
Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
...
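As flagged above, the following is a minimal, hypothetical sketch of the kind of prediction rule studied in "Optimal Predictions in Everyday Cognition": given that a quantity has reached t so far, assume t was sampled uniformly from [0, t_total], combine that likelihood with a prior over totals, and report the posterior median. The heavy-tailed prior and the numbers are assumptions made for the example, not the paper's data.

t_obs = 30.0  # quantity observed so far (arbitrary units)

totals = [float(x) for x in range(1, 1001)]  # discretized support for t_total
prior = [x ** -1.5 for x in totals]          # assumed heavy-tailed prior over totals

# Likelihood of observing t_obs if the true total is x: uniform on [0, x].
posterior = [p * (1.0 / x if x >= t_obs else 0.0) for p, x in zip(prior, totals)]
z = sum(posterior)
posterior = [p / z for p in posterior]

# Report the posterior median as the point prediction.
cumulative = 0.0
for x, p in zip(totals, posterior):
    cumulative += p
    if cumulative >= 0.5:
        print("predicted total:", x)  # about 1.6x the observed value under this prior
        break

In the paper itself, the priors are estimated from real-world statistics of each domain rather than the assumed power law used here, and the model's predictions are compared against people's judgments.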