How to Grow a Mind: Statistics, Structure, and Abstraction
@article{Tenenbaum2011HowTG, title={How to Grow a Mind: Statistics, Structure, and Abstraction}, author={Joshua B. Tenenbaum and Charles Kemp and Thomas L. Griffiths and Noah D. Goodman}, journal={Science}, year={2011}, volume={331}, pages={1279--1285} }
In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address…
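The core idea named in the abstract, probabilistic inference over hierarchies of structured representations, can be made concrete with a small hierarchical Bayesian sketch. The example below is illustrative only (the marble-bag "overhypothesis" setup, the grid inference, and all parameter names are assumptions, not code from the paper): a learner infers a higher-level parameter governing how uniform in color bags of marbles tend to be, and then generalizes about a brand-new bag from a single draw.

```python
import numpy as np
from scipy import stats

# Level 2: concentration alpha controls how uniform in color each bag is.
# Level 1: each bag i has a color bias theta_i ~ Beta(alpha, alpha).
# Data:    black_counts[i] black marbles out of draws_per_bag draws from bag i.
draws_per_bag = 10
black_counts = np.array([10, 0, 9, 1, 10])   # bags look internally uniform

# Grid inference over alpha (flat prior on the log-spaced grid).
alphas = np.logspace(-2, 2, 50)
log_post = np.array([
    stats.betabinom.logpmf(black_counts, draws_per_bag, a, a).sum()
    for a in alphas
])
post = np.exp(log_post - log_post.max())
post /= post.sum()
alpha_hat = alphas[np.argmax(post)]
print(f"MAP alpha ~= {alpha_hat:.3f}  (small alpha => bags are near-uniform)")

# The learned abstraction in action: one draw from a new bag already supports
# a strong prediction about the rest of that bag.
p_next_black = (1 + alpha_hat) / (1 + 2 * alpha_hat)
print(f"P(next marble black | one black seen from a new bag) ~= {p_next_black:.2f}")
```

The point mirrored from the review is that the higher-level knowledge is itself learned from data, and that once learned it licenses strong generalizations from very sparse evidence.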
1,339 Citations
Predicate learning in neural systems: using oscillations to discover latent structure
- Computer Science, Biology · Current Opinion in Behavioral Sciences
- 2019
This work describes one way that structured, functionally symbolic representations can be instantiated in an artificial neural network, and how such latent structures (viz., predicates) can be learned from experience with unstructured data.
A Hierarchical Probabilistic Language-of-Thought Model of Human Visual Concept Learning
- Computer Science · CogSci
- 2016
A hierarchical model is described in which the rules are stochastic, generative processes, and the rules themselves arise from a higher-level stochastic generative process, yielding a probabilistic language-of-thought model.
Predicate learning in neural systems: Discovering latent generative structures
- Computer Science, Biology · arXiv
- 2018
The ability to learn predicates from experience, to represent structures compositionally, and to extrapolate to unseen data offers an inroad to understanding and modeling the most complex human behaviors.
Learning physics from dynamical scenes
- Computer Science
- 2014
This work introduces a hierarchical Bayesian framework to explain how people learn physical theories across multiple timescales and levels of abstraction, working with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time.
Building machines that learn and think like people
- Computer Science · Behavioral and Brain Sciences
- 2016
It is argued that truly human-like learning and thinking machines should build causal models of the world that support explanation and understanding, rather than merely solve pattern recognition problems, and should harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations.
Toward the neural implementation of structure learning
- Computer Science, Psychology · Current Opinion in Neurobiology
- 2016
Holistic Reinforcement Learning: The Role of Structure and Attention
- Psychology, Computer Science · Trends in Cognitive Sciences
- 2019
Problem Solving as Probabilistic Inference with Subgoaling: Explaining Human Successes and Pitfalls in the Tower of Hanoi
- Computer Science · PLoS Computational Biology
- 2016
This study suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits.
Structure and Flexibility in Bayesian Models of Cognition
- Computer Science
- 2015
Probability theory forms a natural framework for explaining the impressive success of people at solving many difficult inductive problems, such as learning words and categories, inferring the…
References
Showing 1-10 of 105 references
Letting structure emerge: connectionist and dynamical systems approaches to cognition
- Psychology · Trends in Cognitive Sciences
- 2010
Probabilistic models of cognition: exploring representations and inductive biases
- Psychology, Biology · Trends in Cognitive Sciences
- 2010
Learning a theory of causality.
- Philosophy · Psychological Review
- 2011
It is suggested that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports, a conclusion with implications for cognitive development.
Probabilistic inference in human semantic memory
- Biology, Psychology · Trends in Cognitive Sciences
- 2006
Structured statistical models of inductive reasoning.
- Philosophy · Psychological Review
- 2009
A Bayesian framework is presented that shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.
Statistically optimal perception and learning: from behavior to neural representations
- Biology, Computer Science · Trends in Cognitive Sciences
- 2010
Optimal Predictions in Everyday Cognition
- Psychology · Psychological Science
- 2006
This work examined the optimality of human cognition in a more realistic context than typical laboratory studies, asking people to make predictions about the duration or extent of everyday phenomena such as human life spans and the box-office take of movies.
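The prediction task studied in this reference lends itself to a compact worked example. The sketch below is a hedged illustration (the priors and numbers are assumptions for illustration, not the fitted priors from the paper): given that a phenomenon has lasted t so far, report the posterior median of its total duration, assuming t was sampled uniformly from the phenomenon's full span.

```python
import numpy as np

def predict_total(t_observed, prior_pdf, t_max=10_000.0, n=200_000):
    """Posterior median of a phenomenon's total duration t_total, given that it
    has lasted t_observed so far and that t_observed was sampled uniformly
    from [0, t_total] (so the likelihood is 1 / t_total)."""
    grid = np.linspace(t_observed, t_max, n)
    post = prior_pdf(grid) / grid           # prior * likelihood (unnormalized)
    post /= post.sum()
    cdf = np.cumsum(post)
    return grid[np.searchsorted(cdf, 0.5)]

# Power-law prior, plausible for quantities like movie grosses: p(t) ~ t^-gamma.
powerlaw = lambda t: t ** -2.4
# Roughly Gaussian prior, plausible for human life spans (mean 75, sd 16 years).
gaussian = lambda t: np.exp(-0.5 * ((t - 75.0) / 16.0) ** 2)

print(predict_total(30, powerlaw))   # ~ a fixed multiple of what has been seen
print(predict_total(30, gaussian))   # ~ the prior mean, almost regardless of t
```

The qualitative contrast is the interesting part: under a power-law prior the prediction scales with what has been observed, while under a roughly Gaussian prior it hugs the prior mean.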
A theory of causal learning in children: causal maps and Bayes nets.
- Computer Science, Psychology · Psychological Review
- 2004
Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
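The Bayes-net view of children's causal learning can be illustrated with a minimal model-comparison sketch in the style of a "blicket detector" experiment. Everything here is hypothetical (the trial data, noisy-OR parameters, and graph names are assumptions, not the authors' model): candidate causal maps are scored by how well they predict when the detector activates.

```python
# Each trial: (block A on detector, block B on detector, detector activated)
trials = [(1, 0, 1), (1, 1, 1), (0, 1, 0), (1, 0, 1)]

# Candidate causal maps: which blocks have a causal edge into the detector D.
graphs = {"A->D": {"A"}, "B->D": {"B"}, "A->D, B->D": {"A", "B"}}

def likelihood(parents, trials, p_cause=0.9, p_leak=0.05):
    """Noisy-OR likelihood: each present parent independently activates D."""
    ll = 1.0
    for a, b, d in trials:
        p_off = 1 - p_leak
        if "A" in parents and a:
            p_off *= 1 - p_cause
        if "B" in parents and b:
            p_off *= 1 - p_cause
        p_on = 1 - p_off
        ll *= p_on if d else 1 - p_on
    return ll

# Uniform prior over candidate graphs, then Bayes' rule.
scores = {g: likelihood(parents, trials) / len(graphs) for g, parents in graphs.items()}
z = sum(scores.values())
for g, s in scores.items():
    print(f"P({g} | data) = {s / z:.2f}")
```

With these made-up trials the posterior concentrates on the graph in which block A alone causes activation, the same kind of structure-over-data inference the reference attributes to young children.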