Publications
Human-level concept learning through probabilistic program induction
[Figure: Handwritten characters drawn by a model]
Not only do children learn effortlessly, they do so quickly and with a remarkable ability to use what they have learned as the raw material for creating new …
  • Citations: 1,304 (156 highly influential) · Open Access
One shot learning of simple visual concepts
Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology.
  • Citations: 382 (59 highly influential) · Open Access
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in …
  • Citations: 930 (48 highly influential) · Open Access
Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks
Humans can understand and produce new utterances effortlessly, thanks to their compositional skills. Once a person learns the meaning of a new verb "dax," he or she can immediately understand the …
  • Citations: 152 (18 highly influential) · Open Access
One-shot learning by inverting a compositional causal process
People can learn a new visual class from just one example, yet machine learning algorithms typically require hundreds or thousands of examples to tackle the same problems. Here we present a …
  • Citations: 150 (16 highly influential) · Open Access
Discovering Structure by Learning Sparse Graphs
Systems of concepts such as colors, animals, cities, and artifacts are richly structured, and people discover the structure of these domains throughout a lifetime of experience. Discovering structure …
  • Citations: 102 (15 highly influential) · Open Access
Compositional generalization through meta sequence-to-sequence learning
  • B. Lake · NeurIPS · 12 June 2019
People can learn a new concept and use it compositionally, understanding how to "blicket twice" after learning how to "blicket." In contrast, powerful sequence-to-sequence (seq2seq) neural networks …
  • Citations: 39 (8 highly influential) · Open Access
Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks
Systematic compositionality is the ability to recombine meaningful units with regular and predictable outcomes, and it's seen as key to humans' capacity for generalization in language. Recent work …
  • Citations: 48 (5 highly influential) · Open Access
Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks
Humans can understand and produce new utterances effortlessly, thanks to their systematic compositional skills. Once a person learns the meaning of a new verb "dax," he or she can immediately …
  • Citations: 45 (4 highly influential) · Open Access
Human few-shot learning of compositional instructions
People learn in fast and flexible ways that have not been emulated by machines. Once a person learns a new verb "dax," he or she can effortlessly understand how to "dax twice," "walk and dax," or …
  • Citations: 27 (3 highly influential) · Open Access