• Publications
Compositional generalization in a deep seq2seq model by separating syntax and semantics
TLDR
This work suggests that separating syntactic from semantic learning may be a useful heuristic for capturing compositional structure, and implements a modification to standard approaches in neural machine translation, imposing an analogous separation.
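A minimal sketch of what such a separation could look like in a standard attentional seq2seq decoder, assuming (as an illustration only, not the paper's exact architecture) that attention weights are computed from a contextual "syntactic" encoding of the source while the attended values are context-free "semantic" word embeddings; all module names and dimensions below are hypothetical:

```python
import torch
import torch.nn as nn

class SyntaxSemanticsAttention(nn.Module):
    """Sketch of a decoder step that keeps two input streams separate:
    a contextual 'syntax' stream that only decides where to attend, and a
    context-free 'semantics' stream that only supplies what is attended to."""

    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.sem_embed = nn.Embedding(vocab_size, emb_dim)   # semantic stream: word identity only
        self.syn_embed = nn.Embedding(vocab_size, emb_dim)
        self.syn_encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                                   bidirectional=True)       # syntactic stream: word order / context
        self.query = nn.Linear(hid_dim, 2 * hid_dim)

    def forward(self, src_tokens, dec_state):
        # src_tokens: (batch, src_len); dec_state: (batch, hid_dim)
        sem = self.sem_embed(src_tokens)                      # (batch, src_len, emb_dim)
        syn, _ = self.syn_encoder(self.syn_embed(src_tokens)) # (batch, src_len, 2*hid_dim)
        # Attention weights are computed only from the syntactic encoding ...
        scores = torch.bmm(syn, self.query(dec_state).unsqueeze(2)).squeeze(2)
        weights = torch.softmax(scores, dim=1)                # (batch, src_len)
        # ... but the value passed on to the decoder is the raw semantic embedding.
        context = torch.bmm(weights.unsqueeze(1), sem).squeeze(1)
        return context, weights
```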
Deep Predictive Learning in Neocortex and Pulvinar
TLDR
This work proposes a detailed biological mechanism for the widely embraced idea that learning is based on the differences between predictions and actual outcomes (i.e., predictive error-driven learning). It implements these mechanisms in a large-scale model of the visual system and finds that the simulated inferotemporal (IT) pathway learns to systematically categorize 3D objects according to invariant shape properties, based solely on predictive learning from raw visual inputs.
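As a toy illustration only (not the neocortex/pulvinar circuitry the paper actually models), the core idea of predictive error-driven learning can be written as a delta-rule update driven by the difference between the predicted and the actual next input; everything below, including the linear predictor and the made-up environment dynamics, is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear predictor learns to anticipate the next input frame; the teaching
# signal is simply the difference between what arrived and what was predicted.
n_features, lr = 8, 0.05
W = rng.normal(scale=0.1, size=(n_features, n_features))

# A fixed "world" transition so that successive inputs are predictable.
T = rng.normal(scale=0.3, size=(n_features, n_features))

x = rng.normal(size=n_features)
for step in range(2000):
    prediction = W @ x            # predicted next input
    x_next = np.tanh(T @ x)       # actual next input from the environment
    error = x_next - prediction   # prediction error = actual minus predicted
    W += lr * np.outer(error, x)  # delta-rule update driven by the error
    x = x_next

print("remaining prediction error:", np.linalg.norm(error))
```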
Compositional Generalization by Factorizing Alignment and Translation
TLDR
This work implements a modification to an existing approach in neural machine translation, imposing an analogous separation between alignment and translation, and suggests that learning to align and to translate in separate modules may be a useful heuristic for capturing compositional structure.
Effects of the presence and absence of amino acids on translation, signaling, and long‐term depression in hippocampal slices from Fmr1 knockout mice
TLDR
This work proposes that the eIF2α response is a cellular attempt to compensate for the lack of regulation of translation by FMRP, and calls for a re‐examination of the mGluR theory of FXS.
How Sequential Interactive Processing Within Frontostriatal Loops Supports a Continuum of Habitual to Controlled Processing
TLDR
This work addresses the distinction between habitual/automatic and goal-directed/controlled behavior from the perspective of a computational model of the frontostriatal loops, in which a "model-free," dopamine-trained Go/NoGo system evaluates the entire distributed plan/goal/evaluation/prediction state.
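A heavily simplified sketch of the kind of "model-free," dopamine-trained Go/NoGo evaluation mentioned above, assuming opponent Go and NoGo weights pushed in opposite directions by a reward-prediction-error signal; this is a generic illustration, not the paper's frontostriatal model, and all names and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

n_states, n_actions, lr = 4, 2, 0.1
# Opponent pathways: Go weights favor selecting an action, NoGo weights oppose it.
W_go = np.full((n_states, n_actions), 0.5)
W_nogo = np.full((n_states, n_actions), 0.5)

# Made-up reward probabilities for each (state, action) pair.
p_reward = rng.uniform(size=(n_states, n_actions))

for trial in range(5000):
    s = rng.integers(n_states)
    # Net evidence for each action is Go minus NoGo; a softmax picks stochastically.
    net = W_go[s] - W_nogo[s]
    probs = np.exp(net) / np.exp(net).sum()
    a = rng.choice(n_actions, p=probs)
    reward = float(rng.random() < p_reward[s, a])
    # Dopamine-like reward prediction error: outcome minus current net evaluation.
    rpe = reward - net[a]
    # Positive surprises strengthen Go and weaken NoGo; negative surprises do the opposite.
    W_go[s, a] += lr * rpe
    W_nogo[s, a] -= lr * rpe

print("learned net evaluations:\n", W_go - W_nogo)
```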
Systematicity in a Recurrent Neural Network by Factorizing Syntax and Semantics
TLDR
This work suggests that separating syntactic from semantic learning may be a useful heuristic for capturing compositional structure, and highlights the potential of using cognitive principles to inform inductive biases in deep learning.
Complementary Structure-Learning Neural Networks for Relational Reasoning
TLDR
It is shown that computational models capturing the basic cognitive properties of these two systems can explain relational transitive inferences in both familiar and novel environments, and reproduce key phenomena observed in the fMRI experiment.
Compositional Processing Emerges in Neural Networks Solving Math Problems
TLDR
This work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
The Structure of Systematicity in the Brain
TLDR
This work proposes that the human brain achieves systematicity through pathways in the parietal cortex that encode the abstract structure of space, events, and tasks, while the hippocampal formation may form integrative memories that enable rapid learning of new structure and content representations.
...