The Structure of Systematicity in the Brain

@article{oreilly_systematicity,
  title={The Structure of Systematicity in the Brain},
  author={Randall C. O’Reilly and Charan Ranganath and Jacob Russin},
  journal={Current Directions in Psychological Science},
  pages={124--130}
}
A hallmark of human intelligence is the ability to adapt to new situations by applying learned rules to new content (systematicity), thereby enabling an open-ended range of inferences and actions (generativity). Here, we propose that the human brain accomplishes these feats through pathways in the parietal cortex that encode the abstract structure of space, events, and tasks, and pathways in the temporal cortex that encode information about specific people, places, and things (content…


Correcting the Hebbian mistake: Toward a fully error-driven hippocampus

It is shown that using a different form of learning based on correcting errors (error-driven learning) results in significantly improved episodic memory function in a biologically-based computational model of the hippocampus.

Dynamic hippocampal-cortical interactions during event boundaries support retention of complex narrative events

Data demonstrate that the relationship between memory encoding and hippocampal-neocortical interaction is more dynamic than suggested by most memory theories, and they converge with recent modeling work suggesting that event offset is an optimal time for encoding.

Systematicity Emerges in Transformers when Abstract Grammatical Roles Guide Attention

This work develops a novel modification to the transformer by implementing two separate input streams: a role stream controls the attention distributions at each layer, and a filler stream determines the values.
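The two-stream idea can be sketched as a single attention head in which the attention pattern is computed from role embeddings alone, while the values are read from filler embeddings. This is a minimal NumPy illustration of the separation, not the paper's actual architecture; the function name and weight matrices are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def two_stream_attention(roles, fillers, Wq, Wk, Wv):
    """One attention head: queries and keys come from the role stream,
    so the attention distribution ignores fillers; values come from
    the filler stream, so content flows only through V."""
    Q = roles @ Wq
    K = roles @ Wk
    V = fillers @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    attn = softmax(scores, axis=-1)   # pattern determined by roles only
    return attn @ V, attn

rng = np.random.default_rng(0)
T, d = 5, 8
roles = rng.normal(size=(T, d))      # abstract structural roles
fillers = rng.normal(size=(T, d))    # content-specific embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = two_stream_attention(roles, fillers, Wq, Wk, Wv)
print(out.shape, attn.shape)  # (5, 8) (5, 5)
```

Because the attention weights never see the fillers, swapping in new content leaves the learned structural pattern intact, which is the intuition behind the systematic generalization result.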

How Limited Systematicity Emerges: A Computational Cognitive Neuroscience Approach (Author's Manuscript)

This chapter addresses the claims made by Fodor & Pylyshyn (1988) and strikes a middle ground between classic symbolic and connectionist perspectives, arguing that cognition is less systematic than classicists claim, but that connectionist, neural-processing-based theories have yet to explain the extent to which it is systematic.

Complementary Structure-Learning Neural Networks for Relational Reasoning

It is shown that computational models capturing the basic cognitive properties of these two systems can explain relational transitive inferences in both familiar and novel environments, and reproduce key phenomena observed in the fMRI experiment.

The neurobiology of semantic memory

Concepts and Compositionality: In Search of the Brain's Language of Thought.

Clues from disparate areas of cognitive neuroscience are assembled, integrating recent research on language, memory, episodic simulation, and computational models of high-level cognition, to highlight emerging work on combinatorial processes in the brain and to consider this work's relation to the language of thought.

Two cortical systems for memory-guided behaviour

It is suggested that the PRC and PHC–RSC are core components of two separate large-scale cortical networks that are dissociable by neuroanatomy, susceptibility to disease and function.

The hippocampus as a predictive map

It is argued that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
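The predictive map here is the successor representation (SR), M = (I − γT)⁻¹ for a transition matrix T, whose leading eigenvectors supply the kind of low-dimensional, multiscale basis attributed to grid cells. A minimal sketch on a toy 6-state track (the random-walk environment and the choice of 3 basis components are assumptions for illustration):

```python
import numpy as np

def successor_representation(T, gamma=0.95):
    """SR for transition matrix T: M[s, s2] is the expected discounted
    future occupancy of state s2 starting from state s."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Random-walk transitions on a 1D track of 6 states (toy example)
n = 6
T = np.zeros((n, n))
for s in range(n):
    for s2 in (s - 1, s + 1):
        if 0 <= s2 < n:
            T[s, s2] = 1.0
    T[s] /= T[s].sum()

M = successor_representation(T)
# Low-dimensional basis (cf. grid cells) from the top eigenvectors of M
vals, vecs = np.linalg.eigh((M + M.T) / 2)  # symmetrize before eigh
basis = vecs[:, -3:]                        # keep 3 largest components
print(M.shape, basis.shape)  # (6, 6) (6, 3)
```

Projecting predictions onto such a truncated eigenbasis discards high-frequency components, which is one way to read the paper's claim that the basis suppresses noise and exposes multiscale structure for hierarchical planning.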

Emergent Symbols through Binding in External Memory

This work introduces the Emergent Symbol Binding Network (ESBN), a recurrent network augmented with an external memory that supports a form of variable binding and indirection, enabling the ESBN to learn rules abstracted away from the particular entities to which those rules apply.
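The binding mechanism can be illustrated with a bare key-value external memory: abstract keys play the role of emergent symbols, entity embeddings are the bound values, and retrieval depends only on key similarity, not on entity identity. A simplified sketch (the class and its methods are hypothetical, not the ESBN's actual architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class KeyValueMemory:
    """Minimal external memory in the spirit of the ESBN: write binds an
    abstract key to an entity vector; read retrieves by key similarity,
    so the same key works regardless of which entity it was bound to."""
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def read(self, query):
        K = np.stack(self.keys)
        V = np.stack(self.values)
        w = softmax(K @ query)   # similarity-weighted addressing
        return w @ V

# Bind two distinct keys to two entity vectors, then retrieve by key
mem = KeyValueMemory()
mem.write(np.array([10.0, 0.0]), np.array([1.0, 2.0]))
mem.write(np.array([0.0, 10.0]), np.array([-3.0, 4.0]))
print(mem.read(np.array([10.0, 0.0])))  # ≈ [1. 2.]
```

Because rules are expressed over keys while entities live only in the values, novel entities can be slotted into familiar roles, which is the form of abstraction the ESBN exploits.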

Deep Predictive Learning in Neocortex and Pulvinar

This work proposes a detailed biological mechanism for the widely embraced idea that learning is based on the differences between predictions and actual outcomes (i.e., predictive error-driven learning). Implementing these mechanisms in a large-scale model of the visual system, the authors found that the simulated inferotemporal (IT) pathway learns to systematically categorize 3D objects according to invariant shape properties, based solely on predictive learning from raw visual inputs.
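Stripped to its core, predictive error-driven learning is a delta rule: predict the next input, compare with what actually arrives, and update weights in proportion to the prediction error. A toy linear sketch (the dynamics matrix, learning rate, and loop are assumptions for illustration; the paper's pulvinar-based mechanism is far more detailed and biological):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d)) * 0.3   # hypothetical true next-frame dynamics
W = np.zeros((d, d))                 # learned linear predictor

lr = 0.1
for step in range(2000):
    x = rng.normal(size=d)          # current input "frame"
    target = A @ x                  # what actually arrives next
    pred = W @ x                    # the model's prediction
    err = target - pred             # prediction error drives learning
    W += lr * np.outer(err, x)      # delta-rule weight update

print(np.abs(W - A).max())          # residual error is tiny
```

The point of the toy is that no labels are ever supplied: the error signal comes entirely from comparing predictions against subsequent inputs, which is the sense in which the model learns object categories "from raw visual inputs."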