Corpus ID: 235390553

Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses

Richard J. Antonello, Javier Turek, Vy A. Vo, Alexander G. Huth
How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure among 100 different feature spaces extracted from hidden representations of various networks trained on language tasks. This method reveals a low-dimensional structure where language models and translation models smoothly interpolate between word… 
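The transfer-based analysis described in the abstract can be sketched roughly as follows: fit a linear map between each pair of feature spaces, score each pair by its transfer error, and embed the resulting distance matrix with classical MDS to expose low-dimensional structure. This is an illustrative reading, not the paper's actual pipeline; the feature matrices and names below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for hidden representations of several language
# networks over the same stimulus set: (n_stimuli, n_features) matrices.
spaces = {name: rng.standard_normal((200, 32)) for name in
          ["lm_layer1", "lm_layer2", "mt_encoder", "pos_tagger"]}

def transfer_error(X, Y, alpha=1.0):
    """Ridge-regress Y from X; return the mean squared residual."""
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    return np.mean((X @ W - Y) ** 2)

names = list(spaces)
n = len(names)
D = np.zeros((n, n))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i != j:
            D[i, j] = transfer_error(spaces[a], spaces[b])
D = (D + D.T) / 2  # symmetrize transfer errors into a distance-like matrix

# Classical MDS: double-center the squared distances, take top eigenvectors.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, ::-1][:, :2] * np.sqrt(np.maximum(vals[::-1][:2], 0))
print(dict(zip(names, coords.round(3))))
```

Each feature space ends up as a point in a 2-D map; spaces that transfer well to one another land close together.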


Brain embeddings with shared geometry to artificial contextual embeddings, as a code for representing language in the human brain

Using stringent, zero-shot mapping, it is demonstrated that brain embeddings in the IFG and the DLM contextual embedding space have strikingly similar geometry, which allows us to precisely triangulate the position of unseen words in both the brain and contextual embedding spaces.

Reconstructing the cascade of language processing in the brain using the internal computations of a transformer-based language model

This paper decomposes the associated “transformations” into individual, functionally-specialized “attention heads” and demonstrates that the emergent syntactic computations performed by individual heads correlate with predictions of brain activity in specific cortical regions.

Neural Language Taskonomy: Which NLP Tasks are the most Predictive of fMRI Brain Activity?

Transfer learning from representations learned for ten popular natural language processing tasks (two syntactic and eight semantic) is used to predict brain responses from two diverse datasets: Pereira and Narratives.

A natural language fMRI dataset for voxelwise encoding models

A dataset containing BOLD fMRI responses recorded while 8 subjects each listened to 27 complete, natural, narrative stories, accompanied by a python library containing basic code for creating voxelwise encoding models provides a large and novel resource for understanding speech and language processing in the human brain.
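The voxelwise encoding-model recipe this dataset supports can be sketched as a ridge regression from stimulus features to each voxel's BOLD response, scored by held-out prediction correlation. A minimal sketch with synthetic placeholder data, not the dataset or library itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: stimulus feature matrix (time x features) and
# BOLD responses (time x voxels), split into train and test runs.
n_train, n_test, n_feat, n_vox = 300, 100, 20, 50
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
true_W = rng.standard_normal((n_feat, n_vox))
Y_train = X_train @ true_W + rng.standard_normal((n_train, n_vox))
Y_test = X_test @ true_W + rng.standard_normal((n_test, n_vox))

def fit_ridge(X, Y, alpha=10.0):
    """Closed-form ridge regression: one weight column per voxel."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

W = fit_ridge(X_train, Y_train)
pred = X_test @ W

# Score each voxel by the correlation between predicted and held-out BOLD.
scores = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1]
                   for v in range(n_vox)])
print(f"median test correlation across voxels: {np.median(scores):.2f}")
```

In practice, encoding models for continuous speech also add delayed copies of the features to absorb the hemodynamic lag, and alpha is chosen per voxel by cross-validation.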

Self-supervised models of audio effectively explain human cortical responses to speech

Overall, these results show that self-supervised models effectively capture the hierarchy of information relevant to different stages of speech processing in human cortex.

Connecting Neural Response measurements & Computational Models of language: a non-comprehensive guide

This survey traces a line from early research linking Event Related Potentials and complexity measures derived from simple language models to contemporary studies employing Artificial Neural Network models trained on large corpora in combination with neural response recordings from multiple modalities using naturalistic stimuli.

Toward a realistic model of speech processing in the brain with self-supervised learning

The largest neuroimaging benchmark to date shows how self-supervised learning can account for a rich organization of speech processing in the brain, delineating a path toward identifying the laws of language acquisition that shape the human brain.

Multimodal foundation models are better simulators of the human brain

It is proposed to explore the explainability of multimodal learning models with the aid of non-invasive brain imaging technologies such as functional magnetic resonance imaging (fMRI), and a number of brain regions are identified where multimodally trained encoders demonstrate better neural encoding performance.

Reprint: a randomized extrapolation based on principal components for data augmentation

REPRINT is appealing for its ease of use, since it contains only one hyperparameter, determining the dimension of the subspace, and requires few computational resources; it yields stable and consistent improvements given suitable choices of principal components.
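A rough sketch of the principal-component extrapolation idea as described, where k is the single subspace-dimension hyperparameter. The function name and remaining parameters are illustrative, not the paper's reference implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def pca_extrapolate(X, k=5, n_new=10, scale=0.5):
    """Augment X by extrapolating samples along the top-k principal components.

    New samples are existing ones pushed further along their randomly
    re-weighted projections onto the k-dimensional principal subspace.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # Top-k principal directions via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                      # (features, k)
    idx = rng.integers(0, len(X), n_new)
    proj = Xc[idx] @ V                # coordinates in the subspace
    lam = rng.uniform(0, scale, (n_new, 1))
    # Extrapolate each chosen sample along its own subspace component.
    return X[idx] + lam * (proj @ V.T)

X = rng.standard_normal((50, 16))
X_aug = pca_extrapolate(X)
print(X_aug.shape)  # (10, 16)
```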

Incorporating Context into Language Encoding Models for fMRI

The models built here show a significant improvement in encoding performance relative to state-of-the-art embeddings in nearly every brain area and suggest that LSTM language models learn high-level representations that are related to representations in the human brain.

Language processing in brains and deep neural networks: computational convergence and its limits

Tests of whether activations of artificial neural networks trained on image, word, and sentence processing linearly map onto the hierarchy of human brain responses elicited during a reading task suggest that the compositional, but not the lexical, representations of modern language models converge to a brain-like solution.

Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)

It is hypothesized that altering BERT to better align with brain recordings would enable it to also better understand language, closing the loop so that the interaction between NLP and cognitive neuroscience becomes a true cross-pollination.

Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech

This work constructs interpretable multi-timescale representations by forcing individual units in an LSTM LM to integrate information over specific temporal scales, which allows us to explicitly and directly map the timescale of information encoded by each individual fMRI voxel.

The neural architecture of language: Integrative reverse-engineering converges on a model for predictive processing

Across models, significant correlations are observed among all three metrics of performance: neural fit, fit to behavioral responses, and accuracy on the next-word prediction task, consistent with the long-standing hypothesis that the brain’s language system is optimized for predictive processing.

Neural Taskonomy: Inferring the Similarity of Task-Derived Representations from Brain Activity

These computationally-driven results—arising out of state-of-the-art computer vision methods—begin to reveal the task-specific architecture of the human visual system.

Lack of selectivity for syntax relative to word meanings throughout the language network

To understand what you are reading now, your mind retrieves the meanings of words and constructions from a linguistic knowledge store (lexico-semantic processing) and identifies the relationships…

Natural speech reveals the semantic maps that tile human cerebral cortex

This study systematically maps semantic selectivity across the cortex using voxel-wise modelling of functional MRI data collected while subjects listened to hours of narrative stories, and uses a novel generative model to create a detailed semantic atlas.

Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Subprocesses

This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and different actions they perform.

The Hierarchical Cortical Organization of Human Speech Processing

To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, models based on a hierarchical set of speech features were used to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech.