From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli?

@article{Hashemzadeh2020FromLT,
  title={From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli?},
  author={Maryam Hashemzadeh and Greta Kaufeld and Martha White and Andrea E. Martin and Alona Fyshe},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.07435}
}
The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate with brain activity recorded while people read. However, these decoding results are usually based on the brain's reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short-term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with…
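The excerpt above does not show the paper's actual analysis pipeline. As a rough illustration of the kind of decoding the abstract refers to, the sketch below maps brain recordings to LSTM hidden states with cross-validated ridge regression and scores held-out predictions by correlation. All arrays and names (X_brain, H_lstm, decode_score) are placeholders, not data or code from the paper.

```python
# Minimal sketch of a brain-to-LSTM decoding analysis (illustrative only).
# Assumes X_brain is (n_samples, n_sensors) brain activity aligned to words
# and H_lstm is (n_samples, n_hidden) LSTM hidden states for the same words.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X_brain = rng.standard_normal((400, 64))   # placeholder brain features
H_lstm = rng.standard_normal((400, 128))   # placeholder LSTM hidden states

def decode_score(X, H, n_splits=5, alpha=1.0):
    """Cross-validated correlation between true and predicted hidden states."""
    scores = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train_idx], H[train_idx])
        H_pred = model.predict(X[test_idx])
        H_true = H[test_idx]
        # Pearson correlation per hidden dimension, averaged across dimensions.
        corr = [np.corrcoef(H_true[:, j], H_pred[:, j])[0, 1] for j in range(H.shape[1])]
        scores.append(np.nanmean(corr))
    return float(np.mean(scores))

print(f"mean held-out correlation: {decode_score(X_brain, H_lstm):.3f}")
```

With random placeholder data the score hovers around zero; the interesting question in this line of work is whether real recordings score reliably above a permutation baseline.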
2 Citations


Neuro-computational models of language processing
Efforts to understand the brain bases of language face the mapping problem: at what level do linguistic computations and representations connect to human neurobiology? We review one approach to this…
Modeling Neurodegeneration in silico With Deep Learning
Deep neural networks, inspired by information processing in the brain, can achieve human-like performance for various tasks. However, research efforts to use these networks as models of the brain…

References

Showing 1-10 of 31 references
Incorporating Context into Language Encoding Models for fMRI
The models built here show a significant improvement in encoding performance relative to state-of-the-art embeddings in nearly every brain area and suggest that LSTM language models learn high-level representations that are related to representations in the human brain.
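For context, an encoding model of this kind goes in the opposite direction of decoding: it predicts each voxel's response from the language model's context features. The sketch below is not the cited paper's implementation; it assumes placeholder arrays (features, voxels) and scores a ridge encoder by per-voxel correlation on a held-out split.

```python
# Minimal sketch of a voxelwise encoding model (illustrative only).
# Assumes features is (n_TRs, n_features) context-based model representations
# and voxels is (n_TRs, n_voxels) fMRI responses; both are placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = rng.standard_normal((300, 100))  # placeholder LSTM context features
voxels = rng.standard_normal((300, 50))     # placeholder fMRI voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(features, voxels, test_size=0.25, random_state=0)
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_hat = enc.predict(X_te)

# Per-voxel Pearson correlation between measured and predicted responses.
per_voxel_r = np.array(
    [np.corrcoef(Y_te[:, v], Y_hat[:, v])[0, 1] for v in range(voxels.shape[1])]
)
print(f"median per-voxel r: {np.median(per_voxel_r):.3f}")
```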
Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
It is hypothesized that altering BERT to better align with brain recordings would enable it to better understand language, closing the loop so that the interaction between NLP and cognitive neuroscience becomes a true cross-pollination.
Character-Aware Neural Language Models
A simple neural language model that relies only on character-level inputs is able to encode, from characters alone, both semantic and orthographic information, suggesting that for many languages character inputs are sufficient for language modeling.
Aligning context-based statistical models of language with brain activity during reading
The results show that before a new word i is read, brain activity is well predicted by the neural network's latent representation of context, and that predictability decreases as the brain integrates the word and changes its own representation of context.
Understanding language-elicited EEG data by predicting it from a fine-tuned language model
This work takes a step towards better understanding event-related potentials (ERPs) by fine-tuning a language model to predict them, and shows for the first time that all of the ERPs are predictable from embeddings of a stream of language.
The lexical semantics of adjective–noun phrases in the human brain
This work explores lexical semantics using magnetoencephalography recordings of people reading adjective–noun phrases presented one word at a time, revealing two novel findings: a neural representation of the adjective is present during noun presentation, but this representation differs from the one observed during adjective presentation.
Cortical representation of the constituent structure of sentences
In several inferior frontal and superior temporal regions, activation was delayed in response to the largest constituent structures, suggesting that nested linguistic structures take increasingly longer to compute and that these delays can be measured with fMRI.
Lexical and syntactic representations in the brain: An fMRI investigation with multi-voxel pattern analyses
It is shown that lexical information is represented more robustly than syntactic information across many language regions (with no language region showing the opposite pattern), as evidenced by better discrimination between conditions that differ along the lexical dimension than along the syntactic dimension.
A Compositional Neural Architecture for Language
The result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning.
Finding syntax in human encephalography with beam search
This pattern of results recommends the RNNG+beam search combination as a mechanistic model of the syntactic processing that occurs during normal human language comprehension.