Corpus ID: 244714362

Long-range and hierarchical language predictions in brains and algorithms

@article{Caucheteux2021LongrangeAH,
  title={Long-range and hierarchical language predictions in brains and algorithms},
  author={Charlotte Caucheteux and Alexandre Gramfort and J. R. King},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.14232}
}
In less than three years, deep learning has made considerable progress in text generation, translation and completion (1–4) thanks to algorithms trained with a simple learning rule: predicting words from their adjacent context. Remarkably, the activations of these models have been shown to linearly map onto human brain responses to speech and text (5–9). Moreover, this mapping appears to primarily depend on the algorithms’ ability to predict future words (7, 8), hence suggesting that this…
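
The linear mapping described in this abstract is typically implemented as a regularized regression from language-model activations to brain recordings, scored by a held-out correlation (a "brain score"). The sketch below is a minimal, hypothetical illustration of that pipeline with random placeholder arrays; it is not the paper's data or code.

```python
# Minimal sketch of a linear "brain mapping": ridge-regress language-model
# activations onto brain responses and score the fit with a held-out
# correlation per voxel. All arrays below are random placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 1000, 768, 50   # e.g. words x model dims x voxels
X = rng.standard_normal((n_samples, n_features))  # model activations per word (placeholder)
Y = rng.standard_normal((n_samples, n_voxels))    # brain responses aligned to the same words (placeholder)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_pred = model.predict(X_te)

# Pearson correlation per voxel between predicted and observed responses
brain_score = np.array([np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1]
                        for v in range(n_voxels)])
print("mean brain score:", brain_score.mean())
```

In practice, X would hold activations extracted from a language model time-locked to the stimulus, and Y the preprocessed fMRI or MEG responses to the same speech or text.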

Citations

A whitening approach for Transfer Entropy permits the application to narrow-band signals
TLDR
An alternative approach is proposed that whitens the input signals before computing a bivariate measure of directional time-lagged dependency; it resolves the problems found in simple simulated systems, and its behaviour is explored on delta and theta response components of magnetoencephalography responses to continuous speech.
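
As a rough illustration of the whiten-then-measure idea summarized above, the sketch below prewhitens two toy signals with an autoregressive fit and then computes a lagged cross-correlation of the residuals. The cited work uses transfer entropy; the correlation here is only a simplified, assumed stand-in.

```python
# Hedged sketch: prewhiten each signal with a simple AR fit, then measure a
# lagged dependency on the residuals (stand-in for transfer entropy).
import numpy as np

def ar_whiten(x, order=8):
    """Fit an AR(order) model by least squares and return the residuals (innovations)."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coefs

def lagged_dependency(source, target, lags=range(1, 20)):
    """Correlation between past source residuals and current target residuals."""
    return {lag: np.corrcoef(source[:-lag], target[lag:])[0, 1] for lag in lags}

rng = np.random.default_rng(1)
stim = rng.standard_normal(5000)                     # toy "speech envelope" (placeholder)
meg = np.convolve(stim, np.ones(10) / 10)[:len(stim)]  # causally smoothed toy response
meg += 0.5 * rng.standard_normal(len(meg))

dep = lagged_dependency(ar_whiten(stim), ar_whiten(meg))
best_lag = max(dep, key=dep.get)
print("best lag:", best_lag, "correlation:", round(dep[best_lag], 3))
```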

References

SHOWING 1-10 OF 74 REFERENCES
Language processing in brains and deep neural networks: computational convergence and its limits
TLDR
Activations of artificial neural networks trained on image, word and sentence processing linearly map onto the hierarchy of human brain responses elicited during a reading task, suggesting that the compositional, but not the lexical, representations of modern language models converge to a brain-like solution.
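
The lexical versus compositional contrast above roughly corresponds to non-contextual embedding-layer activations versus deeper, context-dependent activations of a language model. The sketch below shows one hypothetical way to extract both feature sets from GPT-2; the layer choices are assumptions, and either matrix could feed a ridge mapping like the one sketched after the abstract.

```python
# Hedged sketch: extract "lexical" (embedding-layer) and "compositional"
# (mid-depth, contextual) activations from GPT-2. Layer indices are illustrative.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True).eval()

enc = tokenizer("The cat that the dogs chased ran away.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).hidden_states   # tuple: embedding output + 12 transformer layers

lexical = hidden[0][0]        # (n_tokens, 768) non-contextual embedding activations
compositional = hidden[8][0]  # (n_tokens, 768) context-dependent activations from a mid layer
print(lexical.shape, compositional.shape)
```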
Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
TLDR
It is hypothesized that altering BERT to better align with brain recordings would enable it to also better understand language, closing the loop so that the interaction between NLP and cognitive neuroscience becomes a true cross-pollination.
Tracking Naturalistic Linguistic Predictions with Deep Neural Language Models
TLDR
Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400.
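
The predictability estimates referred to above are commonly operationalized as per-word surprisal from a pretrained language model, which is then regressed against EEG epochs. The sketch below computes token-level surprisal with GPT-2; the model choice and example sentence are assumptions, not the cited paper's exact pipeline.

```python
# Illustrative sketch: per-token surprisal (-log p) from a pretrained GPT-2,
# which could then serve as a predictor of EEG responses.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "Once upon a time there was a careful listener."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # (1, n_tokens, vocab)

# Surprisal of token t given tokens < t (skip the first token, which has no context).
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = enc.input_ids[0, 1:]
surprisal = -log_probs[torch.arange(len(targets)), targets]

for tok, s in zip(tokenizer.convert_ids_to_tokens(targets.tolist()), surprisal):
    print(f"{tok:>12s}  {s.item():5.2f} nats")
```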
GPT-2’s activations predict the degree of semantic comprehension in the human brain
TLDR
GPT-2’s brain predictions significantly correlate with semantic comprehension, showing that comparing deep language models with the brain paves the way to a computational model of semantic comprehension.
Artificial Neural Networks Accurately Predict Language Processing in the Brain
TLDR
The hypothesis that a drive to predict future inputs may shape human language processing, and perhaps the way knowledge of language is learned and organized in the brain, is supported.
Aligning context-based statistical models of language with brain activity during reading
TLDR
The novel results show that before a new word i is read, brain activity is well predicted by the neural network’s latent representation of context, and that this predictability decreases as the brain integrates the word and updates its own representation of context.
Linguistic generalization and compositionality in modern artificial neural networks
Marco Baroni, Philosophical Transactions of the Royal Society B, 2019
TLDR
It is argued that the intriguing behaviour of these devices should be of interest to linguists and cognitive scientists, as it offers a new perspective on possible computational strategies to deal with linguistic productivity beyond rule-based compositionality.
Neural Language Models Capture Some, But Not All Agreement Attraction Effects
TLDR
The LSTMs captured the critical human behavior in three out of the six experiments, indicating that (1) some agreement attraction phenomena can be captured by a generic sequence processing model, but (2) capturing the other phenomena may require models with more language-specific mechanisms.
Can RNNs learn Recursive Nested Subject-Verb Agreements?
TLDR
A new framework for studying recursive processing in RNNs is presented, using subject-verb agreement as a probe into the network’s representations; it indicates how neural networks may extract bounded nested tree structures without learning a systematic recursive rule.
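
Both entries above probe agreement through a language model's next-word preferences. A minimal, hypothetical version of such a probe compares the model's log-probabilities for singular versus plural verb forms after prefixes with and without an intervening attractor noun, as sketched below with GPT-2 (the example sentences are assumptions).

```python
# Hedged sketch of an agreement probe: compare the model's preference for
# "is" vs. "are" after prefixes whose attractor noun matches or mismatches
# the subject's number.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def next_word_logprob(prefix, word):
    """Log-probability the model assigns to `word` as the next token of `prefix`."""
    ids = tokenizer(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    word_id = tokenizer(" " + word).input_ids[0]  # leading space for GPT-2 BPE
    return torch.log_softmax(logits, dim=-1)[word_id].item()

for prefix in ["The key to the cabinets", "The keys to the cabinet"]:
    diff = next_word_logprob(prefix, "is") - next_word_logprob(prefix, "are")
    print(f"{prefix!r}: log p(is) - log p(are) = {diff:+.2f}")
```

A positive difference for the singular subject and a negative one for the plural subject would indicate correct agreement; attraction errors would show up as a reduced or reversed difference when the attractor noun mismatches the subject.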
Language Models are Unsupervised Multitask Learners
TLDR
It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.