Finding syntax in human encephalography with beam search
@inproceedings{Hale2018FindingSI, title={Finding syntax in human encephalography with beam search}, author={John Hale and Chris Dyer and Adhiguna Kuncoro and Jonathan Brennan}, booktitle={ACL}, year={2018} }
Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, they derive two amplitude effects: an early peak and a P600-like later peak. By contrast, a non-syntactic neural…
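The two complexity metrics named in the abstract can be made concrete with a small sketch. The following is a hypothetical illustration rather than the authors' implementation: it approximates word surprisal from the log probabilities of the analyses retained on the beam just before and just after a word is consumed, and counts parser actions per word; `beam_logprobs_before`, `beam_logprobs_after`, and `actions_taken` are assumed inputs produced by an incremental RNNG parser.

```python
import math

def logsumexp(logps):
    """Numerically stable log of a sum of probabilities given in log space."""
    m = max(logps)
    return m + math.log(sum(math.exp(lp - m) for lp in logps))

def word_surprisal(beam_logprobs_before, beam_logprobs_after):
    """Approximate surprisal of one word under beam search.

    Each argument lists log joint probabilities log p(partial tree, word prefix)
    for the analyses on the beam before vs. after consuming the word, so each
    beam sum approximates a prefix probability. Surprisal is
    -log [ p(w_1..i) / p(w_1..i-1) ].
    """
    return logsumexp(beam_logprobs_before) - logsumexp(beam_logprobs_after)

def action_count(actions_taken):
    """Parser action count: derivational actions (NT, SHIFT, REDUCE) taken for this word."""
    return len(actions_taken)

# Made-up numbers: the prefix probability drops from ~0.01 to ~0.001,
# giving a surprisal of about 2.3 nats for this word.
before = [math.log(p) for p in (0.006, 0.003, 0.001)]
after = [math.log(p) for p in (0.0006, 0.0003, 0.0001)]
print(word_surprisal(before, after))               # ~2.30
print(action_count(["NT(S)", "NT(NP)", "SHIFT"]))  # 3
```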
80 Citations
Localizing syntactic predictions using recurrent neural network grammars
- Computer Science · Neuropsychologia
- 2020
Comparison of Structural Parsers and Neural Language Models as Surprisal Estimators
- Computer Science, Psychology · Frontiers in Artificial Intelligence
- 2022
Results show that surprisal estimates from the proposed left-corner processing model deliver comparable and often superior fits to self-paced reading and eye-tracking data when compared to those from neural language models trained on much more data, suggesting that the strong linguistic generalizations made by the proposed processing model may help predict humanlike processing costs that manifest in latency-based measures, even when the amount of training data is limited.
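The "fit" reported in studies like the one summarized above is commonly quantified as the log-likelihood a regression of reading times gains when a surprisal predictor is added to a baseline model. A minimal sketch of that comparison, assuming statsmodels and hypothetical column names (published analyses typically use linear mixed-effects models with spillover terms rather than plain OLS):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-word data: reading time (ms), word length, log frequency,
# and surprisal values from two different estimators.
df = pd.DataFrame({
    "rt":             [310, 295, 402, 350, 288, 420, 365, 301],
    "length":         [4, 6, 9, 5, 3, 8, 7, 4],
    "log_freq":       [-7.1, -6.2, -9.5, -8.0, -5.9, -10.2, -8.8, -6.5],
    "surp_parser":    [3.2, 2.1, 7.8, 4.0, 1.9, 8.5, 5.2, 2.8],
    "surp_neural_lm": [2.9, 2.5, 6.9, 4.4, 2.0, 7.7, 5.6, 2.6],
})

# Baseline regression without any surprisal predictor.
baseline = smf.ols("rt ~ length + log_freq", data=df).fit()

def delta_loglik(predictor):
    """Log-likelihood gained over the baseline by adding one surprisal predictor."""
    full = smf.ols(f"rt ~ length + log_freq + {predictor}", data=df).fit()
    return full.llf - baseline.llf

print("parser surprisal:   ", delta_loglik("surp_parser"))
print("neural LM surprisal:", delta_loglik("surp_neural_lm"))
```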
Connecting Neural Response measurements & Computational Models of language: a non-comprehensive guide
- Biology · ArXiv
- 2022
This survey traces a line from early research linking Event Related Potentials and complexity measures derived from simple language models to contemporary studies employing Artificial Neural Network models trained on large corpora in combination with neural response recordings from multiple modalities using naturalistic stimuli.
Cortical processing of reference in language revealed by computational models
- Psychology
- 2020
The neural fit of three symbolic models, each formalizing a different strand of explanation for pronoun resolution in the cognitive and linguistic literature, is evaluated alongside two deep neural network models with an LSTM or a Transformer architecture; the results favor the memory-based symbolic model.
Neural language models as psycholinguistic subjects: Representations of syntactic state
- Computer Science, Psychology · NAACL
- 2019
Experimental methodologies originally developed in psycholinguistics to study syntactic representation in the human mind are employed to examine neural network model behavior on sets of artificial sentences containing a variety of syntactically complex structures.
Memory-bounded Neural Incremental Parsing for Psycholinguistic Prediction
- Computer Science, Psychology · IWPT
- 2020
Results show that the accuracy gains of neural parsers can be reliably extended to psycholinguistic modeling without risk of distortion due to unbounded working memory.
From Language to Language-ish: How Brain-Like is an LSTM’s Representation of Atypical Language Stimuli?
- Computer Science, Psychology · Findings of EMNLP 2020
- 2020
It is found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM, indicating that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
Unsupervised Recurrent Neural Network Grammars
- Computer Science · NAACL
- 2019
To apply amortized variational inference to unsupervised learning of RNNGs, an inference network parameterized as a neural CRF constituency parser is developed to maximize the evidence lower bound.
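The evidence lower bound mentioned here is the standard one; writing x for the observed sentence, z for the latent parse tree, p_θ for the RNNG, and q_φ for the CRF inference network (notation added for orientation, not taken from the cited paper):

$$
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x, z)\big] \;-\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log q_\phi(z \mid x)\big] \;=\; \mathrm{ELBO}(\theta, \phi; x)
$$

The CRF parameterization lets the expectation over trees be estimated efficiently, so the bound can be optimized with respect to both θ and φ.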
Hierarchical structure guides rapid linguistic predictions during naturalistic listening
- Psychology · PLoS ONE
- 2019
It is found that predictions based on hierarchical structure correlate with the human brain response above and beyond predictions based only on sequential information, establishing a link between hierarchical linguistic structure and neural signals that generalizes across the range of syntactic structures found in everyday language.
References
Showing 1-10 of 45 references
Abstract linguistic structure correlates with temporal activity during naturalistic comprehension
- Biology · Brain and Language
- 2016
What Do Recurrent Neural Network Grammars Learn About Syntax?
- Computer Science · EACL
- 2017
By training grammars without nonterminal labels, it is found that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.
MEG Evidence for Incremental Sentence Composition in the Anterior Temporal Lobe.
- Psychology · Cognitive Science
- 2017
Data indicate that the left ATL engages in combinatoric processing that is well characterized by a predictive left-corner parsing strategy, and is associated with sentence-level combinatorics.
Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing
- Linguistics · EMNLP
- 2009
Novel methods are presented for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG, together with an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size.
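Schematically, the decomposition described in this summary splits the total surprisal of a word, defined from prefix probabilities, into syntactic and lexical components (notation simplified relative to the cited paper, which computes the prefix probabilities with an incremental top-down lexicalized PCFG parser):

$$
S(w_i) \;=\; -\log \frac{\Pr(w_1 \dots w_i)}{\Pr(w_1 \dots w_{i-1})} \;=\; S_{\mathrm{syn}}(w_i) + S_{\mathrm{lex}}(w_i),
$$

where $S_{\mathrm{syn}}$ collects the contribution of structural rules in the derivation steps for $w_i$ and $S_{\mathrm{lex}}$ the contribution of the lexical rules that generate $w_i$ itself.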
Hierarchical structure guides rapid linguistic predictions during naturalistic listening
- Psychology · PLoS ONE
- 2019
It is found that predictions based on hierarchical structure correlate with the human brain response above and beyond predictions based only on sequential information, establishing a link between hierarchical linguistic structure and neural signals that generalizes across the range of syntactic structures found in everyday language.
Incremental, Predictive Parsing with Psycholinguistically Motivated Tree-Adjoining Grammar
- Computer Science · CL
- 2013
This article presents the first broad-coverage probabilistic parser for PLTAG, a psycholinguistically motivated variant of TAG that supports incremental, connected, and predictive parsing, and achieves performance comparable to existing TAG parsers that are incremental but not predictive.
Aligning context-based statistical models of language with brain activity during reading
- Computer Science, Psychology · EMNLP
- 2014
The novel results show that before a new word i is read, brain activity is well predicted by the neural network's latent representation of context, and that this predictability decreases as the brain integrates the word and changes its own representation of context.
Evidence of syntactic working memory usage in MEG data
- Biology · CMCL@NAACL-HLT
- 2015
It is found that frequency effects in naturally-occurring stimuli do not significantly contribute to neural oscillations in any frequency band, which suggests that many modeling claims could be tested on this sort of data even without controlling for frequency effects.
A Neurocomputational Model of the N400 and the P600 in Language Processing
- Computer Science · Cognitive Science
- 2017
This neurocomputational model is the first to successfully simulate the N400 and P600 amplitudes in language comprehension, and simulations with this model provide a proof of concept of the single-stream Retrieval-Integration (RI) account of semantically induced patterns of N400 and P600 modulations.