Although it is generally accepted that hierarchical phrase structures are instrumental in describing human language, their role in cognitive processing is still debated. We investigated the role of hierarchical structure in sentence processing by implementing a range of probabilistic language models, some of which depended on hierarchical structure, and …
Connectionist models of sentence processing must learn to behave systematically by generalizing from a small training set. To what extent recurrent neural networks manage this generalization task is investigated. In contrast to Van der Velde et al., it is found that simple recurrent networks do show so-called weak combinatorial systematicity, although their …
An English double-embedded relative clause from which the middle verb is omitted can often be processed more easily than its grammatical counterpart, a phenomenon known as the grammaticality illusion. This effect has been found to be reversed in German, suggesting that the illusion is language specific rather than a consequence of universal working memory …
The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investigated the neural basis of 2 distinct aspects of word prediction, derived from information theory, during story comprehension. We assessed the effect of entropy of next-word probability distributions as well as surprisal. A computational model determined entropy …
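The two information-theoretic quantities named in the abstract above have standard definitions: the entropy of a next-word probability distribution is H = -Σ p·log₂ p. A minimal sketch (a toy distribution, not the model from the study):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-word probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy distribution over four candidate next words
print(entropy([0.5, 0.25, 0.125, 0.125]))  # → 1.75 bits
```

High entropy means the context licenses many continuations roughly equally; low entropy means the next word is highly predictable.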
Fodor and Pylyshyn [Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71] argue that connectionist models are not able to display systematicity other than by implementing a classical symbol system. This claim entails that connectionism cannot compete with the classical approach as an …
A computational model of inference during story comprehension is presented, in which story situations are represented in distributed fashion as points in a high-dimensional "situation-state space." This state space organizes itself on the basis of a constructed microworld description. From the same description, causal/temporal world knowledge is extracted. The …
We investigated the effect of word surprisal on the EEG signal during sentence reading. On each word of 205 experimental sentences, surprisal was estimated by three types of language model: Markov models, probabilistic phrase-structure grammars, and recurrent neural networks. Four event-related potential components were extracted from the EEG of 24 …
Probabilistic accounts of language processing can be psychologically tested by comparing word-reading times (RT) to the conditional word probabilities estimated by language models. Using surprisal as a linking function, a significant correlation between unlexicalized surprisal and RT has been reported (e.g., Demberg and Keller, 2008), but success using …
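The surprisal linking function used throughout these abstracts is simply -log₂ of the conditional word probability the language model assigns. A minimal sketch using a toy maximum-likelihood bigram (Markov) model; the corpus, function name, and words are illustrative only, and the actual studies use smoothed n-gram models, phrase-structure grammars, or RNNs trained on large corpora:

```python
import math
from collections import Counter

def bigram_surprisal(corpus, prev, word):
    """Surprisal (bits) of `word` given `prev`, from raw bigram counts.

    Toy maximum-likelihood estimate: P(word | prev) = count(prev, word) / count(prev).
    """
    tokens = corpus.split()
    bigrams = Counter(zip(tokens, tokens[1:]))
    # Count `prev` only where it can start a bigram (i.e., not as final token)
    contexts = Counter(tokens[:-1])
    p = bigrams[(prev, word)] / contexts[prev]
    return -math.log2(p)

corpus = "the dog ran and the cat ran and the dog slept"
print(bigram_surprisal(corpus, "the", "dog"))  # → ≈0.585 bits, since P(dog|the) = 2/3
```

Under this linking hypothesis, words with higher surprisal should take longer to read, which is what the RT correlations test.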