The 'visual world paradigm' typically involves presenting participants with a visual scene and recording their eye movements as they either hear an instruction to manipulate objects in the scene or listen to a description of what may happen to those objects. In this study, participants heard each target sentence only after the corresponding visual scene […]
Two visual-world eyetracking experiments were conducted to investigate whether, how, and when syntactic and semantic constraints are integrated and used to predict properties of subsequent input. Experiment 1 contrasted auditory German constructions such as "The hare-nominative eats ... (the cabbage-acc)" versus "The hare-accusative eats ... (the […]
Gary F. Marcus et al. (1) familiarized 7-month-old infants with sequences of syllables generated by an artificial grammar; the infants were then able to discriminate between sequences generated by that grammar and sequences generated by another, even though the sequences in the familiarization and test phases employed different syllables. Marcus et al. stated that their infants […]
Infants can discriminate between familiar and unfamiliar grammatical patterns expressed in a vocabulary distinct from that used earlier during familiarization (Cognition 70(2) (1999) 109; Science 283 (1999) 77). Various models have captured the data, although each required that discrimination be distinct, in terms of the computational process, from […]
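The artificial-grammar paradigm described in the two abstracts above can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not taken from either paper: it assumes ABA/ABB-style repetition patterns of the kind Marcus et al. used, and the syllable inventories shown are hypothetical placeholders. The point it demonstrates is that the grammar is defined by the repetition pattern alone, so the same pattern transfers to a test vocabulary disjoint from the familiarization vocabulary.

import random

# Hypothetical syllable inventories; the test vocabulary is deliberately
# disjoint from the familiarization vocabulary, as in Marcus et al.
FAMILIARIZATION_SYLLABLES = {"A": ["ga", "li", "ni"], "B": ["ti", "na", "la"]}
TEST_SYLLABLES = {"A": ["wo", "de"], "B": ["fe", "ko"]}

def generate(grammar, syllables, n=4, seed=None):
    """Generate n three-syllable sequences from an ABA or ABB grammar."""
    rng = random.Random(seed)
    sequences = []
    for _ in range(n):
        a = rng.choice(syllables["A"])
        b = rng.choice(syllables["B"])
        # The grammar is defined purely by the repetition pattern,
        # independent of the particular syllables that fill the slots.
        sequences.append((a, b, a) if grammar == "ABA" else (a, b, b))
    return sequences

# Familiarize with one grammar, then test with novel syllables drawn
# from sequences generated by both grammars:
print(generate("ABA", FAMILIARIZATION_SYLLABLES, seed=0))
print(generate("ABA", TEST_SYLLABLES, seed=1))  # consistent grammar
print(generate("ABB", TEST_SYLLABLES, seed=1))  # inconsistent grammar

Discriminating the last two outputs requires sensitivity to the repetition pattern itself, since none of the test syllables occurred during familiarization; this is the discrimination ability at issue in the modeling debate the abstract describes.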
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, they spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of […]
Participants saw a small number of objects in a visual display and performed a visual-detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change, and participants generated […]
Ferreira et al. [1] outline an 'integrated representation theory' of the 'looking at nothing' phenomenon that we have previously documented [2–9]. We largely agree with the explanation by Ferreira et al. because we have argued for the same mechanisms ourselves in prior publications. Their claim to novelty rests upon a misrepresentation of our views (see Box […]
In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and […]
When an object is described as changing state during an event, do the representations of those states compete? The distinct states they represent cannot coexist at any one moment in time, yet each representation must be retrievable at the cost of suppressing the other possible object states. We used functional magnetic resonance imaging of human […]