Two visual-world eyetracking experiments were conducted to investigate whether, how, and when syntactic and semantic constraints are integrated and used to predict properties of subsequent input. Experiment 1 contrasted auditory German constructions such as "The hare-nominative eats ... (the cabbage-acc)" versus "The hare-accusative eats ... (the ...
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, they spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 84-107] ...
Participants saw a small number of objects in a visual display and performed a visual-detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change, and participants generated ...
In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 84-107] ...
When an object is described as changing state during an event, do the representations of those states compete? The distinct states they represent cannot coexist at any one moment in time, yet each representation must be retrievable at the cost of suppressing the other possible object states. We used functional magnetic resonance imaging of human ...
Understanding events often requires recognizing unique stimuli as alternative, mutually exclusive states of the same persisting object. Using fMRI, we examined the neural mechanisms underlying the representation of object states and object-state changes. We found that subjective ratings of visual dissimilarity between a depicted object and an unseen ...
Individual differences in children's online language processing were explored by monitoring their eye movements to objects in a visual scene as they listened to spoken sentences. Eleven skilled and eleven less-skilled comprehenders were presented with sentences containing verbs that were either neutral with respect to the visual context (e.g., Jane watched her ...
Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements ...
An auditory sentence comprehension task investigated the extent to which the integration of contextual and structural cues was mediated by verbal memory span in 32 English-speaking six- to eight-year-old children. Spoken relative clause sentences were accompanied by visual context pictures which fully (depicting the actions described within the relative ...
Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously oriented fixations over a particular ...