To assess semantic processing of iconic gestures, EEG (29 scalp sites) was recorded as adults watched cartoon segments paired with soundless videos of congruous and incongruous gestures followed by probe words. Event-related potentials time-locked to the onset of gestures and probe words were measured in two experiments. In Experiment 1, participants judged…
Two studies tested the hypothesis that the right hemisphere engages in relatively coarse semantic coding that aids high-level language tasks such as joke comprehension. Scalp-recorded event-related brain potentials (ERPs) were collected as healthy adults read probe words (CRAZY) preceded either by jokes or nonfunny controls ("Everyone had so much fun jumping…
EEG was recorded as adults watched short segments of spontaneous discourse in which the speaker's gestures and utterances contained complementary information. Videos were followed by one of four types of picture probes: cross-modal related probes were congruent with both speech and gestures; speech-only related probes were congruent with information in the…
To assess priming by iconic gestures, we recorded EEG (at 29 scalp sites) in two experiments while adults watched short, soundless videos of spontaneously produced, co-speech iconic gestures followed by related or unrelated probe words. In Experiment 1, participants classified the relatedness between gestures and words. In Experiment 2, they attended to…
The electroencephalogram (EEG) was recorded as healthy adults viewed short videos of spontaneous discourse in which a speaker used depictive gestures to complement information expressed through speech. Event-related potentials were computed time-locked to content words in the speech stream and to subsequent related and unrelated picture probes. Gestures modulated…
Conversation is multi-modal, involving both talk and gesture. Does understanding depictive gestures engage processes similar to those recruited in the comprehension of drawings or photographs? Event-related brain potentials (ERPs) were recorded from neurotypical adults as they viewed spontaneously produced depictive gestures preceded by congruent and…
Working memory (WM) models have traditionally assumed at least two domain-specific storage systems for verbal and visuo-spatial information. We review data that suggest the existence of an additional slave system devoted to the temporary storage of body movements, and present a novel instrument for its assessment: the movement span task. The movement span…
Multi-modal discourse comprehension requires speakers to combine information from speech and gestures. To date, little research has addressed the cognitive resources that underlie these processes. Here we used a dual task paradigm to test the relative importance of verbal and visuo-spatial working memory in speech-gesture comprehension. Healthy…
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures…