Observing a speaker's mouth profoundly influences speech perception. For example, listeners perceive an illusory "ta" when video of a face producing /ka/ is dubbed onto an audio /pa/. Here, we show how cortical areas supporting speech production mediate this illusory percept and audiovisual (AV) speech perception more generally. Specifically, cortical…
Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or "observation-execution…
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension when the speaker was seen and heard while talking (audiovisual), heard but not seen (audio-alone), or when the speaker was…
Is there a neural representation of speech that transcends its sensory properties? Using fMRI, we investigated whether there are brain areas where neural activity during observation of sublexical audiovisual input corresponds to a listener's speech percept (what is "heard") independent of the sensory properties of the input. A target audiovisual stimulus…
A comprehensive account of how speech is perceived should encompass audiovisual speech perception. The ability to see as well as hear has to be integral to the design, not merely a retro-fitted afterthought. Summerfield (1987). 8.1 The "lack of invariance problem" and multisensory speech perception: In speech there is a many-to-many mapping between acoustic…
Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information that listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was…
Of several thousand peptides presented by the major histocompatibility complex molecule HLA-A2.1, at least nine are recognized by melanoma-specific cytotoxic T lymphocytes (CTLs). Tandem mass spectrometry was used to identify and sequence one of these peptide epitopes. Melanoma-specific CTLs had an exceptionally high affinity for this nine-residue peptide, which…
Although the linguistic structure of speech provides valuable communicative information, nonverbal behaviors can offer additional, often disambiguating cues. In particular, being able to see the face and hand movements of a speaker facilitates language comprehension [1]. But how does the brain derive meaningful information from these movements? Mouth…
The increasingly complex questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, allows collaborative research, and achieves these aims securely and with minimum…
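The kind of data-management layer this abstract calls for can be illustrated with a minimal, hypothetical sketch: assuming a dataset organized in the community BIDS standard and queried with the pybids library (neither of which is named in the abstract, and neither is the system the paper describes), a one-time index supports rapid, precise selection of files from a very large collection.

    # Minimal sketch, assuming a BIDS-formatted dataset and the pybids package.
    # The dataset path, subject label, and task name below are hypothetical.
    from bids import BIDSLayout

    # Index the dataset once; later queries run against the index, which
    # keeps lookups fast even across thousands of imaging files.
    layout = BIDSLayout("/data/speech_study")  # hypothetical dataset path

    # Precisely select all functional (BOLD) runs for one subject and task.
    bold_runs = layout.get(
        subject="01",        # hypothetical subject label
        task="speech",       # hypothetical task name
        suffix="bold",
        extension=".nii.gz",
        return_type="filename",
    )
    print(bold_runs)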
Functional magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in which a stimulus is presented repeatedly to conditions in which multiple stimuli are presented. This approach has established that a set of superior temporal and inferior parietal regions responds more strongly to conditions containing stimulus…