BACKGROUND Integrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. …
Watching a speaker's lips during face-to-face conversation (lipreading) markedly improves speech perception, particularly in noisy conditions. With functional magnetic resonance imaging it was found that these linguistic visual cues are sufficient to activate auditory cortex in normal hearing individuals in the absence of auditory speech sounds. Two further …
In order to understand the evolution of human language, it is necessary to explore the neural systems that support language processing in its many forms. In particular, it is informative to separate those mechanisms that may have evolved for sensory processing (hearing) from those that have evolved to represent events and actions symbolically (language). To …
BACKGROUND We assessed motion processing in a group of high-functioning children with autism and a group of typically developing children, using a coherent motion detection task. METHOD Twenty-five children with autism (mean age 11 years, 8 months) and 22 typically developing children matched for non-verbal mental ability and chronological age were …
Can the cortical substrates for the perception of face actions be distinguished when the superficial visual qualities of these actions are very similar? Two fMRI experiments are reported. Compared with watching the face at rest, observing silent speech was associated with bilateral activation in a number of temporal cortical regions, including the superior …
Integrating information across the senses can enhance our ability to detect and classify stimuli in the environment. For example, auditory speech perception is substantially improved when the speaker's face is visible. In an fMRI study designed to investigate the neural mechanisms underlying these crossmodal behavioural gains, bimodal (audio-visual) speech …
Speech is perceived both by ear and by eye. Unlike heard speech, some seen speech gestures can be captured in stilled image sequences. Previous studies have shown that in hearing people, natural time-varying silent seen speech can access the auditory cortex (left superior temporal regions). Using functional magnetic resonance imaging (fMRI), the present …
Most of our knowledge about the neurobiological bases of language comes from studies of spoken languages. By studying signed languages, we can determine whether what we have learnt so far is characteristic of language per se or whether it is specific to languages that are spoken and heard. Overwhelmingly, lesion and neuroimaging studies indicate that the …
In a previous study we used functional magnetic resonance imaging (fMRI) to demonstrate activation in auditory cortex during silent speechreading. Since image acquisition during fMRI generates acoustic noise, this pattern of activation could have reflected an interaction between background scanner noise and the visual lip-read stimuli. In this study we …
In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which …