BACKGROUND: Integrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and shorten reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses.
To contribute to an understanding of the roles and mechanisms of action of Wnts in early vertebrate development, we have characterized the normal expression of Xenopus laevis Wnt-5A, and investigated the consequences of misexpression of this putative signalling factor. Xwnt-5A transcripts are expressed throughout development, and are enriched in both the …
BACKGROUND: We assessed motion processing in a group of high functioning children with autism and a group of typically developing children, using a coherent motion detection task. METHOD: Twenty-five children with autism (mean age 11 years, 8 months) and 22 typically developing children matched for non-verbal mental ability and chronological age were …
Integrating information across the senses can enhance our ability to detect and classify stimuli in the environment. For example, auditory speech perception is substantially improved when the speaker's face is visible. In an fMRI study designed to investigate the neural mechanisms underlying these crossmodal behavioural gains, bimodal (audio-visual) speech …
In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which …
Speech is perceived both by ear and by eye. Unlike heard speech, some seen speech gestures can be captured in stilled image sequences. Previous studies have shown that in hearing people, natural time-varying silent seen speech can access the auditory cortex (left superior temporal regions). Using functional magnetic resonance imaging (fMRI), the present …
One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a …
In a previous study we used functional magnetic resonance imaging (fMRI) to demonstrate activation in auditory cortex during silent speechreading. Since image acquisition during fMRI generates acoustic noise, this pattern of activation could have reflected an interaction between background scanner noise and the visual lip-read stimuli. In this study we …
The term developmental prosopagnosia refers to an impairment in the recognition of familiar faces that has been present from birth in the absence of neurological disease or birth complications. The first reported study was by McConachie (1976, Cortex, 12: 76-82), and we report here a fifteen-year follow-up on this case (AB). Recently developed theoretical …
We present an investigation of facial expression recognition by three people (BC, LP, and NC) with Möbius syndrome, a congenital disorder producing facial paralysis. The participants were asked to identify the emotion displayed in 10 examples of facial expressions associated with each of 6 basic emotions from the Ekman and Friesen (1976) series. None of the …