Sonja Schall

Disparate sensory streams originating from a common underlying event share similar dynamics, and this plays an important part in multisensory integration. Here we investigate audiovisual binding by presenting continuously changing, temporally congruent and incongruent stimuli. Recorded EEG signals are used to quantify spectrotemporal and waveform locking of …
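The snippet above mentions quantifying how EEG activity locks to the temporal dynamics of a stimulus. As an illustrative aside only (this is not the paper's actual analysis pipeline), one simple way to measure such envelope locking is to correlate a smoothed EEG power envelope with the stimulus amplitude envelope; the function and toy signals below are a minimal sketch under that assumption.

```python
import numpy as np

def envelope_locking(eeg, stimulus, fs, win=0.05):
    """Quantify 'locking' as the Pearson correlation between a
    smoothed EEG power envelope and a stimulus amplitude envelope.
    Illustrative sketch only -- not the analysis used in the paper."""
    n = max(1, int(win * fs))          # moving-average window in samples
    kernel = np.ones(n) / n
    eeg_power = np.convolve(eeg ** 2, kernel, mode="same")   # power envelope
    stim_env = np.convolve(np.abs(stimulus), kernel, mode="same")
    return np.corrcoef(eeg_power, stim_env)[0, 1]

# Toy check: a 40 Hz "EEG" carrier whose amplitude follows the stimulus
fs = 500
t = np.arange(0, 2, 1 / fs)
stim = np.abs(np.sin(2 * np.pi * 1.0 * t))       # slowly varying envelope
eeg = stim * np.sin(2 * np.pi * 40 * t)          # amplitude-modulated carrier
print(envelope_locking(eeg, stim, fs))           # close to 1 for this toy case
```

A correlation near 1 indicates that the EEG power tracks the stimulus's temporal profile; uncorrelated signals hover near 0.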
A PC-based ultrasound data acquisition system has been developed which uses compound scanning techniques to image a residual limb in a water tank. From the received ultrasonic echo data, the system produces cross-sectional images and reconstructs a three-dimensional (3-D) model of the limb. Commercial software for computer-aided prosthetic socket design …
How do we recognize people who are familiar to us? There is overwhelming evidence that our brains process voice and face in a combined fashion to optimally recognize both who is speaking and what is said. Surprisingly, this combined processing of voice and face seems to occur even if one stream of information is missing. For example, if subjects only hear …
Electro-oculogram (EOG) measurements were obtained on seven patients with moderately extensive lesions of fundus flavimaculatus. Each patient was tested by two different methods of measuring EOG light-peak to dark-trough ratios. The results show that erroneously low ratios can be observed unless a diffusing sphere is employed in the determination of …
The human voice is the primary carrier of speech but also a fingerprint for person identity. Previous neuroimaging studies have revealed that speech and identity recognition are accomplished by partially different neural pathways, despite the perceptual unity of the vocal sound. Importantly, the right STS has been implicated in voice processing, with …
The physiologic correlates of ejection sounds have been studied by simultaneous phonocardiograms, echocardiograms and high fidelity pressure tracings. Ejection sounds associated with semilunar valve stenosis or hypertension of the systemic or pulmonary circulation occur at the moment of complete opening of the aortic or pulmonary valve recorded …
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis …
Our different senses enable us to capture qualitatively different aspects of our surroundings. The temporal dynamics of input from different modalities provide important cues for the unified perception of a multisensory event. There is evidence from cat visual cortex indicating that LFP power locks to the temporal profile of a visual stimulus [11]. Does …
Face and voice of a person are strongly associated with each other and usually perceived as a single entity. Despite the natural co-occurrence of faces and voices, brain research has traditionally approached their perception from a unisensory perspective. This means that research into face perception has exclusively focused on the visual system, while …