During acoustic communication among human beings, emotional information can be expressed both by the propositional content of verbal utterances and by the modulation of speech melody (affective prosody). It is well established that linguistic processing is bound predominantly to the left hemisphere of the brain. By contrast, the encoding of emotional …
In addition to the propositional content of verbal utterances, significant linguistic and emotional information is conveyed by the tone of speech. To differentiate brain regions subserving processing of linguistic and affective aspects of intonation, discrimination of sentences differing in linguistic accentuation and emotional expressiveness was evaluated …
BACKGROUND There are few data on the cerebral organization of motor aspects of speech production and the pathomechanisms of dysarthric deficits subsequent to brain lesions and diseases. The authors used fMRI to further examine the neural basis of speech motor control. METHODS AND RESULTS In eight healthy volunteers, fMRI was performed during syllable …
The central auditory system of the human brain uses a variety of mechanisms to analyze auditory scenes, among others, preattentive detection of sudden changes in the sound environment. Electroencephalography (EEG) and magnetoencephalography (MEG) provide a means to monitor neuronal cortical currents. The mismatch negativity (MMN) or its magnetic counterpart (MMNm) reflect …
Auditory pattern changes have been shown to elicit increases in magnetoencephalographic gamma-band activity (GBA) over left inferior frontal cortex, forming part of the putative auditory ventral "what" processing stream. The present study employed a McGurk-type paradigm to assess whether GBA would be associated with subjectively perceived changes even when …
Voiced and unvoiced sounds, characterized by a periodic or aperiodic acoustic structure, respectively, represent two basic information-bearing elements of the speech signal. Using whole-head magnetoencephalography (MEG), magnetic fields (M50/M100) in response to synthetic vowel-like as well as noise-like signals matched in spectral envelope were recorded in …
In eight patients with a purely ataxic syndrome due to cerebellar atrophy, the voice onset time (VOT) of word-initial stop consonants was measured from the acoustic signal. The subjects had been asked to produce sentence utterances including one of the German minimal-pair cognates "Daten" (/daten/, "data") and "Taten" (/taten/, "deeds"). In addition, a …
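Voice onset time is the interval between the release burst of a stop consonant and the onset of voicing. As an illustration of how such a measure can be extracted from the acoustic signal, here is a minimal sketch; the function name, the energy threshold, the frame size, and the autocorrelation criterion for voicing are all our own illustrative assumptions, not the study's protocol:

```python
import numpy as np

def estimate_vot(signal, sr, frame_ms=5.0):
    """Rough VOT estimate: time from burst onset (first high-energy
    frame) to voicing onset (first frame whose short-lag autocorrelation
    shows strong periodicity in the 75-300 Hz pitch range).
    All thresholds are illustrative, not tuned values."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    # Frame-wise energy; burst = first frame clearly above silence.
    energies = np.array([np.sum(signal[i*frame:(i+1)*frame]**2)
                         for i in range(n)])
    burst = int(np.argmax(energies > 0.1 * energies.max()))
    lo, hi = int(sr / 300), int(sr / 75)  # candidate pitch lags
    for i in range(burst, n):
        seg = signal[i*frame:(i+1)*frame + hi]
        if len(seg) < frame + hi:
            break
        seg = seg - seg.mean()
        denom = np.sum(seg[:frame]**2)
        if denom < 1e-12:
            continue  # silent frame, skip
        ac = max(np.sum(seg[:frame] * seg[lag:lag+frame]) / denom
                 for lag in range(lo, hi))
        if ac > 0.6:  # strongly periodic -> voicing has begun
            return (i - burst) * frame_ms / 1000.0  # VOT in seconds
    return None  # no voicing found
```

On a synthetic token (silence, then a 30 ms noise burst, then a periodic vowel-like tone) this returns roughly 0.03 s; real speech would need more robust burst and pitch detection.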
The neural mechanisms of auditory distance perception, a function of great biological importance, are poorly understood. Where not overruled by conflicting factors such as echoes or visual input, sound intensity is perceived as conveying distance information. We recorded neuromagnetic responses to amplitude variations over both supratemporal planes, with …
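The physical basis of the intensity cue is the inverse-square law: for an idealized point source in a free field, intensity falls as 1/r², so sound pressure level drops by about 6 dB per doubling of distance. A minimal sketch (the helper name is ours; echoes and air absorption, the "conflicting factors" the study mentions, are ignored here):

```python
import math

def free_field_level_change_db(r1, r2):
    """Level change in dB when a listener moves from distance r1 to r2
    from an idealized point source in a free field: intensity ~ 1/r^2,
    hence delta-L = -20 * log10(r2 / r1) dB (hypothetical helper)."""
    return -20.0 * math.log10(r2 / r1)
```

For example, moving from 1 m to 2 m gives about -6.02 dB, so a quieter sound is plausibly interpreted as a more distant one.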
In less than three decades, the concept “cerebellar neurocognition” has evolved from a mere afterthought to an entirely new and multifaceted area of neuroscientific research. A close interplay between three main strands of contemporary neuroscience induced a substantial modification of the traditional view of the cerebellum as a mere coordinator of …
Keele and Ivry (1991) considered the cerebellum an "internal clock" responsible for temporal computations both in the motor and in the perceptual domain. These authors, therefore, expected that the processing of durational parameters of the perceived acoustic speech signal, such as voice onset time (VOT), depends upon the cerebellum as well. However, a …