This paper presents data concerning auditory evoked responses in the middle latency range (wave Pam/Pa) and slow latency range (wave N1m/N1) recorded from 12 subjects. It is the first group study to report multi-channel data of both MEG and EEG recordings from the human auditory cortex. The experimental procedure involved potential and current density …
A central issue in speech recognition is how contrastive phonemic information is stored in the mental lexicon. The conventional view assumes that this information is closely related to acoustic properties of speech. Considering that no word is ever pronounced alike twice and that the brain has limited capacities to manage information, an opposing view …
Transient and steady-state auditory evoked fields (AEFs) to brief tone pips were recorded over the left hemisphere at 7 different stimulus rates (0.125-39 Hz) using a 37-channel biomagnetometer. Previous observations of transient auditory gamma band response (GBR) activity were replicated. Similar rate characteristics and equivalent dipole locations …
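As a rough illustration of how a steady-state response amplitude at the stimulation rate could be read off an averaged epoch, the sketch below computes the amplitude spectrum of a synthetic signal and picks the FFT bin nearest the stimulus rate. The sampling rate, epoch length, and signal values are assumptions for illustration, not parameters or data from the study.

    import numpy as np

    # Minimal sketch (assumed parameters, synthetic data): estimate the
    # steady-state response amplitude at the stimulation rate from an
    # averaged epoch by reading the FFT bin nearest that rate.
    fs = 600.0                        # sampling rate in Hz (assumption)
    stim_rate = 39.0                  # one of the stimulus rates cited above
    t = np.arange(0, 2.0, 1.0 / fs)   # 2 s averaged epoch (assumption)

    rng = np.random.default_rng(0)
    # Synthetic "averaged response": a 39 Hz component plus noise, in tesla.
    x = 20e-15 * np.sin(2 * np.pi * stim_rate * t) + 5e-15 * rng.standard_normal(t.size)

    win = np.hanning(t.size)
    spectrum = np.fft.rfft(x * win)
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

    k = np.argmin(np.abs(freqs - stim_rate))            # bin nearest the stimulus rate
    amplitude = 2.0 * np.abs(spectrum[k]) / win.sum()   # window-corrected amplitude
    print(f"Steady-state amplitude near {freqs[k]:.1f} Hz: {amplitude:.2e} T")

In practice this would be applied per channel to the measured AEF average rather than to synthetic data.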
Sustained magnetic and electric brain waves may reflect linguistic processing when elicited by auditory speech stimuli. In the present study, sensitivity of brain responses to features of speech was demonstrated only in the latency interval subsequent to the N1m/N1. We conclude this from studying the auditory-evoked magnetic field (AEF) and the …
We studied neuromagnetic correlates of the processing of the German vowels [a], [e] and [i]. The aim was (i) to show an influence of acoustic/phonetic features on timing and mapping of the N100m component and (ii) to demonstrate the retest reliability of these parameters. To assess the spatial configuration of the N100m generators, Euclidean distances between …
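To make the distance measure concrete, here is a minimal sketch of pairwise Euclidean distances between vowel-specific equivalent dipole locations. The coordinates are hypothetical placeholders (head-frame coordinates in mm), not values from the study.

    from itertools import combinations
    import numpy as np

    # Hypothetical N100m dipole locations per vowel (head frame, mm);
    # the numbers are placeholders for illustration only.
    dipoles = {
        "[a]": np.array([48.0, 12.0, 55.0]),
        "[e]": np.array([46.0, 18.0, 58.0]),
        "[i]": np.array([44.0, 22.0, 60.0]),
    }

    # Pairwise Euclidean distances between the vowel-specific dipoles.
    for (v1, p1), (v2, p2) in combinations(dipoles.items(), 2):
        print(f"{v1}-{v2} distance: {np.linalg.norm(p1 - p2):.1f} mm")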
BACKGROUND The speech signal contains information about both phonological features, such as place of articulation, and non-phonological features, such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated as they may occur in parallel but within …
This study further elucidates determinants of vowel perception in the human auditory cortex. The vowel inventory of a given language can be classified on the basis of phonological features which are closely linked to acoustic properties. A cortical representation of speech sounds based on these phonological features might explain the surprisingly inverse …
A sound lasting for several seconds is known to elicit a baseline shift in electrical and magnetic records. We have studied the dependence of the magnetic field distribution of this "per-stimulatory" sustained field (SF) on tone frequency. Tone bursts of 2 sec duration and 60 dB nHL intensity were presented to 11 subjects at varying interstimulus intervals …
How is it that the human brain is capable of making sense of speech under many acoustically compromised conditions? Support from top-down knowledge is certainly involved, but can we identify brain measures of this matching process between degraded auditory input and possible meaning? To answer these questions, the present study investigated the modulation …
How does the mental lexicon cope with phonetic variants in recognition of spoken words? Using a lexical decision task with and without fragment priming, the authors compared the processing of German words and pseudowords that differed only in the place of articulation of the initial consonant (place). Across both experiments, event-related brain potentials …