Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of a listener's brain activity? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the …
The premise of cognitive therapy is that one can overcome the irresistible temptation of highly palatable foods by actively restructuring the way one thinks about food. To test this idea, participants in the present study were instructed to passively view foods, up-regulate food palatability thoughts, apply cognitive reappraisal (e.g., thinking about health …
In speech perception, extraction of meaning from complex streams of sounds is surprisingly fast and efficient. By tracking the neural time course of syllable processing with magnetoencephalography, we show that this continuous construction of meaning-based representations is aided by both top-down (context-based) expectations and bottom-up …
Recently, brain imaging evidence has indicated that letter/speech-sound integration, which is necessary for establishing fluent reading, takes place in auditory association areas and that this integration is influenced by the stimulus onset asynchrony (SOA) between the letter and the speech sound. In the present study, we used a specific ERP measure known for its automatic …
There is increasing interest in integrating electrophysiological and hemodynamic measures for characterizing spatial and temporal aspects of cortical processing. However, an informative combination of responses that have markedly different sensitivities to the underlying neural activity is not straightforward, especially in complex cognitive tasks. Here, …
Constructive mechanisms in the auditory system may restore a fragmented sound when a gap in the sound is rendered inaudible by noise, yielding a continuity illusion. Using combined psychoacoustic and electroencephalography experiments in humans, we found that the sensory-perceptual mechanisms that enable restoration suppress auditory cortical encoding of …
OBJECTIVE We investigated event-related potential (ERP) correlates of developmental changes in spoken word recognition during early school years. We focused on implicit processing of word onsets, as this may change considerably due to vocabulary growth and reading acquisition. METHODS Subjects were pre-schoolers (5-6 years), beginning readers (7-8 years) …
Speech and vocal sounds are at the core of human communication. Cortical processing of these sounds critically depends on behavioral demands. However, the neurocomputational mechanisms enabling this adaptive processing remain elusive. Here we examine the task-dependent reorganization of electroencephalographic responses to natural speech sounds (vowels /a/, …
Selective attention to relevant sound properties is essential for everyday listening situations. It enables the formation of different perceptual representations of the same acoustic input and forms the basis of flexible, goal-dependent behavior. Here, we investigated the role of the human auditory cortex in forming behavior-dependent representations of …
Pattern recognition algorithms are increasingly used in functional neuroimaging. These algorithms exploit information contained in temporal, spatial, or spatio-temporal patterns of independent variables (features) to detect subtle but reliable differences between brain responses to external stimuli or internal brain states. When applied to the …
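To make the pattern-recognition (decoding) approach described above more concrete, the following is a minimal sketch in Python: a linear classifier is trained on simulated trial-by-feature patterns (standing in for fMRI voxel or EEG channel features) and evaluated with cross-validation. The feature dimensions, labels, noise model, and the choice of a linear SVM via scikit-learn are illustrative assumptions, not the pipeline used in any of the studies listed here.

# Hypothetical decoding sketch: classify two simulated "brain states" from
# noisy multivariate feature patterns and estimate accuracy by cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features = 120, 500            # trials x features (e.g., voxels or channels)
labels = rng.integers(0, 2, n_trials)      # two hypothetical stimulus/state classes

# Simulated responses: Gaussian noise plus a small class-dependent pattern shift.
class_pattern = rng.normal(0.0, 0.5, n_features)
X = rng.normal(0.0, 1.0, (n_trials, n_features)) + np.outer(labels, class_pattern)

# Standardize features, then fit a linear classifier within each cross-validation fold.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")

In analyses of this kind, cross-validated accuracy reliably above chance level (0.5 for two balanced classes) is taken as evidence that the feature patterns carry information about the stimulus or brain state; the linear model here is only one of several classifiers such work might use.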