Jeroen J. Stekelenburg

Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read-induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to …
The present study investigated the neural correlates of perceiving human bodies. Focussing on the N170 as an index of structural encoding, we recorded event-related potentials (ERPs) to images of bodies and faces (either neutral or expressing fear) and objects, while subjects viewed the stimuli presented either upright or inverted. The N170 was enhanced and …
The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. …
A question that has emerged over recent years is whether audiovisual (AV) speech perception is a special case of multi-sensory perception. Electrophysiological (ERP) studies have found that auditory neural activity (N1 component of the ERP) induced by speech is suppressed and speeded up when a speech sound is accompanied by concordant lip movements. In …
BACKGROUND: In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound …
Observing facial expressions automatically prompts imitation, as can be seen with facial electromyography. To investigate whether this reaction is driven by automatic mimicry or by recognition of the emotion displayed, we recorded electromyographic responses to presentations of facial expressions, face-voice combinations and bodily expressions, which resulted …
The ventriloquist illusion arises when sounds are mislocated towards a synchronous but spatially discrepant visual event. Here, we investigated the ventriloquist illusion at a neurophysiological level. The question was whether an illusory shift in sound location was reflected in the auditory mismatch negativity (MMN). An 'oddball' paradigm was used whereby …
Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made temporal order …
Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in …
We receive emotional signals from different sources, including the face, the whole body, and the natural scene. Previous research has shown the importance of context provided by the whole body and the scene on the recognition of facial expressions. This study measured physiological responses to face-body-scene combinations. Participants freely viewed …