Jeroen J. Stekelenburg

The present study investigated the neural correlates of perceiving human bodies. Focussing on the N170 as an index of structural encoding, we recorded event-related potentials (ERPs) to images of bodies and faces (either neutral or expressing fear) and objects, while subjects viewed the stimuli presented either upright or inverted. The N170 was enhanced and …
A question that has emerged over recent years is whether audiovisual (AV) speech perception is a special case of multi-sensory perception. Electrophysiological (ERP) studies have found that auditory neural activity (the N1 component of the ERP) induced by speech is suppressed and speeded up when a speech sound is accompanied by concordant lip movements. In …
The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech-specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable.
Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read-induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to …
Some elementary aspects of faces can be processed before cortical maturation or after lesion of primary visual cortex. Recent findings suggesting a role of an evolutionarily ancient visual system in face processing have exploited the relative advantage of the temporal hemifield (nasal hemiretina). Here, we investigated whether under some circumstances face …
We investigated whether the interpretation of auditory stimuli as speech or non-speech affects audiovisual (AV) speech integration at the neural level. Perceptually ambiguous sine-wave replicas (SWS) of natural speech were presented to listeners who were either in 'speech mode' or 'non-speech mode'. At the behavioral level, incongruent lip-read information …
BACKGROUND: In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound …
The ventriloquist illusion arises when sounds are mislocated towards a synchronous but spatially discrepant visual event. Here, we investigated the ventriloquist illusion at a neurophysiological level. The question was whether an illusory shift in sound location was reflected in the auditory mismatch negativity (MMN). An 'oddball' paradigm was used whereby …
The authors examined how principles of auditory grouping relate to intersensory pairing. Two sounds that normally enhance sensitivity on a visual temporal order judgement task (i.e., temporal ventriloquism) were embedded in a sequence of flanker sounds which either had the same or different frequency (Exp. 1), rhythm (Exp. 2), or location (Exp. 3). In all …
Observing facial expressions automatically prompts imitation, as can be seen with facial electromyography. To investigate whether this reaction is driven by automatic mimicry or by recognition of the emotion displayed, we recorded electromyographic responses to presentations of facial expressions, face-voice combinations, and bodily expressions, which resulted …