The human ability to continuously track dynamic environmental stimuli, in particular speech, is proposed to profit from "entrainment" of endogenous neural oscillations, which involves phase reorganization such that "optimal" phase comes into line with temporally expected critical events, resulting in improved processing. The current experiment goes beyond …
In a recent "Perspective" article, Giraud and Poeppel (2012) lay out in admirable clarity how neural oscillations and, in particular, nested oscillations at different time scales, might enable the human brain to understand speech. They provide compelling evidence for "enslaving" of ongoing neural oscillations by slow fluctuations in …
How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a "beat" (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using …
Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal …
Modality effects in rhythm processing were examined using a tempo judgment paradigm, in which participants made speeding-up or slowing-down judgments for auditory and visual sequences. A key element of stimulus construction was that the expected pattern of tempo judgments for critical test stimuli depended on a beat-based encoding of the sequence. A …
Our sensory environment is teeming with complex rhythmic structure, to which neural oscillations can become synchronized. Neural synchronization to environmental rhythms (entrainment) is hypothesized to shape human perception, as rhythmic structure acts to temporally organize cortical excitability. In the current human electroencephalography study, we …
Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined temporal features. The current study aimed to identify and characterize the neural network specifically involved in this feature-selective attention to time.
Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesised that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical …
Enhanced alpha power compared with a baseline can reflect states of increased cognitive load, for example, when listening to speech in noise. Can knowledge about "when" to listen (temporal expectations) potentially counteract cognitive load and concomitantly reduce alpha? The current magnetoencephalography (MEG) experiment induced cognitive load using an …
This article extends an imputed pitch velocity model of the auditory kappa effect proposed by Henry and McAuley (2009a) to the auditory tau effect. Two experiments were conducted using an AXB design in which listeners judged the relative pitch of a middle target tone (X) in ascending and descending three-tone sequences. In Experiment 1, sequences were …