The human ability to continuously track dynamic environmental stimuli, in particular speech, is proposed to profit from "entrainment" of endogenous neural oscillations, which involves phase reorganization such that "optimal" phase comes into line with temporally expected critical events, resulting in improved processing. The current experiment goes beyond …
In a recent "Perspective" article, Giraud and Poeppel (2012) lay out in admirable clarity how neural oscillations and, in particular, nested oscillations at different time scales, might enable the human brain to understand speech. They provide compelling evidence for "enslaving" of ongoing neural oscillations by slow fluctuations in …
Modality effects in rhythm processing were examined using a tempo judgment paradigm, in which participants made speeding-up or slowing-down judgments for auditory and visual sequences. A key element of stimulus construction was that the expected pattern of tempo judgments for critical test stimuli depended on a beat-based encoding of the sequence. …
Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current …
How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a "beat" (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using …
Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600 ms) in …
Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal …)
Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined temporal features. The current study aimed to identify and characterize the neural network specifically involved in this feature-selective attention to time. …
Our sensory environment is teeming with complex rhythmic structure, to which neural oscillations can become synchronized. Neural synchronization to environmental rhythms (entrainment) is hypothesized to shape human perception, as rhythmic structure acts to temporally organize cortical excitability. In the current human electroencephalography study, we …
Enhanced alpha power compared with a baseline can reflect states of increased cognitive load, for example, when listening to speech in noise. Can knowledge about "when" to listen (temporal expectations) potentially counteract cognitive load and concomitantly reduce alpha? The current magnetoencephalography (MEG) experiment induced cognitive load using an …