Robert P. Carlyon

Two pairs of experiments studied the effects of attention and of unilateral neglect on auditory streaming. The first pair showed that the build-up of auditory streaming in normal participants is greatly reduced or absent when they attend to a competing task in the contralateral ear. It was concluded that the effective build-up of streaming depends on …
Acoustic sequences such as speech and music are generally perceived as coherent auditory "streams," which can be individually attended to and followed over time. Although the psychophysical stimulus parameters governing this "auditory streaming" are well established, the brain mechanisms underlying the formation of auditory streams remain largely unknown. …
In everyday life we often listen to one sound, such as someone's voice, in a background of competing sounds. To do this, we must assign simultaneously occurring frequency components to the correct source, and organize sounds appropriately over time. The physical cues that we exploit to do so are well established; more recent research has focussed on the …
Fourteen twin pairs, aged 8 to 10 years, were tested 3 times over 12 months; they included 11 children with language impairment (LI), 11 control children matched on nonverbal ability and age, and 6 co-twins who did not meet criteria for LI or control status. Thresholds were estimated for detecting a brief backward-masked tone (BM), detection of frequency …
Often, the sound arriving at the ears is a mixture from many different sources, but only 1 is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent evidence …
A series of experiments investigated the influence of harmonic resolvability on the pitch of, and the discriminability of differences in fundamental frequency (F0) between, frequency-modulated (FM) harmonic complexes. Both F0 (62.5 to 250 Hz) and spectral region (LOW: 125-625 Hz, MID: 1375-1875 Hz, and HIGH: 3900-5400 Hz) were varied orthogonally. …
Speech recognition in noise improves with combined acoustic and electric stimulation compared to electric stimulation alone [Kong et al., J. Acoust. Soc. Am. 117, 1351-1361 (2005)]. Here the contribution of fundamental frequency (F0) and low-frequency phonetic cues to speech recognition in combined hearing was investigated. Normal-hearing listeners heard …
A striking feature of human perception is that our subjective experience depends not only on sensory information from the environment but also on our prior knowledge or expectations. The precise mechanisms by which sensory information and prior knowledge are integrated remain unclear, with longstanding disagreement concerning whether integration is strictly …
Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a …
Auditory streaming refers to the perceptual parsing of acoustic sequences into "streams", which makes it possible for a listener to follow the sounds from a given source amidst other sounds. Streaming is currently regarded as an important function of the auditory system in both humans and animals, crucial for survival in environments that typically contain …