Acoustic sequences such as speech and music are generally perceived as coherent auditory "streams," which can be individually attended to and followed over time. Although the psychophysical stimulus parameters governing this "auditory streaming" are well established, the brain mechanisms underlying the formation of auditory streams remain largely unknown.
Often, the sound arriving at the ears is a mixture from many different sources, but only one is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent evidence…
Existing cochlear implants stimulate the auditory nerve with trains of symmetric biphasic (BP) pulses. Recent data have shown that modifying the pulse shape, while maintaining charge balance, may be beneficial in terms of reducing power consumption, increasing dynamic range, and limiting channel interactions. We measured thresholds and most comfortable…
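To make the charge-balance constraint concrete, here is a minimal numpy sketch contrasting a symmetric biphasic pulse with a hypothetical asymmetric alternative whose short high-amplitude phase is offset by a longer, lower-amplitude phase of opposite polarity; the amplitudes and phase durations are illustrative, not those used in the study.

import numpy as np

def biphasic_pulse(amp_ua=100.0, phase_us=50.0, dt_us=1.0):
    # Symmetric biphasic (BP) pulse: anodic then cathodic phase of equal
    # amplitude and duration, so the net injected charge is zero.
    n = int(phase_us / dt_us)
    return np.concatenate([np.full(n, amp_ua), np.full(n, -amp_ua)])

def asymmetric_pulse(amp_ua=100.0, short_us=25.0, ratio=4, dt_us=1.0):
    # Modified pulse shape that stays charge-balanced: a short high-amplitude
    # phase followed by an opposite-polarity phase that is `ratio` times longer
    # but `ratio` times lower in amplitude (amplitude x duration is preserved).
    n_short = int(short_us / dt_us)
    return np.concatenate([np.full(n_short, amp_ua),
                           np.full(n_short * ratio, -amp_ua / ratio)])

dt_us = 1.0
for pulse in (biphasic_pulse(dt_us=dt_us), asymmetric_pulse(dt_us=dt_us)):
    print(f"net charge = {np.sum(pulse) * dt_us:.1f} uA*us")  # ~0 when balanced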
A common finding in the cochlear implant literature is that the upper limit of rate discrimination on a single channel is about 300 pps. The present study investigated rate discrimination using a procedure in which, in each block of two-interval trials, the standard could have one of the five baseline rates (100, 200, 300, 400, and 500 pps) and the signal…
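The abstract is cut off before the signal rates are given, but the roved-standard, two-interval structure it describes can be sketched as follows; the rule used here for choosing the signal rate (a fixed multiple of the standard) is a placeholder assumption.

import random

BASELINE_RATES_PPS = [100, 200, 300, 400, 500]  # standard rates listed in the abstract
SIGNAL_RATIO = 1.35  # hypothetical: signal rate as a multiple of the standard

def make_trial():
    # One two-interval trial: the standard rate is drawn at random from the
    # baseline set, and standard and signal are presented in random order.
    standard = random.choice(BASELINE_RATES_PPS)
    signal = standard * SIGNAL_RATIO
    intervals = [standard, signal]
    random.shuffle(intervals)
    return intervals, intervals.index(signal) + 1  # 1-based correct interval

block = [make_trial() for _ in range(20)]  # a block mixes all five standards
for intervals, answer in block[:3]:
    print(f"rates (pps): {intervals}, signal is interval {answer}")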
A striking feature of human perception is that our subjective experience depends not only on sensory information from the environment but also on our prior knowledge or expectations. The precise mechanisms by which sensory information and prior knowledge are integrated remain unclear, with longstanding disagreement concerning whether integration is strictly…
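One standard way to formalize such integration, offered here purely as an illustration and not as the model tested in the study, is Bayesian reliability weighting: with a Gaussian prior and a Gaussian sensory likelihood, the resulting estimate is a precision-weighted average of the two, as in the short sketch below (all numbers arbitrary).

# Precision-weighted (Bayesian) combination of prior knowledge with a noisy
# sensory measurement; illustrative values only.
mu_prior, sigma_prior = 0.0, 2.0      # prior expectation and its uncertainty
x_sensory, sigma_sensory = 3.0, 1.0   # sensory measurement and its noise

w_prior, w_sensory = sigma_prior ** -2, sigma_sensory ** -2
posterior_mean = (w_prior * mu_prior + w_sensory * x_sensory) / (w_prior + w_sensory)
posterior_sd = (w_prior + w_sensory) ** -0.5
print(f"posterior mean = {posterior_mean:.2f}, sd = {posterior_sd:.2f}")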
Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a…
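Noise vocoding itself is a standard manipulation: the speech waveform is filtered into a small number of frequency bands, the temporal envelope of each band is extracted, and each envelope modulates noise filtered into the same band before the bands are recombined. The scipy sketch below illustrates the idea; the band count, band edges, filter orders, and envelope cutoff are illustrative choices, not the parameters of this study.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=5000.0, env_cut=30.0):
    # Split speech into log-spaced bands, extract each band's envelope, and use
    # it to modulate band-limited noise, removing spectral detail within bands.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    env_sos = butter(4, env_cut, btype="low", fs=fs, output="sos")
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        envelope = sosfiltfilt(env_sos, np.abs(sosfiltfilt(band_sos, speech)))
        out += np.clip(envelope, 0.0, None) * sosfiltfilt(band_sos, noise)
    return out / (np.max(np.abs(out)) + 1e-12)

# Example on a synthetic amplitude-modulated tone standing in for speech:
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech, fs)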
Nearly 100,000 deaf patients worldwide have had their hearing restored by a cochlear implant (CI) fitted to one ear. However, although many patients understand speech well in quiet, even the most successful experience difficulty in noisy situations. In contrast, normal-hearing (NH) listeners achieve improved speech understanding in noise by processing the…
Speech recognition in noise improves with combined acoustic and electric stimulation compared to electric stimulation alone [Kong et al., J. Acoust. Soc. Am. 117, 1351-1361 (2005)]. Here the contribution of fundamental frequency (F0) and low-frequency phonetic cues to speech recognition in combined hearing was investigated. Normal-hearing listeners heard…
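The abstract is cut off before the listening conditions are described; a common way to approximate combined hearing in normal-hearing listeners, offered here only as an assumed illustration, is to pair low-pass-filtered speech (standing in for residual acoustic hearing and its F0 and low-frequency phonetic cues) with vocoded speech (standing in for electric hearing), reusing the noise_vocode helper sketched above. The cutoff frequency is arbitrary.

from scipy.signal import butter, sosfiltfilt

def simulate_combined_hearing(speech, fs, acoustic_cutoff_hz=500.0):
    # Hypothetical simulation: the low-pass branch carries F0 and low-frequency
    # phonetic cues, the vocoded branch approximates electric stimulation.
    lp_sos = butter(4, acoustic_cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(lp_sos, speech) + noise_vocode(speech, fs)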
Three experiments studied auditory streaming using sequences of alternating "ABA" triplets, where "A" and "B" were 50-ms tones differing in frequency by Δf semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better when the delay…
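The stimulus specification quoted above is concrete enough to sketch directly: 50-ms A and B tones, B placed Δf semitones above A, with 75-ms gaps between tones. The A-tone frequency, onset/offset ramps, and the treatment of the interval between successive triplets are not stated above and are assumptions in the numpy sketch below.

import numpy as np

def tone(freq_hz, dur_ms, fs, ramp_ms=5.0):
    # Pure tone with raised-cosine onset/offset ramps (ramp length assumed).
    t = np.arange(int(fs * dur_ms / 1000)) / fs
    y = np.sin(2 * np.pi * freq_hz * t)
    n = int(fs * ramp_ms / 1000)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    y[:n] *= ramp
    y[-n:] *= ramp[::-1]
    return y

def aba_sequence(delta_f_semitones, n_triplets=10, f_a=1000.0, fs=44100,
                 tone_ms=50.0, gap_ms=75.0):
    # Alternating ABA triplets: 50-ms A and B tones separated by 75-ms gaps,
    # with B set delta_f semitones above A (the A frequency is an assumption).
    f_b = f_a * 2 ** (delta_f_semitones / 12)
    gap = np.zeros(int(fs * gap_ms / 1000))
    a, b = tone(f_a, tone_ms, fs), tone(f_b, tone_ms, fs)
    return np.tile(np.concatenate([a, gap, b, gap, a, gap]), n_triplets)

sequence = aba_sequence(delta_f_semitones=6)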
A phenomenological dual-process model of the electrically stimulated human auditory nerve is presented and compared to threshold and loudness data from cochlear implant users. The auditory nerve is modeled as two parallel processes derived from linearized equations of conductance-based models. The first process is an integrator, which dominates stimulation…
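As an illustration of the integrator idea only, and not of the authors' fitted model (whose second process is cut off above), here is a minimal discretized leaky integrator driven by a train of symmetric biphasic pulses; the time constant, pulse parameters, and input scaling are all assumptions.

import numpy as np

def leaky_integrator(input_ua, dt_ms=0.01, tau_ms=0.5):
    # Discretized leaky integrator dV/dt = -V/tau + I(t): charge accumulates
    # within a pulse and decays between pulses (tau is an assumed value).
    v = np.zeros(len(input_ua))
    decay = np.exp(-dt_ms / tau_ms)
    for i in range(1, len(input_ua)):
        v[i] = v[i - 1] * decay + input_ua[i] * dt_ms
    return v

dt_ms = 0.01
t = np.arange(0, 10, dt_ms)              # 10 ms of stimulation
train = np.zeros_like(t)
period = int(1.0 / dt_ms)                # 1-ms pulse period -> 1000 pps
phase = int(0.05 / dt_ms)                # 50-us phases
for onset in range(0, len(train) - 2 * phase, period):
    train[onset:onset + phase] = 100.0               # anodic phase (uA)
    train[onset + phase:onset + 2 * phase] = -100.0  # cathodic phase
print(f"peak integrator output: {leaky_integrator(train, dt_ms).max():.2f}")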