OBJECTIVE To reduce stimulus transduction artifacts in EEG while using insert earphones. DESIGN Reference Equivalent Threshold SPLs were assessed for Etymotic ER-4B earphones in 15 volunteers. Auditory brainstem responses (ABRs) and middle latency responses (MLRs)—as well as long-duration complex ABRs—to click and /dα/ speech stimuli were recorded in a…
The brain uses context and prior knowledge to repair degraded sensory inputs and improve perception. For example, listeners hear speech continuing uninterrupted through brief noises, even if the speech signal is artificially removed from the noisy epochs. In a functional MRI study, we show that this temporal filling-in process is based on two dissociable…
In noisy environments, listeners tend to hear a speaker's voice yet struggle to understand what is said. The most effective way to improve intelligibility in such conditions is to watch the speaker's mouth movements. Here we identify the neural networks that distinguish understanding from merely hearing speech, and determine how the brain applies visual…
The human brain uses acoustic cues to decompose complex auditory scenes into their components. For instance, to improve communication, a listener can select an individual "stream," such as a talker in a crowded room, based on cues such as pitch or location. Despite numerous investigations into auditory streaming, few have demonstrated clear correlates of…
BACKGROUND Segregating auditory scenes into distinct objects or streams is one of our brain's greatest perceptual challenges. Streaming has classically been studied with bistable sound stimuli, perceived alternately as a single group or two separate groups. Throughout the last decade, different methodologies have yielded inconsistent evidence about the role…
Locating sounds in realistic scenes is challenging because of distracting echoes and coarse spatial acoustic estimates. Fortunately, listeners can improve performance through several compensatory mechanisms. For instance, their brains perceptually suppress short-latency (1–10 ms) echoes by constructing a representation of the acoustic environment in a…
Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends…
Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with…
PRÉCIS This technical note documents three approaches to the problem of stimulus transduction artifact during EEG measurement. We use insert earphones (Etymotic ER-4B) for reproduction of high-frequency, high-fidelity acoustical information, and a non-specialized digital EEG acquisition system. Such standard EEG equipment has considerable…