OBJECTIVES: Perception-in-noise deficits have been demonstrated across many populations and listening conditions. Many factors contribute to successful perception of auditory stimuli in noise, including neural encoding in the central auditory system. Physiological measures such as cortical auditory-evoked potentials (CAEPs) can provide a view of neural …
Tasks assessing perception of a phonemic contrast based on voice onset time (VOT) and a nonspeech analog of a VOT contrast using tone onset time (TOT) were administered to children (ages 7.5 to 15.9 years) identified as having reading disability (RD; n = 21), attention deficit/hyperactivity disorder (ADHD; n = 22), comorbid RD and ADHD (n = 26), or no …
Listeners with normal-hearing sensitivity recognize speech more accurately in the presence of fluctuating background sounds, such as a single competing voice, than in unmodulated noise at the same overall level. These performance differences are greatly reduced in listeners with hearing impairment, who generally receive little benefit from fluctuations in …
OBJECTIVE: To investigate the contributions of energetic and informational masking to neural encoding and perception in noise, using oddball discrimination and sentence recognition tasks. DESIGN: P3 auditory evoked potential, behavioral discrimination, and sentence recognition data were recorded in response to speech and tonal signals presented to nine …
The purpose of this study was to determine whether the perceived sensory dissonance of pairs of pure tones (PT dyads) or pairs of harmonic complex tones (HC dyads) is altered due to sensorineural hearing loss. Four normal-hearing (NH) and four hearing-impaired (HI) listeners judged the sensory dissonance of PT dyads geometrically centered at 500 and 2000 …
Event-related magnetic fields (ERFs) were recorded from the left hemisphere in nine normal volunteers in response to four consonant-vowel (CV) syllables varying in voice-onset time (VOT). CVs with VOT values of 0 and +20 ms were perceived as /ga/ and those with VOT values of +40 and +60 ms as /ka/. Results showed: (1) a displacement of the N1m peak …
Twelve male listeners categorized 54 synthetic vowel stimuli that varied in second and third formant frequency on a Bark scale into the American English vowel categories [see text]. A neuropsychologically plausible model of categorization in the visual domain, the Striatal Pattern Classifier (SPC; Ashby & Waldron, 1999), is generalized to the auditory …
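As a rough illustration of the stimulus representation this abstract refers to (not the SPC model itself), formant frequencies in Hz can be mapped to the Bark scale with Traunmüller's (1990) approximation. The sketch below uses hypothetical vowel prototypes and a simple nearest-prototype rule purely for illustration.

```python
# Hedged sketch: mapping formant frequencies (Hz) to the Bark scale, as used
# when vowel stimuli are described by F2/F3 in Bark. The conversion is
# Traunmueller's (1990) approximation; the prototypes and the
# nearest-prototype rule are illustrative assumptions, not the SPC model.

def hz_to_bark(f_hz: float) -> float:
    """Traunmueller (1990) Hz-to-Bark approximation."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# Hypothetical vowel prototypes in (F2, F3) Bark coordinates.
PROTOTYPES = {
    "front-ish": (hz_to_bark(2200.0), hz_to_bark(2900.0)),
    "back-ish":  (hz_to_bark(1000.0), hz_to_bark(2500.0)),
}

def classify(f2_hz: float, f3_hz: float) -> str:
    """Assign a stimulus to the nearest prototype in Bark space."""
    point = (hz_to_bark(f2_hz), hz_to_bark(f3_hz))
    return min(
        PROTOTYPES,
        key=lambda label: sum((p - q) ** 2 for p, q in zip(point, PROTOTYPES[label])),
    )

if __name__ == "__main__":
    print(round(hz_to_bark(500.0), 2))   # ~4.92 Bark
    print(classify(1800.0, 2700.0))      # nearest hypothetical prototype
```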
OBJECTIVE: Recent studies indicate that high-frequency amplification may provide little benefit for listeners with moderate-to-severe high-frequency hearing loss, and may even reduce speech recognition. Moore and colleagues have proposed a direct link between this lack of benefit and the presence of regions of nonfunctioning inner hair cells (dead regions) …
Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal …
There is a long-standing debate concerning the efficacy of formant-based versus whole spectrum models of vowel perception. Categorization data for a set of synthetic steady-state vowels were used to evaluate both types of models. The models tested included various combinations of formant frequencies and amplitudes, principal components derived from …
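A minimal sketch of the contrast the abstract draws between the two kinds of vowel representation, under stated assumptions: the random "spectra" below stand in for the synthetic vowel spectra, the selected bins stand in for formant measurements, and the principal-component projection is a generic dimensionality reduction, not the specific models evaluated in the study.

```python
# Hedged sketch: formant-style features vs. a whole-spectrum representation
# reduced with principal components. Placeholder data only.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 54 stimuli x 128 spectral bins (magnitude spectra).
spectra = rng.random((54, 128))

# Formant-style representation: a few hand-picked frequencies per stimulus
# (two arbitrary bins standing in for F2 and F3 here).
formant_features = spectra[:, [30, 60]]

# Whole-spectrum representation: scores on the leading principal components.
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
n_components = 3
pc_features = centered @ vt[:n_components].T   # 54 x 3 component scores

print(formant_features.shape, pc_features.shape)  # (54, 2) (54, 3)
```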