For all but the most profoundly hearing-impaired (HI) individuals, auditory-visual (AV) speech has consistently been shown to afford more accurate recognition than auditory (A) or visual (V) speech alone. However, the amount of AV benefit achieved (i.e., the superiority of AV performance relative to unimodal performance) can differ widely across HI …
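In this literature, a common way to quantify the AV benefit mentioned here is to normalize the AV gain by the headroom above the auditory-alone score, in the style of Sumby and Pollack (1954). A minimal sketch, assuming proportion-correct scores; the snippet above does not say which benefit measure the study used:

```python
def relative_av_benefit(av: float, a: float) -> float:
    """Relative AV benefit: AV gain over the auditory-alone score,
    normalized by the room left for improvement (Sumby & Pollack
    style). Scores are proportions correct in [0, 1]. Illustrative
    only; the study above may define benefit differently.
    """
    if not (0.0 <= a < 1.0 and 0.0 <= av <= 1.0):
        raise ValueError("scores must be proportions, with a < 1")
    return (av - a) / (1.0 - a)

# Example: A-alone = 0.40, AV = 0.70 -> relative benefit 0.50
print(relative_av_benefit(av=0.70, a=0.40))  # 0.5
```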
Factors leading to variability in auditory-visual (AV) speech recognition include the subject's ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables …
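To make "integration of A and V cues" concrete, one widely used benchmark is the fuzzy logical model of perception (FLMP), which predicts AV identification by multiplying the unimodal response probabilities for each category and renormalizing. An illustrative sketch only; the study may have used a different integration model (e.g., Braida's prelabeling model):

```python
import numpy as np

def flmp_av(p_a: np.ndarray, p_v: np.ndarray) -> np.ndarray:
    """FLMP prediction of AV identification: per-category support is
    the product of the unimodal probabilities, renormalized to sum
    to 1. p_a and p_v are unimodal response distributions over the
    same consonant categories. Illustrative benchmark only.
    """
    support = p_a * p_v
    return support / support.sum()

# Hypothetical unimodal response distributions for one medial consonant:
p_a = np.array([0.6, 0.3, 0.1])  # auditory-alone
p_v = np.array([0.5, 0.1, 0.4])  # visual-alone
print(flmp_av(p_a, p_v))         # -> approx. [0.81, 0.08, 0.11]
```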
Estimates of the ability to make use of sentence context in 34 postlingually hearing-impaired (HI) individuals were obtained using formulas developed by Boothroyd and Nittrouer [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84, 101-114 (1988)], which relate scores for isolated words to scores for words in meaningful sentences. Sentence materials were constructed by …
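The Boothroyd and Nittrouer formulas referenced here relate a score for words in isolation, p_i, to a score for the same words in meaningful sentences, p_s, via p_s = 1 - (1 - p_i)^k, where k quantifies how effectively the listener uses sentence context (k = 1 means no benefit). A minimal sketch of solving for k from a pair of scores; the study's exact fitting procedure may differ:

```python
import math

def k_factor(p_isolated: float, p_sentence: float) -> float:
    """Context factor k from Boothroyd & Nittrouer (1988):
    p_sentence = 1 - (1 - p_isolated)**k, so
    k = log(1 - p_sentence) / log(1 - p_isolated).
    Scores are proportions strictly between 0 and 1.
    """
    if not (0.0 < p_isolated < 1.0 and 0.0 < p_sentence < 1.0):
        raise ValueError("scores must lie strictly between 0 and 1")
    return math.log(1.0 - p_sentence) / math.log(1.0 - p_isolated)

# Example: 50% of words correct in isolation, 80% in sentences
print(k_factor(0.50, 0.80))  # ~2.32: substantial benefit from context
```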
Classic accounts of the benefits of speechreading to speech recognition treat auditory and visual channels as independent sources of information that are integrated fairly early in the speech perception process. The primary question addressed in this study was whether visible movements of the speech articulators could be used to improve the detection of …
OBJECTIVE: Measures of listening effort can provide a useful complement to measures of listening performance. The purpose of the present study was to measure the effort required of hearing-impaired subjects when they listen to speech. METHOD: Our subjects performed two tasks simultaneously: a speech task, which took the form of listening to connected …
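Dual-task designs like the one sketched here typically index listening effort by how much a secondary task slows when performed alongside the speech task. The metric below is a generic proportional dual-task cost, shown purely for illustration; the snippet does not specify the effort measure the study actually reports:

```python
def dual_task_cost(rt_single_ms: float, rt_dual_ms: float) -> float:
    """Generic listening-effort index: proportional slowing of
    secondary-task reaction time under concurrent speech load.
    Hypothetical metric for illustration only.
    """
    if rt_single_ms <= 0:
        raise ValueError("baseline reaction time must be positive")
    return (rt_dual_ms - rt_single_ms) / rt_single_ms

# Example: 450 ms alone vs. 585 ms while listening to speech
print(dual_task_cost(450.0, 585.0))  # 0.3 -> 30% slower under load
```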
This study examined the perceptual processing of time-gated auditory-visual (AV), auditory (A), and visual (V) spoken words. The primary goal was to assess the extent to which stimulus information versus perceptual processing limitations underlie modality-related differences in perceptual encoding speed for AV, A, and V spoken word recognition. Another goal was …
Much recent research on acoustic cues for consonants' places of articulation has focused on the nature of the rapid spectral changes that take place between signal portions corresponding to consonantal closure and adjacent vowels. The study reported here builds on the foundation laid by earlier studies that have explored techniques for representing …
OBJECTIVE: This study examined relationships among sound level, subjective loudness, and reaction time in listeners with longstanding sensorineural hearing loss and loudness recruitment. DESIGN: A simple reaction-time test was performed by 10 hearing-impaired (HI) subjects with varying degrees of recruitment and by 10 normal-hearing (NH) control subjects. …