The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration, which affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing aids and telephony, are expected to have the …
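The abstract is truncated, but the central quantity (RT, usually reported as RT60, the time for sound energy to decay by 60 dB) can be illustrated. A minimal sketch using Schroeder backward integration with a T20 line fit, a standard estimation method and not necessarily the one used in this paper:

```python
import numpy as np

def estimate_rt60(ir, fs):
    """Estimate RT60 from a room impulse response via Schroeder
    backward integration: integrate the squared IR from the end,
    fit a line to the -5..-25 dB decay (T20), extrapolate to 60 dB."""
    edc = np.cumsum((ir ** 2)[::-1])[::-1]        # energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= -5) & (edc_db >= -25)       # T20 evaluation range
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second
    return -60.0 / slope

# Synthetic IR: decaying noise with a known RT60 of 0.5 s.
fs = 8000
rt60_true = 0.5
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
ir = rng.standard_normal(len(t)) * np.exp(-3 * np.log(10) * t / rt60_true)
print(round(estimate_rt60(ir, fs), 2))
```

The synthetic envelope is chosen so that energy falls by exactly 60 dB per `rt60_true` seconds, so the estimate should land close to 0.5 s.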
In this study, we investigated where people look on talkers' faces as they try to understand what is being said. Sixteen young adults with normal hearing and demonstrated average speechreading proficiency were evaluated under two modality presentation conditions: vision only versus vision plus low-intensity sound. They were scored for the number of words …
This paper presents a two-microphone technique for localization of multiple sound sources. Its fundamental structure is adopted from a binaural signal-processing scheme employed in biological systems for the localization of sources using interaural time differences (ITD). The two input signals are transformed to the frequency domain and analyzed for …
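The abstract stops mid-sentence, but a common way to estimate ITD in the frequency domain is generalized cross-correlation with phase transform (GCC-PHAT). A self-contained sketch of that standard estimator (not the paper's exact scheme):

```python
import numpy as np

def gcc_phat_delay(x, y, fs):
    """Estimate the delay of y relative to x (seconds) via GCC-PHAT:
    whiten the cross-spectrum so only phase remains, then locate the
    peak of the resulting cross-correlation."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    cross = Y * np.conj(X)
    cross /= np.abs(cross) + 1e-12          # phase transform (PHAT)
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(cc) - max_shift) / fs

# Simulate a binaural pair: one channel delayed by 8 samples (0.5 ms ITD).
fs = 16000
rng = np.random.default_rng(1)
s = rng.standard_normal(4096)
delay_samples = 8
x = s
y = np.concatenate((np.zeros(delay_samples), s))[:len(s)]
print(gcc_phat_delay(x, y, fs) * 1000)      # estimated ITD in ms
```

The PHAT weighting discards magnitude information, which makes the correlation peak sharp and relatively robust to reverberation.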
The goals of this study were to measure sensitivity to the direct-to-reverberant energy ratio (D/R) across a wide range of D/R values and to gain insight into which cues are used in the discrimination process. The main finding is that changes in D/R are discriminated primarily based on spectral cues. Temporal cues may be used but only when spectral cues are …
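For concreteness, D/R can be computed from a room impulse response by separating the direct-path energy (a short window around the IR onset) from the remaining reverberant energy. A minimal sketch; the 2.5 ms window is an illustrative choice, not a value from the study:

```python
import numpy as np

def direct_to_reverberant_db(ir, fs, direct_window_ms=2.5):
    """D/R in dB: energy in a short window around the direct-sound
    peak versus energy in the remainder of the impulse response."""
    onset = int(np.argmax(np.abs(ir)))
    half = int(fs * direct_window_ms / 1000)
    start = max(0, onset - half)
    end = onset + half
    direct = np.sum(ir[start:end] ** 2)
    reverberant = np.sum(ir[end:] ** 2)
    return 10 * np.log10(direct / reverberant)

# Synthetic IR: unit direct impulse plus a noise tail with energy 0.1,
# so the true D/R is 10 * log10(1 / 0.1) = 10 dB.
fs = 8000
rng = np.random.default_rng(2)
ir = np.zeros(fs)
ir[100] = 1.0                                # direct sound
tail = rng.standard_normal(fs - 200)
tail *= np.sqrt(0.1 / np.sum(tail ** 2))     # reverberant energy = 0.1
ir[200:] = tail
print(round(direct_to_reverberant_db(ir, fs), 1))  # prints 10.0
```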
Two experiments were conducted to test the hypothesis that visual information related to segmental versus prosodic aspects of speech is distributed differently on the face of the talker. In the first experiment, eye gaze was monitored for 12 observers with normal hearing. Participants made decisions about segmental and prosodic categories for utterances …
The purpose of this pilot study was to investigate adult Ineraid and Nucleus cochlear implant (CI) users' perceptual accuracy for melodic and rhythmic patterns, and quality ratings for different musical instruments. Subjects were 18 postlingually deafened adults with CI experience. Evaluative measures included the Primary Measures of Music Audiation (PMMA) …
This paper describes algorithms for signal extraction for use as a front end of telecommunication devices, speech recognition systems, and hearing aids that operate in noisy environments. The development was based on independent, hypothesized theories of the computational mechanics of biological systems in which directional hearing is enabled …
Extraction of a target sound source amidst multiple interfering sound sources is difficult when there are fewer sensors than sources, as is the case for human listeners in the classic cocktail-party situation. This study compares the signal extraction performance of five algorithms using recordings of speech sources made with three different two-microphone …
Although Central Institute for the Deaf (CID) W-1 stimuli are routinely used for speech recognition threshold (SRT) testing, they are not always familiar to new learners of English and often lead to erroneous assessments. To improve test accuracy, alternative stimuli were constructed by pairing familiar English digits. These digit pairs were used to measure …
OBJECTIVE: A pair of experiments investigated the hypothesis that bimodal (auditory-visual) speech presentation and expanded auditory bandwidth would improve speech intelligibility and increase working memory performance for older adults by reducing the cognitive effort needed for speech perception. BACKGROUND: Although telephone communication is important …