Charissa R. Lansing

The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration, which affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing aids and telephony, are expected to have the…
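For context on how RT is measured in practice, the sketch below estimates RT60 from a measured room impulse response using Schroeder backward integration, a standard textbook method; the function name and the T20 fitting range are illustrative choices, not taken from the paper.

```python
import numpy as np

def rt60_from_impulse_response(h, fs):
    """Estimate RT60 from a room impulse response via Schroeder
    backward integration (standard textbook method, shown only for
    illustration; assumes the response decays by at least 25 dB)."""
    energy = h.astype(float) ** 2
    # Schroeder integral: energy remaining after time t, in dB.
    edc = np.cumsum(energy[::-1])[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(h)) / fs
    # Fit the decay between -5 dB and -25 dB, extrapolate to -60 dB (T20).
    i5 = np.argmax(edc_db <= -5.0)
    i25 = np.argmax(edc_db <= -25.0)
    slope, _ = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)
    return -60.0 / slope  # seconds to decay by 60 dB
```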
Two experiments were conducted to test the hypothesis that visual information related to segmental versus prosodic aspects of speech is distributed differently on the face of the talker. In the first experiment, eye gaze was monitored for 12 observers with normal hearing. Participants made decisions about segmental and prosodic categories for utterances…
The goals of this study were to measure sensitivity to the direct-to-reverberant energy ratio (D/R) across a wide range of D/R values and to gain insight into which cues are used in the discrimination process. The main finding is that changes in D/R are discriminated primarily based on spectral cues. Temporal cues may be used but only when spectral cues are…
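As background, D/R can be computed directly from a room impulse response by splitting its energy at the direct-path arrival. The sketch below assumes a 2.5 ms direct-sound window, a common but here purely illustrative choice; none of the names come from the paper.

```python
import numpy as np

def direct_to_reverberant_db(h, fs, direct_ms=2.5):
    """Direct-to-reverberant energy ratio (D/R) from a room impulse
    response: energy in a short window around the direct-path arrival
    versus everything after it. The 2.5 ms window is an illustrative
    assumption, not a value from the cited study."""
    onset = int(np.argmax(np.abs(h)))            # direct-path arrival
    split = onset + int(direct_ms * 1e-3 * fs)
    direct = np.sum(h[:split].astype(float) ** 2)
    reverb = np.sum(h[split:].astype(float) ** 2)
    return 10.0 * np.log10(direct / reverb)
```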
This paper presents a two-microphone technique for localization of multiple sound sources. Its fundamental structure is adopted from a binaural signal-processing scheme employed in biological systems for the localization of sources using interaural time differences (ITD). The two input signals are transformed to the frequency domain and analyzed for…
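A common frequency-domain realization of two-microphone ITD estimation is generalized cross-correlation with phase transform (GCC-PHAT). The sketch below shows that standard method as a rough analogue of the frequency-domain ITD analysis described here; it is not the paper's actual binaural scheme, and the parameter names are assumptions.

```python
import numpy as np

def itd_gcc_phat(x_left, x_right, fs, max_tau=0.8e-3):
    """Estimate the interaural time difference between two microphone
    signals with GCC-PHAT, a standard frequency-domain ITD estimator
    (illustrative; not the specific scheme from the paper)."""
    n = len(x_left) + len(x_right)
    X = np.fft.rfft(x_left, n=n)
    Y = np.fft.rfft(x_right, n=n)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12               # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = int(max_tau * fs)
    # Re-center so lags run from -max_shift to +max_shift samples.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs  # seconds
```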
Extraction of a target sound source amidst multiple interfering sound sources is difficult when there are fewer sensors than sources, as is the case for human listeners in the classic cocktail-party situation. This study compares the signal extraction performance of five algorithms using recordings of speech sources made with three different two-microphone…
In this study, we investigated where people look on talkers' faces as they try to understand what is being said. Sixteen young adults with normal hearing and demonstrated average speechreading proficiency were evaluated under two modality presentation conditions: vision only versus vision plus low-intensity sound. They were scored for the number of words…
This paper describes algorithms for signal extraction for use as a front end in telecommunication devices, speech recognition systems, and hearing aids that operate in noisy environments. The development was based on independent, hypothesized theories of the computational mechanics of biological systems in which directional hearing is enabled…
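As a generic point of reference for two-sensor directional extraction, the sketch below implements a textbook delay-and-sum beamformer steered by a known ITD (which could come from an estimator like the GCC-PHAT sketch above); it is a classic building block shown only for illustration, not the biologically motivated algorithm the paper develops.

```python
import numpy as np

def delay_and_sum(x_left, x_right, fs, tau):
    """Steer a two-microphone pair toward a source with known ITD `tau`
    (seconds) by delaying one channel and averaging. A textbook
    delay-and-sum beamformer, shown as a generic illustration only."""
    n = len(x_left)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Apply a fractional-sample delay to the right channel in the
    # frequency domain, then average the time-aligned channels.
    X_right = np.fft.rfft(x_right) * np.exp(-2j * np.pi * freqs * tau)
    aligned_right = np.fft.irfft(X_right, n=n)
    return 0.5 * (x_left + aligned_right)
```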
The purpose of this pilot study was to investigate adult Ineraid and Nucleus cochlear implant (CI) users' perceptual accuracy for melodic and rhythmic patterns, and quality ratings for different musical instruments. Subjects were 18 postlingually deafened adults with CI experience. Evaluative measures included the Primary Measures of Music Audiation (PMMA)…
The present study used a new method to develop video sequences that limited exposure of facial movement. A repeated-measures design was used to investigate the visual recognition of 60 monosyllabic spoken words, presented in an open-set format, for two face exposure conditions (full-face vs. lips-plus-mandible). Twenty-six normal-hearing college students…
The audiologic performance of 54 postlingually deafened adults wearing cochlear implants was uniformly evaluated. The participants had 9 or more months of experience with one of five different cochlear prostheses (Los Angeles Single Channel (N = 11), Vienna Single Channel (N = 4), Melbourne Multichannel (N = 18), Utah Multichannel (N = 19), San Francisco…