The present study had three aims: to examine the effects of displayed emotion and of message length on speech-reading performance, and to examine how measures of working memory (cf. Baddeley 1986) and verbal information-processing speed relate to speech-reading performance. Words and sentences with either positive or negative meaning were used in a word-decoding and a …
This study aimed to measure the initial portion of signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and noise, and to investigate the relationships between auditory and cognitive functions in silence and noise. Twenty-one university students were presented with auditory stimuli in a gating …
PURPOSE To study the role of visual perception of phonemes in visual perception of sentences and words among normal-hearing individuals. METHOD Twenty-four normal-hearing adults identified consonants, words, and sentences, spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, …
A natural and a synthetic face were compared with regard to speech-reading performance, in a visual and an audiovisual condition, and with three levels of contextual cueing, in an experiment with 90 normal-hearing subjects. Auditory presentation (speech in noise) served as a control condition. The results showed main effects for type of face, presentation …
In two experiments on visual speech-reading, with a total of 132 normal-hearing participants, the effects of displayed emotion and task specificity on speech-reading performance, on attitude toward the task, and on person impression were explored, as well as associations between speech-reading performance, attitude, and person impression. The results show …
This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation points (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the …
Discrimination of vowel duration was explored with regard to discrimination threshold, error bias, and effects of modality and consonant context. A total of 122 normal-hearing participants were presented with disyllabic-like items such as /lal-lal/ or /mam-mam/ in which the lengths of the vowels were systematically varied, and were asked to judge …
In the present study, the role of facial expressions in visual speech-reading (lipreading) was examined. Speech-reading was assessed by three different tests: sentence-based speech-reading, word decoding, and word discrimination. Twenty-seven individuals participated as subjects in the study. The results revealed no general improvement as a function of …
This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on identification of auditory speech stimuli (consonants, words, and final word in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the …
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) …