Björn Lidestam

PURPOSE: To study the role of visual perception of phonemes in visual perception of sentences and words among normal-hearing individuals. METHOD: Twenty-four normal-hearing adults identified consonants, words, and sentences, spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, …
OBJECTIVE: This case study tested the threshold hypothesis (Rönnberg et al., 1998), which states that superior speechreading skill is possible only if high-order cognitive functions, such as capacious verbal working memory, enable efficient strategies. DESIGN: A speechreading expert (AA) was tested on a number of speechreading and cognitive …
The present study had three aims: to examine the effects of displayed emotion and of message length on speech-reading performance, and to examine how measures of working memory (cf. Baddeley, 1986) and verbal information-processing speed relate to speech-reading performance. Words and sentences with either positive or negative meaning were used in a word-decoding and a …
OBJECTIVE: The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION, and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE: Participants were 200 hard-of-hearing …
A method is presented for creating and presenting video-recorded, synchronized audiovisual stimuli at a high frame rate, which would be highly useful for psychophysical studies on, for example, just-noticeable differences and gating. Methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the …
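As an illustration only, the following minimal Python sketch shows one way audio recorded separately from video could be aligned via a shared synchronization pulse; the threshold-based click detection, the synthetic data, and the function names (find_pulse_onset, audio_trim_for_alignment) are assumptions made for this example, not the procedure described in the paper.

# Minimal sketch (assumptions, not the authors' pipeline): align separately
# recorded audio and video using a shared synchronization pulse. The pulse
# onset is located in the audio track by amplitude thresholding; the video
# frame containing the sync flash is assumed to be known already.

import numpy as np

def find_pulse_onset(audio, sample_rate, threshold=0.5):
    """Return the time (s) of the first sample whose absolute amplitude
    exceeds `threshold` -- a stand-in for detecting the sync click."""
    idx = np.argmax(np.abs(audio) > threshold)
    return idx / sample_rate

def audio_trim_for_alignment(audio, sample_rate, flash_frame, fps):
    """Compute how many audio samples to cut (positive) or pad (negative)
    so the audio sync click coincides with the video sync-flash frame."""
    click_time = find_pulse_onset(audio, sample_rate)
    flash_time = flash_frame / fps          # time of the flash in the video
    return int(round((click_time - flash_time) * sample_rate))

if __name__ == "__main__":
    sr, fps = 48_000, 120                   # e.g. high-frame-rate video at 120 fps
    audio = np.zeros(sr)                    # 1 s of silence...
    audio[12_000] = 1.0                     # ...with a sync click at 0.25 s
    print(audio_trim_for_alignment(audio, sr, flash_frame=24, fps=fps))
    # flash at 24 / 120 fps = 0.20 s -> trim 0.05 s (2400 samples) from the audio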
Discrimination of vowel duration was explored with regard to discrimination threshold, error bias, and effects of modality and consonant context. A total of 122 normal-hearing participants were presented with disyllabic-like items such as /lal-lal/ or /mam-mam/ in which the lengths of the vowels were systematically varied, and were asked to judge …
This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation points (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the …
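To make the gating logic concrete, here is a minimal Python sketch of how an IP could be scored from gated responses; the specific rule (duration of the first gate from which all longer gates are identified correctly) and the name isolation_point are assumptions for illustration, not the study's exact scoring procedure.

# Minimal sketch (an assumed scoring rule, not the authors' exact procedure):
# in a gating task a word is presented in increasingly long segments ("gates");
# here the isolation point (IP) is the duration of the first gate at which the
# response is correct and remains correct for all longer gates.

from typing import Optional, Sequence

def isolation_point(gate_durations_ms: Sequence[float],
                    responses_correct: Sequence[bool]) -> Optional[float]:
    """Return the IP in ms, or None if identification never stabilises."""
    for i, correct in enumerate(responses_correct):
        if correct and all(responses_correct[i:]):
            return gate_durations_ms[i]
    return None

# Example: gates every 40 ms; the listener settles on the correct word at 120 ms.
gates = [40, 80, 120, 160, 200, 240]
correct = [False, False, True, True, True, True]
print(isolation_point(gates, correct))   # -> 120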
A natural and a synthetic face were compared with regard to speech-reading performance, with a visual and an audio-visual condition, and with three levels of contextual cueing in an experiment with 90 normal-hearing subjects. Auditory presentation (speech in noise) served as a control condition. The results showed main effects for type of face, presentation …
In two experiments on visual speech-reading, with a total of 132 normal-hearing participants, the effects of displayed emotion and task specificity on speech-reading performance, on attitude toward the task and on person impression were explored, as well as associations between speech-reading performance, attitude, and person impression. The results show …
In the present study, the role of facial expressions in visual speechreading (lipreading) was examined. Speechreading was assessed by three different tests: sentence-based speechreading, word-decoding, and word discrimination. Twenty-seven individuals participated as subjects in the study. The results revealed that no general improvement as a function of …