Edward T. Auer

Perceptual identification of spoken words in noise is less accurate when the target words are preceded by spoken phonetically related primes (Goldinger, Luce, & Pisoni, 1989). The present investigation replicated and extended this finding. Subjects shadowed target words presented in the clear that were preceded by phonetically related or unrelated primes.
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and …
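Neighborhood density, as defined above, is conventionally operationalized as the count of lexicon entries that differ from a target word by exactly one phoneme substitution, addition, or deletion. A minimal sketch of that count (the toy lexicon and the phoneme-tuple representation are illustrative assumptions, not materials from the studies summarized here):

```python
def one_phoneme_apart(a, b):
    """True if transcriptions a and b differ by exactly one
    phoneme substitution, addition, or deletion."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        # Deleting any single phoneme from the longer form must yield the shorter.
        for i in range(len(longer)):
            if longer[:i] + longer[i + 1:] == shorter:
                return True
    return False

def neighborhood_density(target, lexicon):
    """Number of lexicon entries exactly one phoneme away from target."""
    return sum(one_phoneme_apart(target, w) for w in lexicon if w != target)

# Toy lexicon: phonemic transcriptions as tuples of phoneme symbols.
lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ao", "t"),
           ("k", "ae", "p"), ("k", "ae"), ("d", "ao", "g")]
print(neighborhood_density(("k", "ae", "t"), lexicon))  # → 4
```

Representing transcriptions as tuples of phoneme symbols, rather than orthographic strings, keeps multi-character phonemes (e.g. "ae") from being miscounted.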
Speech perception is conventionally thought to be an auditory function, but humans often use their eyes to perceive speech. We investigated whether visual speech perception depends on processing by the primary auditory cortex in hearing adults. In a functional magnetic resonance imaging experiment, a pulse-tone was presented, contrasted with gradient noise.
Neuroplastic changes in auditory cortex as a result of lifelong perceptual experience were investigated. Adults with early-onset deafness and long-term hearing aid experience were hypothesized to have undergone auditory cortex plasticity due to somatosensory stimulation. Vibrations were presented on the hand of deaf and normal-hearing participants during …
Word recognition is generally assumed to be achieved via competition in the mental lexicon between phonetically similar word forms. However, this process has so far been examined only in the context of auditory phonetic similarity. In the present study, we investigated whether the influence of word-form similarity on word recognition holds in the visual …
PURPOSE: L. E. Bernstein, M. E. Demorest, and P. E. Tucker (2000) demonstrated enhanced speechreading accuracy in participants with early-onset hearing loss compared with hearing participants. Here, the authors test the generalization of Bernstein et al.'s (2000) result by testing 2 new large samples of participants. The authors also investigated correlates …
The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatiotemporal organization for audiovisual speech processing circuits, event-related …
This study examines relationships between external face movements, tongue movements, and speech acoustics for consonant-vowel (CV) syllables and sentences spoken by two male and two female talkers with different visual intelligibility ratings. The questions addressed are how relationships among measures vary by syllable, and whether talkers who are more …
A lexical modeling methodology was employed to examine how the distribution of phonemic patterns in the lexicon constrains lexical equivalence under conditions of reduced phonetic distinctiveness experienced by speech-readers. The technique involved (1) selection of a phonemically transcribed machine-readable lexical database, (2) definition of …
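To the extent the methodology is described above, it amounts to mapping each phonemic transcription through phoneme equivalence classes (phonemes that are indistinguishable to a speech-reader) and grouping words whose mapped forms collide. A minimal sketch of that grouping step, with a hypothetical phoneme-to-class mapping (the class assignments below are illustrative, not those derived in the study):

```python
from collections import defaultdict

# Hypothetical equivalence classes for visually confusable phonemes;
# real classes would be derived from empirically measured confusions.
PHONEME_CLASS = {"p": "P", "b": "P", "m": "P",   # bilabials look alike
                 "f": "F", "v": "F",             # labiodentals
                 "t": "T", "d": "T", "n": "T"}   # alveolars

def reduce_transcription(phonemes):
    """Map a phonemic transcription onto its equivalence-class sequence."""
    return tuple(PHONEME_CLASS.get(p, p) for p in phonemes)

def equivalence_classes(lexicon):
    """Group words that become indistinguishable after reduction."""
    classes = defaultdict(list)
    for word, phonemes in lexicon.items():
        classes[reduce_transcription(phonemes)].append(word)
    return dict(classes)

lexicon = {"pat": ("p", "ae", "t"), "bat": ("b", "ae", "t"),
           "mat": ("m", "ae", "t"), "fat": ("f", "ae", "t")}
print(equivalence_classes(lexicon))
# "pat", "bat", and "mat" collapse into one class; "fat" stays distinct.
```

The sizes of the resulting classes indicate how much lexical ambiguity a given loss of phonetic distinctiveness would create.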