The motor theory of speech perception revised
  • Alvin M. Liberman
  • Ignatius G. Mattingly
Lending a helping hand to hearing: another motor theory of speech perception (in Action to Language via the Mirror Neuron System, Cambridge University Press, 2006)
Any comprehensive account of how speech is perceived should encompass audiovisual speech perception; the ability to see as well as hear must be integral to such an account.
Speech perception as non-symbolic pattern recognition
Despite ongoing research, the human ability of speech perception remains a mystery. Current phonetic theory is divided by two points of contention, the first being the relationship from production to signal to perception.
Recognizing speech in a novel accent: the motor theory of speech perception reframed
A novel computational model of how a listener comes to understand someone speaking the listener's native language with a foreign accent; the model serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture.
When Theories of Speech Meet the Real World
Speech has the corollary advantage that it is managed by a module biologically adapted to circumvent limitations of tongue and ear by automatically coarticulating the constituent gestures and coping with the complex acoustic consequences.
Mirror Neurons, the Motor System and Language: From the Motor Theory to Embodied Cognition and Beyond
Evidence is cited that the motor theory fails to accurately describe perceptual processes in speech, supporting the conclusion that speech representations are fundamentally sensory in nature.
Language perception activates the hand motor cortex: implications for motor theories of speech perception
The hand motor system is found to be activated by linguistic tasks, most notably pure linguistic perception, but not by auditory or visuospatial processing, which supports the theory that language may have evolved within a general and bilateral action‐perception network.
Perceptual-Motor Processing in Speech
Integration of featural information in speech perception.
A model for the identification of speech sounds is proposed that assumes the acoustic cues are perceived independently; it provides a good description of the data, including boundary changes, while maintaining complete noninteraction at the feature-evaluation stage of processing.
Segmentation of coarticulated speech in perception
  • C. Fowler
  • Physics
    Perception & psychophysics
  • 1984
The research investigates how listeners segment the acoustic speech signal into phonetic segments and explores the implications that this segmentation strategy may have for their perception of the signal.
Perceptual equivalence of acoustic cues in speech and nonspeech perception
Trading relations between speech cues, and the perceptual equivalence that underlies them, thus appear to derive specifically from perception of phonetic information.
Some differences between phonetic and auditory modes of perception
The Speech Code and the Physiology of Language
To the physiologist who would study language in terms of the interests represented at this symposium, the most obvious linguistic processes include the selection of words to convey meaning.
The effect of discrimination training on speech perception: Noncategorical perception
Three subjects were given extensive practice in discriminating syllables that differed in voice onset time, yielding two major findings for these subjects.
Duplex perception: Confirmation of fusion
Both experiments support the hypothesis that the speech percept in the duplex situation results from dichotic fusion at a relatively early stage in processing.
Discrimination of speech by nonhuman animals: Basic auditory sensitivities conducive to the perception of speech‐sound categories
Chinchillas (Chinchilla laniger) were tested in a same–different task to determine the location of greatest sensitivity along a continuum of voice-onset-time (VOT). The procedure used was an up–down staircase.
Dynamic specification of coarticulated vowels.
Dynamic spectral information, contained in initial and final transitions taken together, was sufficient for accurate identification of vowels even when vowel nuclei were attenuated to silence and appeared to be efficacious even when durational parameters specifying intrinsic vowel length were eliminated.