A New Model of Speech Motor Control Based on Task Dynamics and State Feedback

Vikram Ramanarayanan, Benjamin Parrell, Louis M. Goldstein, Srikantan S. Nagarajan, John F. Houde
We present a new model of speech motor control (TD-SFC) based on articulatory goals that explicitly incorporates acoustic sensory feedback using a framework for state-based control. We do this by combining two existing, complementary models of speech motor control: the Task Dynamics model [1] and the State Feedback Control model of speech [2]. We demonstrate the effectiveness of the combined model by simulating a simple formant perturbation study, and show that the model qualitatively…
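As a rough illustration of the state-based control framework the abstract refers to, the following is a minimal observer-plus-state-feedback loop: control is computed from an internal state estimate that is corrected by sensory feedback. All matrices and gains here are hand-picked for illustration and are not the TD-SFC model's actual dynamics.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # plant dynamics (discrete time)
B = np.array([[0.0], [0.1]])             # control input matrix
C = np.array([[1.0, 0.0]])               # sensory map (position observed only)
K = np.array([[2.0, 1.0]])               # feedback gain (hand-tuned)
L = np.array([[0.5], [0.1]])             # observer correction gain

x = np.array([[1.0], [0.0]])             # true state (e.g., articulator pos/vel)
xhat = np.zeros((2, 1))                  # internal state estimate

for _ in range(100):
    u = -K @ xhat                        # control from the *estimate*, not the true state
    y = C @ x                            # sensory feedback (noise/delay omitted)
    xhat = A @ xhat + B @ u + L @ (y - C @ xhat)  # predict, then correct
    x = A @ x + B @ u                    # plant update

print(float(abs(x[0, 0])) < 0.05)        # True: state driven near the target (zero)
```

By the separation principle, the estimation error decays under `A - LC` while the controlled state decays under `A - BK`, so the loop is stable even though the controller never sees the true state directly.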

The FACTS model of speech motor control: Fusing state estimation and task-based control

The FACTS model qualitatively replicates many characteristics of the human speech system, reproduces previously hypothesized trade-offs between reliance on auditory and somatosensory feedback in speech motor control, and shows for the first time how this relationship may be mediated by acuity in each sensory domain.

Modeling the Role of Sensory Feedback in Speech Motor Control and Learning.

It is shown that both the Directions Into Velocities of Articulators (DIVA) model and the state feedback control / Feedback Aware Control of Tasks (FACTS) models can replicate key behaviors related to sensory feedback in the speech motor system.

Current models of speech motor control: A control-theoretic overview of architectures and properties.

The review builds an understanding of existing models from first principles before moving into a discussion of several models, showing how each is constructed out of the same basic domain-general ideas and components, e.g., generalized feedforward, feedback, and model predictive components.

FACTS: A Hierarchical Task-based Control Model of Speech Incorporating Sensory Feedback

A computational model of speech motor control that integrates vocal tract state prediction with sensory feedback and is able to reproduce several important aspects of human speech behavior such as stable speech behavior in the presence of noisy motor and sensory systems.


Here we consider the application of state feedback control to stabilize an articulatory speech synthesizer during the generation of speech utterances. We first describe the architecture of such an…

Current Speech Motor Control Models: An Overview of Architectures & Properties

An overview of the current state of computational modeling of speech motor control is provided by showing how each model is constructed out of these more basic domain-general ideas and components, allowing for a clear comparison between the proposed models.

Uncontrolled manifolds and short-delay reflexes in speech motor control: a modeling study

It was found that linearized UCMs, especially those computed specifically for each configuration but also those computed across many of the phonetic classes, allow for an effective response to command perturbations. This suggests that similar motor-equivalence strategies can be implemented within each of these classes and that UCMs provide a valid characterization of an equivalence strategy.
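The core idea behind a linearized UCM can be sketched in a few lines: perturbations projected onto the null space of the task Jacobian leave the task variable unchanged. The task map below (one task variable, two articulators) is a hypothetical toy, not the study's actual vocal-tract model.

```python
import numpy as np

# Hypothetical linearized task map: one task variable (e.g., lip aperture)
# jointly controlled by two articulators (e.g., jaw and lower lip).
J = np.array([[1.0, 1.0]])              # task Jacobian (1 task, 2 articulators)
q = np.array([0.3, 0.2])                # articulator configuration
task = J @ q                            # task value before perturbation

# Null-space (UCM) projector: motions in this subspace leave the task intact.
P_ucm = np.eye(2) - np.linalg.pinv(J) @ J

perturb = np.array([0.4, -0.1])         # arbitrary command perturbation
q_comp = q + P_ucm @ perturb            # perturbation projected onto the UCM

print(np.allclose(J @ q_comp, task))    # True: task value preserved
```

Motion within the UCM is "uncontrolled" precisely because it is task-irrelevant, which is why variability can accumulate there without degrading the goal.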

The effect of real-time temporal auditory feedback perturbation on the timing of syllable structure

Perturbations of auditory feedback (AF) have proven very useful for studying the interaction between feedback and feedforward systems in speech production. AF clearly contributes crucially to…

Articulatory variability and speech errors: An overview

An overview of a series of studies exploring the link between articulatory variability and speech errors in repetitive speech is presented and several important findings with respect to the behavior and appearance of errors are summarized.

Bayesian modeling of speech motor planning: variability, multisensory goals and perceptuo-motor interactions

The main goal of this thesis is to address the contextual and intrinsic components of speech variability in an integrative computational framework. It postulates that the main component of the intrinsic variability of speech is not just execution noise, but that it results from a control strategy in which intrinsic variability characterizes the abundance of possible productions of the intended speech item.

Motor Equivalence in Speech Production

The methodology used to experimentally investigate motor-equivalence phenomena in speech production is presented; the implications relate mainly to characterizing the mechanisms underlying interarticulatory coordination and to analyzing speech production goals.

The DIVA model: A neural theory of speech acquisition and production

The latest version of the DIVA model of speech production, which contains a new right-lateralised feedback control map in ventral premotor cortex, will be described, and experimental results that motivated this new model component will be discussed.

Speech Production as State Feedback Control

This work discusses prior efforts to model the role of the CNS in speech motor control and argues that these models have inherent limitations, limitations that are overcome by the SFC model of speech motor control described here.

A Dynamical Approach to Gestural Patterning in Speech Production

In this article, we attempt to reconcile the linguistic hypothesis that speech involves an underlying sequencing of abstract, discrete, context-independent units, with the empirical observation of…
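In the task-dynamic approach, each gesture is modeled as a critically damped point attractor driving a tract variable toward its target. A minimal sketch of one such gesture follows; the stiffness value and time step are hypothetical, chosen only for illustration.

```python
import numpy as np

# One gesture as a critically damped mass-spring system acting on a
# tract variable z (e.g., constriction degree). Parameters are illustrative.
k = 100.0                 # stiffness (sets gesture speed)
b = 2.0 * np.sqrt(k)      # critical damping: fastest approach, no overshoot
target = 1.0              # gestural target
z, zdot, dt = 0.0, 0.0, 0.01

for _ in range(500):
    zddot = -k * (z - target) - b * zdot   # second-order point-attractor dynamics
    zdot += dt * zddot                     # semi-implicit Euler integration
    z += dt * zdot

print(abs(z - target) < 1e-3)   # True: tract variable settles at the target
```

Because the dynamics are autonomous and target-directed, the same gesture recovers its goal from different starting configurations, which is how the model accounts for context-independence of the underlying units.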

What Does Motor Efference Copy Represent? Evidence from Speech Production

Magnetoencephalographic imaging (MEG-I) is used in human speakers to demonstrate that efference-copy prediction does not track movement variability across repetitions of the same motor task; the failure of the motor system to accurately predict less prototypical speech productions suggests that the efferent-driven suppression reflects not a sensory prediction but a sensory goal.

A task-dynamic toolkit for modeling the effects of prosodic structure on articulation

A set of recent developments to the task-dynamic ‘toolkit’ (planning oscillator ensemble and temporal modulation gestures) are described and how they have been used to interpret and simulate experimental data on the interactions of stress and prominence in shaping the “prosodically driven phonetic detail” of speech.

Neural mechanisms underlying auditory feedback control of speech

Adaptive control of vowel formant frequency: evidence from real-time formant manipulation.

In two studies, the first formant of monosyllabic consonant-vowel-consonant words was shifted electronically and fed back to the participant quickly enough that participants perceived the modified speech as their own productions; participants appeared to more actively stabilize their productions from trial to trial.
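A minimal trial-to-trial adaptation loop of the kind such studies probe can be sketched as follows. The learning rate and perturbation size here are hypothetical, not the study's actual paradigm parameters.

```python
# Toy sensorimotor adaptation under a sustained F1 perturbation.
shift = 100.0        # Hz added to the produced F1 before it is fed back
target = 500.0       # intended F1 in Hz
motor = 0.0          # learned compensatory adjustment
lr = 0.2             # adaptation rate (hypothetical)

for trial in range(50):
    produced = target + motor
    heard = produced + shift          # perturbed auditory feedback
    error = heard - target            # auditory error drives learning
    motor -= lr * error               # adjust opposite to the heard error

print(round(motor))   # -100: production shifts to oppose the perturbation
```

The fixed point of this update is `motor = -shift`, i.e., full compensation; real speakers typically compensate only partially, which richer models attribute to competing somatosensory targets.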

Somatosensory function in speech perception

It is shown here that the somatosensory system is also involved in the perception of speech: systematic perceptual variation observed in conjunction with speech-like patterns of skin stretch indicates that somatosensory inputs affect the neural processing of speech sounds.

Adaptive Optimal Feedback Control with Learned Internal Dynamics Models

This chapter combines the ILQG framework with learning of the forward dynamics for simulated arms, which exhibit large redundancies both in kinematics and in actuation, to demonstrate how the approach can compensate for complex dynamic perturbations in an online fashion.
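The "learned internal dynamics model" ingredient can be illustrated with a toy system identification step: fitting a forward model from observed transitions, which an adaptive optimal feedback controller would then plan against. The plant below is a noiseless linear toy; the chapter's ILQG machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])   # hidden plant dynamics
B_true = np.array([[0.0], [0.5]])             # hidden input matrix

# Collect (state, action, next-state) transitions by driving the plant.
X, U, Y = [], [], []
x = rng.standard_normal(2)
for _ in range(200):
    u = rng.standard_normal(1)
    x_next = A_true @ x + B_true @ u
    X.append(x); U.append(u); Y.append(x_next)
    x = x_next

# Fit x_next ≈ [A B] @ [x; u] by least squares over all transitions.
Z = np.hstack([np.array(X), np.array(U)])     # (200, 3) regressor matrix
Theta, *_ = np.linalg.lstsq(Z, np.array(Y), rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]

print(np.allclose(A_hat, A_true, atol=1e-6))  # True: dynamics recovered
```

With noiseless data the fit is exact; with noise or nonlinearity one would instead refit the model online, which is what lets such controllers compensate for dynamic perturbations as they occur.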