Phoneme-level articulatory animation in pronunciation training

Abstract

Speech visualization has been extended with animated talking heads for computer-assisted pronunciation training. In this paper, we present a data-driven 3D talking head system that produces articulatory animations with synthesized articulator dynamics at the phoneme level. A database of three-dimensional articulatory movements, recorded with an AG500 electromagnetic articulograph (EMA), is used to explore the distinctions among sound productions. Visual synthesis methods are then investigated, including a phoneme-based articulatory model with a modified blending method. A widely used HMM-based synthesis is also performed, with a Maximum Likelihood Parameter Generation (MLPG) algorithm for smoothing. The 3D articulators are then controlled by the synthesized articulatory movements to illustrate both internal and external motions. Experimental results show the performance of the visual synthesis methods in terms of root mean square error. A perception test is then presented to evaluate the 3D animations: word identification accuracy is 91.6% over 286 tests, and the average realism score is 3.5 (1 = bad, 5 = excellent).
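The root-mean-square-error evaluation mentioned above can be sketched as a comparison between synthesized and EMA-measured articulator trajectories. This is an illustrative sketch only, not the paper's actual evaluation code; the function name and the toy trajectories are hypothetical:

```python
import math

def rmse(predicted, measured):
    """RMSE between two equal-length trajectories of 3D articulator
    positions, each given as a list of (x, y, z) tuples (e.g. in mm)."""
    assert len(predicted) == len(measured) and predicted
    total = 0.0
    for p, m in zip(predicted, measured):
        # accumulate squared error over all three coordinates
        total += sum((a - b) ** 2 for a, b in zip(p, m))
    # average over every coordinate of every frame, then take the root
    return math.sqrt(total / (3 * len(predicted)))

# toy example: synthesized vs. EMA-measured tongue-tip trajectory
synth = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
ema   = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(rmse(synth, ema))  # sqrt(1/6) ≈ 0.408
```

In practice such an error would be computed per EMA sensor coil (e.g. tongue tip, tongue body, lips) and averaged over the test utterances.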

DOI: 10.1016/j.specom.2012.02.003


Cite this paper

@article{Wang2012PhonemelevelAA,
  title   = {Phoneme-level articulatory animation in pronunciation training},
  author  = {Lan Wang and Hui Chen and Sheng Li and Helen M. Meng},
  journal = {Speech Communication},
  year    = {2012},
  volume  = {54},
  pages   = {845-856}
}