Automated gesturing for virtual characters: speech-driven and text-driven approaches


We present two methods for automatic facial gesturing of graphically embodied animated agents. In the first, a conversational agent is driven by speech in an automatic lip-sync process: lip movements are determined by analyzing the speech signal. The second method provides a virtual speaker capable of reading plain English text and rendering it in a…
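The paper's lip-sync pipeline itself is not shown here, but a common baseline for driving mouth movement from a speech signal is to map short-time frame energy to a mouth-open parameter. The sketch below is a hypothetical illustration of that idea (the function name, frame size, and normalization constant are assumptions, not the authors' method):

```python
import math

def mouth_openness(samples, frame_size=160, max_rms=0.5):
    """Map per-frame RMS energy of a speech signal to a 0..1 mouth-open value.

    samples    -- mono audio samples in [-1.0, 1.0]
    frame_size -- samples per analysis frame (160 = 10 ms at 16 kHz)
    max_rms    -- RMS level treated as a fully open mouth (assumed constant)
    """
    values = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / frame_size)
        values.append(min(rms / max_rms, 1.0))
    return values

# One loud frame followed by one silent frame
signal = [0.5] * 160 + [0.0] * 160
print(mouth_openness(signal))  # [1.0, 0.0]
```

Real systems refine this with phoneme or viseme classification rather than raw energy, but energy tracking captures the basic speech-to-animation mapping the abstract describes.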
DOI: 10.4304/jmm.1.1.62-68


9 Figures and Tables