Expressive speech-driven facial animation

@article{Cao2005ExpressiveSF,
  title={Expressive speech-driven facial animation},
  author={Yong Cao and Wen C. Tien and Petros Faloutsos and Fr{\'e}d{\'e}ric H. Pighin},
  journal={ACM Trans. Graph.},
  year={2005},
  volume={24},
  pages={1283--1302}
}
Speech-driven facial motion synthesis is a well-explored research topic. However, little has been done to model expressive visual behavior during speech. We address this issue using a machine learning approach that relies on a database of speech-related high-fidelity facial motions. From this training set, we derive a generative model of expressive facial motion that incorporates emotion control, while maintaining accurate lip-synching. The emotional content of the input speech can be manually…
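
The abstract only hints at the pipeline, so the sketch below illustrates one plausible ingredient of such a system: a Gaussian radial-basis-function regressor trained on a motion database that maps speech features to facial motion parameters, with a separate emotional model blended in under a user-controlled weight. Everything here (function names, feature dimensions, and the two-model blend) is an assumption for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: map speech features to facial motion parameters with
# Gaussian RBF regression, then blend neutral and emotional predictions with a
# user-controlled weight. Names and dimensions are illustrative only.
import numpy as np


def fit_rbf(features, motions, sigma=1.0, reg=1e-6):
    """Solve (Phi + reg*I) @ W = motions for the RBF weights W.

    features: (N, d) speech feature vectors from the training database
    motions:  (N, m) corresponding facial motion parameters
    """
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    weights = np.linalg.solve(phi + reg * np.eye(len(features)), motions)
    return features, weights, sigma


def predict(model, query):
    """Evaluate the fitted RBF model at a single speech feature vector."""
    centers, weights, sigma = model
    d2 = np.sum((centers - query) ** 2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    return phi @ weights


def expressive_frame(neutral_model, emotion_model, speech_feat, emotion_weight):
    """Blend the two predictions; 0.0 = fully neutral, 1.0 = fully emotional."""
    neutral = predict(neutral_model, speech_feat)
    emotive = predict(emotion_model, speech_feat)
    return (1.0 - emotion_weight) * neutral + emotion_weight * emotive


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 13))           # e.g. MFCC-like speech features
    Y_neutral = rng.normal(size=(200, 40))   # e.g. facial blendshape weights
    Y_happy = Y_neutral + 0.3 * rng.normal(size=(200, 40))

    neutral = fit_rbf(X, Y_neutral)
    happy = fit_rbf(X, Y_happy)
    frame = expressive_frame(neutral, happy, X[0], emotion_weight=0.7)
    print(frame.shape)  # (40,)
```

In this toy setup the emotion weight acts as the kind of manual emotion control the abstract describes, while the speech-to-motion mapping itself is responsible for lip-synching accuracy.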