Automatic Realtime User Performance-Driven Avatar Animation

Abstract

In this paper, an approach for automatic user-specific 3D model generation and expression classification is proposed. User performance-driven avatar animation has recently become a focus of research due to the growing number of low-cost acquisition devices with integrated depth map computation. A key challenge is user-specific emotion classification without a complex manual initialisation: correct classification and emotion intensity estimation require knowledge of the expression-specific facial feature displacements, which differ from user to user. As a solution, facial feature tracking on predefined 3D model expression animations is presented here for automatic emotion classification and intensity calculation. Owing to the symmetrical structure of human faces, partial occlusions of the presented face do not hamper expression identification with this approach. Thus, a markerless, automatic and easy-to-use performance-driven avatar animation approach is presented.
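The following is an illustrative sketch, not the authors' implementation, of the two ideas stated in the abstract: estimating expression intensity as the projection of the observed landmark displacement onto a user-specific reference displacement taken from a predefined expression animation, and recovering occluded landmarks by mirroring their visible counterparts across the face's symmetry plane. All function names, the landmark layout, and the coordinate convention (symmetry plane at x = 0) are assumptions made for the example.

```python
import numpy as np

def mirror_occluded(landmarks, visible, mirror_pairs):
    """Fill occluded landmarks by mirroring their symmetric counterparts
    across the assumed symmetry plane x = 0 (hypothetical convention)."""
    filled = landmarks.copy()
    flip = np.array([-1.0, 1.0, 1.0])
    for left, right in mirror_pairs:
        if not visible[left] and visible[right]:
            filled[left] = filled[right] * flip
        elif not visible[right] and visible[left]:
            filled[right] = filled[left] * flip
    return filled

def expression_intensity(current, neutral, expression_peak):
    """Intensity as the normalised projection of the observed displacement
    onto the reference displacement of the predefined expression animation."""
    observed = (current - neutral).ravel()
    reference = (expression_peak - neutral).ravel()
    denom = np.dot(reference, reference)
    if denom < 1e-12:
        return 0.0
    return float(np.clip(np.dot(observed, reference) / denom, 0.0, 1.0))

def classify_expression(current, neutral, expression_peaks):
    """Return the expression label with the highest intensity and that intensity."""
    scores = {name: expression_intensity(current, neutral, peak)
              for name, peak in expression_peaks.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

In this reading, the user-specific reference displacements come from the automatically generated 3D model's expression animations, so no manual calibration of per-user feature ranges is needed; the mirroring step is what makes the classification tolerant to partial occlusions.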

DOI: 10.1109/SMC.2013.459


Cite this paper

@inproceedings{Behrens2013AutomaticRU,
  title     = {Automatic Realtime User Performance-Driven Avatar Animation},
  author    = {Stephanie Behrens and Ayoub Al-Hamadi and Eicke Redweik and Robert Niese},
  booktitle = {SMC},
  year      = {2013}
}