Face to Virtual Face
- N. M. Thalmann, P. Kalra, M. Escher
- Proceedings of the IEEE
This paper describes the processes, and their interactions, needed to generate a virtual environment inhabited by clones representing real people and by virtual autonomous actors. Such an environment requires communication between a cloned face (or avatar) and a virtual face, which in turn requires cloning and mimicking: reconstructing the 3D model and the movements of the real face. The autonomous virtual face is able to respond and interact through facial expressions and speech. Several main processes are necessary to reach this goal. Processing the input data is crucial, since it is the user's only means of interacting with the virtual world and the autonomous actor. We have implemented processing for the two basic media of a dialog, speech and facial expressions. We also discuss the implementation of the emotionally autonomous actor, and finally describe the real-time facial animation system. The whole system is based on the MPEG-4 definitions of FAPs, visemes, and expressions.
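As a rough illustration of the MPEG-4 facial animation parameters the abstract refers to, the sketch below models a single animation frame as a set of FAP values. The class name `FAPFrame` and method `set_fap` are hypothetical, not from the paper; the numeric conventions (68 FAPs, with FAP 1 for visemes and FAP 2 for high-level expressions) follow the MPEG-4 facial animation standard.

```python
from dataclasses import dataclass, field

@dataclass
class FAPFrame:
    """One frame of MPEG-4 facial animation (illustrative sketch).

    MPEG-4 defines 68 FAPs: FAP 1 encodes the viseme, FAP 2 a
    high-level expression, and FAPs 3-68 are low-level feature-point
    displacements measured in face-specific units (FAPUs).
    """
    values: dict = field(default_factory=dict)  # FAP id -> amplitude

    def set_fap(self, fap_id: int, value: float) -> None:
        if not 1 <= fap_id <= 68:
            raise ValueError("MPEG-4 defines FAPs 1..68")
        self.values[fap_id] = value

# Example: drive the jaw open on one frame (FAP 3, 'open_jaw').
frame = FAPFrame()
frame.set_fap(3, 512.0)
```

A stream of such frames, decoded at the terminal and applied to a calibrated 3D face model, is the kind of low-bitrate channel that makes the cloned-face communication described above practical.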