This paper proposes the Face Multimedia Object (FMO) and iFACE, a framework for implementing the face object within multimedia systems. FMO encapsulates all the functionality and data required for face animation. iFACE implements FMO and provides the interfaces that a variety of applications need in order to access FMO services.
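To make the encapsulation idea concrete, here is a minimal sketch of what such a face object's interface might look like; the class name FaceMultimediaObject and the speak/express methods are illustrative assumptions, not the paper's actual iFACE API.

    # Illustrative sketch of a face-object encapsulation in the spirit of FMO;
    # all class and method names here are hypothetical, not the paper's API.
    from dataclasses import dataclass, field


    @dataclass
    class FaceMultimediaObject:
        """Bundles the data and behaviors a face-animation client would need."""
        geometry: dict = field(default_factory=dict)   # e.g., feature points
        textures: dict = field(default_factory=dict)   # per-view images

        def speak(self, phonemes):
            """Map a phoneme sequence to mouth shapes (visemes)."""
            return [f"viseme:{p}" for p in phonemes]

        def express(self, emotion, intensity=1.0):
            """Apply a named expression at an intensity clamped to [0, 1]."""
            return {"expression": emotion,
                    "intensity": max(0.0, min(1.0, intensity))}


    # A client application talks only to this interface, not to the
    # underlying animation machinery.
    face = FaceMultimediaObject()
    frames = face.speak(["h", "eh", "l", "ow"])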
What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions with limited emotional states or speech content, the present paper explores this question by relating the perception of personality to a wide variety of facial actions (e.g., head …)
We propose a method to extract emotional data from a piece of music and then use that data, via a remapping algorithm, to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspects of music and on our parametric behavioral head model for face animation. We address the issue of affective communication …
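As a rough illustration of such a remapping, the sketch below maps an assumed (valence, arousal) emotion estimate extracted from music onto simple expression weights; the representation and parameter names are assumptions for illustration, not the paper's model.

    # Hypothetical remapping from a music emotion estimate to face-animation
    # parameters; the (valence, arousal) representation and the parameter
    # names below are assumptions, not the paper's behavioral head model.
    def remap_emotion_to_face(valence, arousal):
        """Convert a (valence, arousal) pair in [-1, 1]^2 into simple
        expression weights for a parametric head model."""
        happiness = max(0.0, valence) * (0.5 + 0.5 * arousal)
        sadness = max(0.0, -valence) * (1.0 - max(0.0, arousal))
        surprise = max(0.0, arousal) * (1.0 - abs(valence))
        return {"happiness": happiness, "sadness": sadness, "surprise": surprise}


    # Example: a bright, energetic passage maps mostly to happiness.
    print(remap_emotion_to_face(valence=0.8, arousal=0.6))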
This paper presents the results of a user study on a vision-based, depth-sensitive input system for performing typical desktop tasks through arm gestures. We developed a vision-based HCI prototype for a comprehensive usability study. Using the Kinect 3D camera and the OpenNI software library, we implemented our system with high stability and …
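The kind of depth-sensitive decision such a system has to make can be sketched as follows; this assumes depth frames arrive as 2D arrays of millimetre values (as a Kinect pipeline could provide) and uses a made-up push-gesture threshold rather than any actual OpenNI call.

    # Sketch of depth-sensitive gesture logic of the kind such a system needs;
    # the frame format and the push threshold are assumptions for illustration.
    import numpy as np

    PUSH_THRESHOLD_MM = 150  # assumed depth change that counts as a "click"


    def detect_push(prev_depth: np.ndarray, cur_depth: np.ndarray,
                    hand_region: tuple) -> bool:
        """Return True if the hand region moved toward the camera by more
        than the threshold between two consecutive depth frames."""
        y0, y1, x0, x1 = hand_region
        prev_mean = prev_depth[y0:y1, x0:x1].mean()
        cur_mean = cur_depth[y0:y1, x0:x1].mean()
        return (prev_mean - cur_mean) > PUSH_THRESHOLD_MM


    # Example with synthetic frames: the hand patch jumps 200 mm closer.
    prev = np.full((480, 640), 2000.0)
    cur = prev.copy()
    cur[200:280, 300:380] -= 200.0
    print(detect_push(prev, cur, (200, 280, 300, 380)))  # True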
Modern multimedia presentations are aggregations of objects of different types, such as video and audio. Due to the importance of facial actions and expressions in verbal and non-verbal communication, the authors have proposed the "face multimedia object" as a new higher-level media type that encapsulates all the requirements of facial animation for a …
This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of "game-like" health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native-language learning, …
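A minimal sketch of blending two basic expressions into a mixed one is shown below; representing each expression as a vector of facial-region activations with normalized weights is an assumption for illustration, not the paper's perceptually validated scheme.

    # Hypothetical expression tables and blending rule, for illustration only.
    BASIC = {
        "happiness": {"lip_corner_pull": 1.0, "cheek_raise": 0.8},
        "surprise": {"brow_raise": 1.0, "jaw_drop": 0.7},
    }


    def mix(emotions):
        """Blend a {emotion: weight} dict into one activation dict,
        normalizing the weights so they sum to 1."""
        total = sum(emotions.values()) or 1.0
        blended = {}
        for name, w in emotions.items():
            for region, level in BASIC[name].items():
                blended[region] = blended.get(region, 0.0) + level * (w / total)
        return blended


    # Example: a 60/40 happy-surprised blend.
    print(mix({"happiness": 0.6, "surprise": 0.4}))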
Visual presentation of a talking person requires generating image frames that show the speaker from various views while pronouncing various phonemes. Existing approaches mostly use either a complex 3D geometric model to reconstruct a desired image or a set of 2D images for each viewpoint to select from. We propose a new system that utilizes …
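The view-selection step of such a 2D-image-based approach might look like the following sketch; the pose-keyed image bank and the nearest_view helper are hypothetical, not the paper's system.

    # Sketch of the view-selection step a 2D-image-based talking-head system
    # needs: pick the stored image whose head pose is closest to the request.
    def nearest_view(image_bank, yaw, pitch):
        """image_bank maps (yaw, pitch) pose tuples to image frames;
        return the frame with the smallest squared angular distance."""
        def dist(pose):
            return (pose[0] - yaw) ** 2 + (pose[1] - pitch) ** 2
        best_pose = min(image_bank, key=dist)
        return image_bank[best_pose]


    # Example with placeholder frames keyed by coarse poses.
    bank = {(0, 0): "frontal.png", (30, 0): "right30.png", (-30, 0): "left30.png"}
    print(nearest_view(bank, yaw=25, pitch=5))  # -> "right30.png"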