This paper proposes the Face Multimedia Object (FMO) and iFACE, a framework for implementing the face object within multimedia systems. The FMO encapsulates all the functionality and data required for face animation; iFACE implements the FMO and exposes the interfaces that a variety of applications need in order to access its services.
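As a rough illustration of what such an encapsulated media object might look like, here is a minimal Python sketch; the class and method names (FaceMultimediaObject, speak, express, render_next_frame) are hypothetical stand-ins, not the paper's actual interfaces.

```python
from dataclasses import dataclass, field


@dataclass
class FaceMultimediaObject:
    """Hypothetical face media object: one handle bundling the
    speech and expression services an application needs."""
    character: str
    _timeline: list = field(default_factory=list)

    def speak(self, text: str) -> None:
        # Queue a lip-synchronised utterance for the face.
        self._timeline.append(("speech", text))

    def express(self, emotion: str, intensity: float) -> None:
        # Queue a facial expression with a clamped 0..1 intensity.
        self._timeline.append(
            ("expression", emotion, min(max(intensity, 0.0), 1.0)))

    def render_next_frame(self) -> dict:
        # A real implementation would return an image; here we just
        # report the animation events pending for this frame.
        events, self._timeline = self._timeline, []
        return {"character": self.character, "events": events}


face = FaceMultimediaObject("agent-1")
face.express("happy", 0.7)
face.speak("Hello!")
print(face.render_next_frame())
```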
What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions with limited emotional states or speech content, the present paper explores this question by relating the perception of personality to a wide variety of facial actions (e.g., head …)
Modern multimedia presentations are aggregations of objects of different types, such as video and audio. Due to the importance of facial actions and expressions in verbal and non-verbal communication, the authors have proposed the "face multimedia object" as a new higher-level media type that encapsulates all the requirements of facial animation for a …
This paper presents the results of a user study of a vision-based, depth-sensitive input system for performing typical desktop tasks through arm gestures. We developed a vision-based HCI prototype for use in a comprehensive usability study. Using the Kinect 3D camera and the OpenNI software library, we implemented the system with high stability and …
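As a rough sketch of the kind of mapping such a gesture-input system performs, the snippet below converts a tracked 3D hand position into a pointer event; get_hand_position is a hypothetical stub, not the OpenNI API, and all ranges and thresholds are illustrative.

```python
from typing import Optional, Tuple


def get_hand_position() -> Optional[Tuple[float, float, float]]:
    """Stub for the depth camera's tracked hand (x, y, z) in metres.
    A real system would read this from the skeleton tracker."""
    return (0.12, -0.05, 0.60)


def hand_to_pointer(screen_w: int, screen_h: int) -> Optional[dict]:
    """Map the hand to screen coordinates; treat a push toward the
    camera (small z) as a click."""
    pos = get_hand_position()
    if pos is None:
        return None
    x, y, z = pos
    px = int((x + 0.3) / 0.6 * screen_w)   # map x in [-0.3, 0.3] m
    py = int((0.3 - y) / 0.6 * screen_h)   # map y, inverted for screen
    return {"x": max(0, min(px, screen_w - 1)),
            "y": max(0, min(py, screen_h - 1)),
            "click": z < 0.45}


print(hand_to_pointer(1920, 1080))
```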
This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of "game-like" health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, …
This paper addresses the issue of affective communication remapping, i.e., the translation of affective content from one form of communication to another. We propose a method that extracts affective data from a piece of music and then uses it to animate a face. The method is based on studies of the emotional aspects of music and on our behavioural head model for face …
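To make the music-to-face pipeline concrete, here is a minimal sketch assuming a toy two-feature mapping: tempo and major/minor mode yield an arousal/valence pair, which then drives coarse head parameters. The features, thresholds, and parameter names are illustrative, not the paper's model.

```python
def music_to_affect(tempo_bpm: float, is_major: bool) -> dict:
    """Toy mapping from two musical features to valence/arousal.

    Faster tempo -> higher arousal; major mode -> positive valence.
    The real method analyses richer emotional aspects of music."""
    arousal = min(max((tempo_bpm - 60.0) / 120.0, 0.0), 1.0)
    valence = 0.6 if is_major else -0.6
    return {"valence": valence, "arousal": arousal}


def affect_to_head_params(affect: dict) -> dict:
    """Translate the affect pair into coarse head-model parameters."""
    return {
        "smile": max(affect["valence"], 0.0),
        "brow_raise": affect["arousal"] * 0.5,
        "head_nod_rate": 0.2 + 0.8 * affect["arousal"],
    }


print(affect_to_head_params(music_to_affect(tempo_bpm=140, is_major=True)))
```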
This paper describes a behavioural model for affective social agents based on three independent but interacting parameter spaces: Knowledge, Personality, and Mood. These spaces control a lower-level Geometry space that provides parameters at the facial feature level. Personality and Mood use findings in behavioural psychology to relate the perception of …
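A minimal sketch of that layering, with the three high-level spaces blended into facial-feature values; the specific keys and weights below are invented for illustration, not taken from the paper.

```python
def geometry_params(knowledge: dict, personality: dict, mood: dict) -> dict:
    """Combine the three high-level spaces into facial-feature values.

    Knowledge supplies what to do (e.g. an intended expression),
    Personality biases how it is done, and Mood modulates intensity."""
    intended_smile = knowledge.get("intended_smile", 0.0)
    expressiveness = personality.get("extraversion", 0.5)   # 0..1
    pleasure = mood.get("pleasure", 0.0)                    # -1..1

    smile = intended_smile * expressiveness + 0.3 * max(pleasure, 0.0)
    brow = 0.4 * mood.get("arousal", 0.0) * expressiveness
    return {"smile": min(smile, 1.0), "brow_raise": min(brow, 1.0)}


print(geometry_params(
    knowledge={"intended_smile": 0.8},
    personality={"extraversion": 0.9},
    mood={"pleasure": 0.5, "arousal": 0.4},
))
```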
Visual presentation of a talking person requires the generation of image frames showing the speaker in various views while pronouncing various phonemes. Existing approaches mostly use either a complex 3D geometric model to reconstruct the desired image, or a set of 2D images per viewpoint to select from. We propose a new system which utilizes …
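As a rough illustration of the image-based alternative the abstract mentions, the sketch below selects a stored frame by nearest viewpoint for a requested viseme; the key-frame table and file names are hypothetical.

```python
# Hypothetical key-image table: (viewpoint degrees, viseme) -> frame file.
KEY_FRAMES = {
    (0, "AA"): "front_aa.png", (0, "M"): "front_m.png",
    (30, "AA"): "side30_aa.png", (30, "M"): "side30_m.png",
}


def select_frame(view_deg: float, viseme: str) -> str:
    """Pick the stored frame whose viewpoint is nearest the request."""
    candidates = [v for (v, ph) in KEY_FRAMES if ph == viseme]
    if not candidates:
        raise KeyError(f"no key frames for viseme {viseme!r}")
    nearest = min(candidates, key=lambda v: abs(v - view_deg))
    return KEY_FRAMES[(nearest, viseme)]


print(select_frame(22.0, "AA"))  # -> side30_aa.png
```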