Amaryllis Raouzaiou

Affective and human-centered computing are two areas related to HCI that have attracted attention in recent years. One reason for this is the plethora of devices able to record and process multimodal input from the users and adapt their functionality to their preferences or individual habits, thus enhancing…
Facial expression and hand gesture analysis plays a fundamental part in emotionally rich man-machine interaction (MMI) systems, since it employs universally accepted non-verbal cues to estimate the users' emotional state. In this paper, we present a systematic approach to extracting expression-related features from image sequences and inferring an emotional…
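As an illustration only (not the paper's actual method), the idea of mapping expression-related features to an emotional state can be sketched as a few landmark-distance features matched against stored emotion prototypes; the landmark names and prototype table below are hypothetical:

```python
# Hypothetical sketch: distance-based expression features from tracked facial
# landmarks, classified by nearest stored emotion prototype.
import numpy as np

def expression_features(landmarks):
    """landmarks: dict of named 2-D points from a face tracker (assumed input)."""
    return np.array([
        landmarks["inner_brow_l"][1] - landmarks["eye_top_l"][1],  # brow raise
        landmarks["mouth_right"][0] - landmarks["mouth_left"][0],  # mouth width
        landmarks["mouth_bottom"][1] - landmarks["mouth_top"][1],  # mouth opening
    ])

def classify(features, prototypes):
    """prototypes: {emotion: feature vector} built from annotated examples."""
    return min(prototypes, key=lambda e: np.linalg.norm(features - prototypes[e]))
```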
Affective and human-centered computing have attracted a lot of attention during the past years, mainly due to the abundance of devices and environments able to exploit multimodal input from the users and adapt their functionality to their preferences or individual habits. In the quest to receive feedback from the users in an unobtrusive manner,…
In this paper, we present a multimodal approach for the recognition of eight emotions that integrates information from facial expressions, body movement and gestures, and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First, individual classifiers were trained for each modality. …
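The per-modality training and late-fusion step can be illustrated with a rough sketch (not the paper's code); the feature arrays, modality names, and emotion labels below are placeholders:

```python
# Rough sketch of late fusion with per-modality Bayesian classifiers over
# hypothetical pre-extracted feature vectors (face, gesture, speech).
import numpy as np
from sklearn.naive_bayes import GaussianNB

def train_modality_classifiers(features_by_modality, labels):
    """Train one Gaussian Naive Bayes classifier per modality; labels are emotion names."""
    return {m: GaussianNB().fit(X, labels) for m, X in features_by_modality.items()}

def fuse(classifiers, sample_by_modality):
    """Multiply the per-modality class posteriors and return the top emotion."""
    classes = next(iter(classifiers.values())).classes_
    combined = np.ones(len(classes))
    for m, clf in classifiers.items():
        combined *= clf.predict_proba(sample_by_modality[m].reshape(1, -1))[0]
    combined /= combined.sum()
    return classes[int(np.argmax(combined))], combined
```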
In the framework of MPEG-4 hybrid coding of natural and synthetic data streams, one can include teleconferencing and telepresence applications, in which a synthetic proxy or a virtual agent is capable of substituting for the actual user. Such agents can interact with each other, analyzing input textual data entered by the user and multisensory data, including…
To create expressive body animation in Virtual Humans, motion-captured sequences are typically used, because the results are more realistic and credible. This technology is expensive in both time and cost. Its reusability is not evident and depends on the quality of the descriptors one can attach to animations. We propose an ontology in which one can look for animations…
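The idea of retrieving reusable animation clips by their descriptors can be sketched as follows (purely illustrative; the descriptor fields and clip names are assumptions, not the proposed ontology itself):

```python
# Purely illustrative: a flat stand-in for an animation ontology, where each
# clip carries descriptors that a search can filter on.
ANIMATIONS = [
    {"id": "wave_happy_01", "body_part": "arm", "emotion": "joy", "duration_s": 2.1},
    {"id": "slump_sad_02", "body_part": "torso", "emotion": "sadness", "duration_s": 3.4},
]

def find_animations(**criteria):
    """Return clips whose descriptors match every requested criterion."""
    return [a for a in ANIMATIONS
            if all(a.get(k) == v for k, v in criteria.items())]

# Example: find_animations(emotion="joy", body_part="arm")
```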
Extracting and validating emotional cues through analysis of users' facial expressions is of high importance for improving the level of interaction in man-machine communication systems. Extraction of appropriate facial features and consequent recognition of the user's emotional state that can be robust to facial expression variations among different users…
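One generic way to gain robustness to variation among users (a sketch of the general idea, not necessarily the approach taken here) is to express each feature relative to the same user's neutral expression:

```python
# Generic sketch: per-user normalization of facial features against a
# neutral-expression frame, to reduce inter-subject geometric differences.
import numpy as np

def normalize_to_neutral(feature_seq, neutral_frame):
    """feature_seq: (T, D) features per frame; neutral_frame: (D,) features
    measured on the same user's neutral face (assumed available)."""
    neutral = np.asarray(neutral_frame, dtype=float)
    return (np.asarray(feature_seq, dtype=float) - neutral) / (np.abs(neutral) + 1e-6)
```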
In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces,…
Affective computing has been an extremely active research and development area for some years now, with some of the early results already starting to be integrated into human-computer interaction systems. Driven mainly by research initiatives in Europe, the USA and Japan and accelerated by the abundance of processing power and low-cost, unobtrusive sensors like…
This work is about multimodal, expressive synthesis for virtual agents, based on the analysis of actions performed by human users. As input, we consider the image sequence of the recorded human behavior. Computer vision and image processing techniques are incorporated to detect the cues needed for expressivity feature extraction. The multimodality of…
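A minimal sketch of how expressivity cues might be derived from a tracked hand trajectory (the input array and the exact feature definitions are assumptions for illustration, not the paper's formulas):

```python
# Illustrative expressivity features from a tracked hand-centroid trajectory.
import numpy as np

def expressivity_features(hand_traj, fps=25):
    """hand_traj: (T, 2) array of hand positions per frame (assumed input)."""
    traj = np.asarray(hand_traj, dtype=float)
    vel = np.diff(traj, axis=0) * fps            # frame-to-frame velocity
    acc = np.diff(vel, axis=0) * fps             # frame-to-frame acceleration
    speed = np.linalg.norm(vel, axis=1)
    acc_mag = np.linalg.norm(acc, axis=1)
    return {
        "overall_activation": speed.sum() / fps,        # total amount of motion
        "spatial_extent": np.ptp(traj, axis=0).prod(),  # area swept by the hand
        "temporal_extent": len(traj) / fps,             # gesture duration in seconds
        "power": acc_mag.mean(),                        # average acceleration
        "fluidity": 1.0 / (1.0 + acc_mag.std()),        # smoothness proxy
    }
```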