Representing Affective Facial Expressions for Robots and Embodied Conversational Agents by Facial Landmarks

  Caixia Liu, Jaap Ham, Eric O. Postma, Cees J. H. Midden, Bart Joosten, and Martijn Goudbeek. International Journal of Social Robotics.
Affective robots and embodied conversational agents require convincing facial expressions to be socially acceptable. To generate facial expressions virtually, we need to investigate the relationship between technology and human perception of affective and social signals. Facial landmarks, the locations of the crucial parts of a face, are important for the perception of the affective and social signals conveyed by facial expressions. Earlier research did not use that kind of…

Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings

This paper introduces a probabilistic method to synthesize interlocutor-aware facial gestures, represented by highly expressive FLAME parameters, in dyadic conversations, and shows that the model successfully leverages the input from the interlocutor to generate more appropriate behavior.

A Survey on Media Interaction in Social Robotics

A survey of recent works on media interaction in social robotics, which introduces the state-of-the-art social robots and the related concepts, and summarizes the event detection approaches which are crucial for robots to understand the environment and human intentions.

Toward an Expressive Bipedal Robot: Variable Gait Synthesis and Validation in a Planar Model

A framework is presented for stylistic gait generation in a compass-like under-actuated planar biped model, laying groundwork for creating a bipedal humanoid with variable socially competent movement profiles.

Detecting Social Signals with Spatiotemporal Gabor Filters

When people communicate with one another, the message often consists of more than just the spoken words. Through facial expressions, intonation, or body posture, for example, they can

The Face of the Robots.

Facial expression recognition based on Electroencephalogram and facial landmark localization.

A fusion facial expression recognition method based on EEG and facial landmark localization is proposed to improve the accuracy and generalization capability of Electroencephalogram (EEG)-based facial expression recognition.

Visualization of Facial Expression Deformation Applied to the Mechanism Improvement of Face Robot

This paper proposes a unique design approach, which uses reverse engineering techniques of three dimensional measurement and analysis, to visualize some critical facial motion data, including facial skin localized deformations, motion directions of facial features, and displacements of facial skin elements on a human face in different facial expressional states.

Do facial expressions signal specific emotions? Judging emotion from the face in context.

In specified circumstances, situational rather than facial information was predicted to determine the judged emotion in each of the 22 cases examined.

An interactive facial expression generation system

A simple music emotion analysis algorithm is proposed and coupled with the system to further demonstrate the effectiveness of the facial expression generation; it can identify the emotions of a music piece and display the corresponding emotions via the synthesized facial expressions.

Sociable Machines: Expressive Social Ex-change Between Humans and Robots

Abstract: Sociable humanoid robots are natural and intuitive for people to communicate with and to teach. The author presents recent advances in building an autonomous humanoid robot, named

The psychology of facial expression: Frontmatter

Part I. Introduction: 1. What does a facial expression mean? (James A. Russell and Jose-Miguel Fernandez-Dols); 2. Methods for the study of facial behavior (Hugh Wagner). Part II. Three Broad Theoretical

Emotion and sociable humanoid robots

  C. Breazeal. Int. J. Hum. Comput. Stud., 2003.

Introducing the Geneva Multimodal expression corpus for experimental research on emotion perception.

A new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS), is introduced, together with an associated database of microcoded facial, vocal, and body action elements as well as observer ratings.

Creating a Photoreal Digital Actor: The Digital Emily Project

The Digital Emily Project is a collaboration between facial animation company Image Metrics and the Graphics Laboratory at the University of Southern California’s Institute for Creative Technologies