Spatial and Temporal Linearities in Posed and Spontaneous Smiles

@article{Trutoiu2014SpatialAT,
  title={Spatial and Temporal Linearities in Posed and Spontaneous Smiles},
  author={Laura C. Trutoiu and Elizabeth Jeanne Carter and Nancy S. Pollard and Jeffrey F. Cohn and Jessica K. Hodgins},
  journal={ACM Transactions on Applied Perception (TAP)},
  year={2014},
  volume={11},
  pages={1--15}
}
Creating facial animations that convey an animator’s intent is a difficult task because animation techniques are necessarily an approximation of the subtle motion of the face. Some animation techniques may result in linearization of the motion of vertices in space (blendshapes, for example), and other, simpler techniques may result in linearization of the motion in time. In this article, we consider the problem of animating smiles and explore how these simplifications in space and time affect… 
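The two simplifications the abstract names can be made concrete with a minimal numpy sketch (all names and vertex values below are illustrative, not from the paper): blendshape animation linearizes motion in *space*, because each vertex travels a straight line between the neutral and apex shapes as the blend weight varies, while linear keyframe timing linearizes motion in *time*, replacing the ease-in/ease-out of a real smile onset with a constant-rate ramp.

```python
import numpy as np

# Hypothetical neutral and smile-apex vertex positions (N vertices x 3).
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
apex    = np.array([[0.2, 0.5, 0.1], [1.1, 0.4, 0.0]])

def blendshape(weight):
    """Spatial linearization: each vertex lies on the straight line
    between its neutral and apex positions, regardless of weight."""
    return (1.0 - weight) * neutral + weight * apex

def linear_timing(t, duration):
    """Temporal linearization: the blend weight ramps linearly with
    time, clamped to [0, 1]."""
    return min(max(t / duration, 0.0), 1.0)

# Halfway through a 1-second onset, every vertex sits exactly at the
# midpoint of its straight-line path -- the combined simplification
# the article's perceptual experiments probe.
w = linear_timing(0.5, 1.0)
frame = blendshape(w)
```

A data-driven alternative, as the follow-up papers below suggest, would replace `linear_timing` with an interpolation function fit to captured smile trajectories.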

Data-Driven Model for Spontaneous Smiles

TLDR
This work presents a generative model for spontaneous smiles that preserves their dynamics and can thus be used to generate genuine animations, and suggests that data-driven interpolation functions accompanied by realistic head motions can be used by animators to generate more genuine smiles.

Perceptually Valid Dynamics for Smiles and Blinks

TLDR
A framework to explore representations of two key facial expressions, blinks and smiles, and it is shown that data-driven models are needed to realistically animate these expressions and can inform the design of realistic animation systems by highlighting common assumptions that over-simplify the dynamics of expressions.

Boxing the face: A comparison of dynamic facial databases used in facial analysis and animation

TLDR
This paper focuses on the selection of technically adequate databases that offer sufficient resolution of the face and expressions to allow adequate modelling of facial dynamics in still images.

Dynamic properties of successful smiles

TLDR
A computer-animated 3D facial tool is used to investigate how dynamic properties of a smile are perceived, finding that a successful smile can be expressed via a variety of different spatiotemporal trajectories, involving an intricate balance of mouth angle, smile extent, and dental show combined with dynamic symmetry.

Dynamics of facial actions for assessing smile genuineness.

TLDR
This work explores the possibilities of extracting discriminative features directly from the dynamics of facial action units to differentiate between genuine and posed smiles and develops a new technique for identifying the smile phases, which is robust against the noise and allows for continuous analysis of facial videos.

In Search of Truth: Analysis of Smile Intensity Dynamics to Detect Deception

TLDR
The results of experimental validation on the UvA-NEMO benchmark database indicate that the method is highly competitive and allows for real-time discrimination between posed and spontaneous expressions at the early smile onset phase.

Quantitative Laughter Detection, Measurement, and Classification—A Critical Survey

TLDR
This survey aims at collecting and presenting objective measurement methods and results from a variety of studies in different fields, to contribute to building a unified model and taxonomy of laughter.

The McNorm library: creating and validating a new library of emotionally expressive whole body dance movements

The ability to exchange affective cues with others plays a key role in our ability to create and maintain meaningful social relationships. We express our emotions through a variety of socially

Relaxed Spatio-Temporal Deep Feature Aggregation for Real-Fake Expression Prediction

  • Savas Özkan, G. Akar
  • Computer Science
    2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
  • 2017
TLDR
A learnable aggregation technique whose primary objective is to retain short-time temporal structure between frame-level features and their spatial interdependencies in the representation, and which can be extended to different problems such as action/event recognition in the future.

References

SHOWING 1-10 OF 27 REFERENCES

Evaluating the perceptual realism of animated facial expressions

TLDR
This work presents an integrated framework in which psychophysical experiments are used in a first step to systematically evaluate the perceptual quality of several different computer-generated animations with respect to real-world video sequences and provides important insights into facial expressions for both the perceptual and computer graphics community.

Modeling and animating eye blinks

TLDR
It is found that the animated blinks generated from the human data model with fully closing eyelids are consistently perceived as more natural than those created using the various types of blink dynamics proposed in animation textbooks.

Perception of linear and nonlinear motion properties using a FACS validated 3D facial model

TLDR
This paper presents the first Facial Action Coding System (FACS) valid model to be based on dynamic 3D scans of human faces for use in graphics and psychological research and reveals a significant overall benefit to using captured nonlinear geometric vertex motion over linear blend shape motion.

Perceptual effects of damped and exaggerated facial motion in animated characters

TLDR
It was discovered that motion changes of ±20% from the original motion affected perceptions of likeability and intelligence differently in realistic-looking and cartoon characters.

The temporal connection between smiles and blinks

TLDR
Evidence for a temporal relationship between eye blinking and smile dynamics (smile onset and offset) is presented and a marginally significant effect suggests that eye blinks are suppressed (less frequent) before smile onset.

Interactive region-based linear 3D face models

TLDR
This paper presents a linear face modelling approach that generalises to unseen data better than the traditional holistic approach while also allowing click-and-drag interaction for animation.

Bilinear spatiotemporal basis models

TLDR
The bilinear model is applied to natural spatiotemporal phenomena, including face, body, and cloth motion data, and compared in terms of compaction, generalization ability, predictive precision, and efficiency to existing models.
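The compaction that this TLDR refers to can be illustrated with a low-rank factorization sketch (a generic truncated-SVD stand-in, not the paper's exact formulation): a motion matrix of frames × vertex coordinates is approximated as the product of a small temporal basis and a small spatial basis, so far fewer numbers are stored than in the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy motion matrix: F frames x P vertex coordinates, built to have
# low rank (8), mimicking the redundancy of facial/body/cloth motion.
F, P = 60, 30
motion = rng.standard_normal((F, 8)) @ rng.standard_normal((8, P))

# Truncated SVD: U holds a temporal basis, Vt a spatial basis.
# Keeping k components gives a compact bilinear approximation.
U, s, Vt = np.linalg.svd(motion, full_matrices=False)
k = 8
approx = (U[:, :k] * s[:k]) @ Vt[:k]

# Relative reconstruction error; near zero here because the toy
# data's true rank does not exceed k.
err = np.linalg.norm(motion - approx) / np.linalg.norm(motion)
```

Storing `U[:, :k]`, `s[:k]`, and `Vt[:k]` takes k·(F + P + 1) numbers instead of F·P, which is the generalization/compaction trade-off the paper evaluates.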

The Timing of Facial Motion in Posed and Spontaneous Smiles

TLDR
These findings suggest that by extracting and representing dynamic as well as morphological features, automatic facial expression analysis can begin to discriminate among the message values of morphologically similar expressions.

All Smiles are Not Created Equal: Morphology and Timing of Smiles Perceived as Amused, Polite, and Embarrassed/Nervous

TLDR
Compared three perceptually distinct types of smiles, it was found that perceived smile meanings were related to specific variation in smile morphological and dynamic characteristics.

Can Duchenne smiles be feigned? New evidence on felt and false smiles.

TLDR
The predictive value of the D smile in these judgment studies was limited compared with other features such as asymmetry, apex duration, and nonpositive facial actions, and was only significant for ratings of the upper face and static displays.