Dynamic facial expressions are processed holistically, but not more holistically than static facial expressions

@article{Tobin2016DynamicFE,
  title={Dynamic facial expressions are processed holistically, but not more holistically than static facial expressions},
  author={Alanna Tobin and Simone Favelle and Romina Palermo},
  journal={Cognition and Emotion},
  year={2016},
  volume={30},
  pages={1208--1221},
  url={https://api.semanticscholar.org/CorpusID:34169282}
}
Any advantage in recognising dynamic over static expressions is unlikely to stem from enhanced holistic processing; rather, motion may emphasise or disambiguate diagnostic featural information.

The role of facial movements in emotion recognition

Most past research on emotion recognition has used photographs of posed expressions intended to depict the apex of the emotional display. Although these studies have provided important insights into

Facial Movements Facilitate Part-Based, Not Holistic, Processing in Children, Adolescents, and Adults

The results suggest that contrary to the prevailing view, facial movements facilitate part-based, not holistic, face processing in children, adolescents, and adults.

Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity

The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis.

Differences in configural processing for human versus android dynamic facial expressions

The results suggest that dynamic facial expressions are processed in a synchrony-based configural manner for humans, but not for androids.

Eye Fixation Patterns for Categorizing Static and Dynamic Facial Expressions

This study directly compared the visual strategies underlying the recognition of static and dynamic facial expressions using eye tracking and the Bubbles technique, which revealed different eye fixation patterns with the 2 kinds of stimuli.

Emotional gist: the rapid perception of facial expressions

Results provide evidence that the holistic gist perception of expression cannot be overridden by selective attention.

The Influence of Key Facial Features on Recognition of Emotion in Cartoon Faces

The results show that happy cartoon expressions were recognized more accurately than neutral and sad expressions, which was consistent with the happiness recognition advantage revealed in real face studies.

Power of averaging: Noise reduction by ensemble coding of multiple faces.

The results suggest that ensemble coding provides a powerful noise-cancellation mechanism for individual face representations in face perception.

Brain networks processing temporal information in dynamic facial expressions

It is shown that dynamic expressions with synchronous movement cues distinctively engage brain areas responsible for the motor execution of expressions, including medial prefrontal areas in the ventral anterior cingulate cortex, supplementary premotor areas, and the bilateral superior frontal gyrus.

Can Perceivers Differentiate Intense Facial Expressions? Eye Movement Patterns

Recent research on intense real-life faces has shown that although there was an objective difference in facial activities between intense winning faces and losing faces, viewers failed to

Mixed emotions: Holistic and analytic perception of facial expressions

In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either a congruent or an incongruent composite expression (e.g., angry top + happy bottom), and the results were discussed in terms of holistic and analytic processing of facial expressions.

Recognition of emotion in moving and static composite faces

This paper investigates whether the greater accuracy of emotion identification for dynamic versus static expressions, as noted in previous research, can be explained through heightened levels of

Emotion recognition: the role of facial movement and the relative importance of upper and lower areas of the face.

The results demonstrated that moving displays of happiness, sadness, fear, surprise, anger and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions, indicating that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions.

Is there a dynamic advantage for facial expressions?

With a threshold model, whether discriminative information is integrated more effectively in dynamic than in static conditions is tested and it is found that neither identification accuracy nor RTs supported the dynamic advantage hypothesis.

Featural evaluation, integration, and judgment of facial affect.

The paradigm of the fuzzy logical model of perception (FLMP) is extended to the domain of perception and recognition of facial affect and results indicate that participants evaluated and integrated information from both features to perceive affective expressions.

Featural processing in recognition of emotional facial expressions

The complexity of the results suggests that the recognition process of emotional facial expressions cannot be reduced to a simple feature processing or holistic processing for all emotions.

Studying the dynamics of emotional expression using synthesized facial muscle movements.

Synthetic images of facial expression were used to assess whether judges can correctly recognize emotions exclusively on the basis of configurations of facial muscle movements, and the effect of static versus dynamic presentation of the expressions was studied.

Anti-Expression Aftereffects Reveal Prototype-Referenced Coding of Facial Expressions

Evidence that visual representations of facial expressions of emotion are coded with reference to a prototype within a multidimensional framework is provided.

Moving faces, looking places: validation of the Amsterdam Dynamic Facial Expression Set (ADFES).

Two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES), show that participants more strongly perceived themselves to be the cause of the other's emotion when the model's face turned toward the respondents.