Corpus ID: 204852090

Face Behavior à la carte: Expressions, Affect and Action Units in a Single Network

@article{Kollias2019FaceB,
  title={Face Behavior {\`a} la carte: Expressions, Affect and Action Units in a Single Network},
  author={Dimitrios Kollias and Viktoriia Sharmanska and Stefanos Zafeiriou},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.11111}
}
Automatic facial behavior analysis has a long history of studies at the intersection of computer vision, physiology and psychology. However, it is only recently, with the collection of large-scale datasets and powerful machine learning methods such as deep neural networks, that automatic facial behavior analysis started to thrive. Three of its iconic tasks are automatic recognition of basic expressions (e.g. happy, sad, surprised), estimation of continuous emotions (e.g., valence and arousal), and detection of facial action units (AUs) …
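The title's "single network" idea, one model that jointly handles expression classification, valence-arousal estimation and action unit detection, can be illustrated with a minimal multi-task sketch: a shared image backbone feeding three task-specific heads trained with one joint loss. The Python code below is only an illustration under common assumptions (PyTorch/torchvision, a ResNet-18 backbone, 7 basic expressions, 17 action units); it is not the architecture or training recipe reported in the paper.

import torch
import torch.nn as nn
from torchvision import models

class MultiTaskFaceNet(nn.Module):
    """Shared backbone with three heads: expressions, valence-arousal, AUs.
    Illustrative only; the backbone choice and head sizes are assumptions."""
    def __init__(self, num_expressions=7, num_aus=17):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any image backbone would do
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep pooled features only
        self.backbone = backbone
        self.expr_head = nn.Linear(feat_dim, num_expressions)  # 7-way softmax logits
        self.va_head = nn.Linear(feat_dim, 2)                   # valence, arousal in [-1, 1]
        self.au_head = nn.Linear(feat_dim, num_aus)             # per-AU sigmoid logits

    def forward(self, x):
        f = self.backbone(x)
        return {
            "expression": self.expr_head(f),
            "valence_arousal": torch.tanh(self.va_head(f)),
            "action_units": self.au_head(f),
        }

def joint_loss(out, expr_y, va_y, au_y):
    # Cross-entropy for the categorical expression task, MSE as a stand-in for a
    # dimensional (valence-arousal) loss, and binary cross-entropy for the
    # multi-label AU task; in practice the three terms are usually weighted.
    return (nn.functional.cross_entropy(out["expression"], expr_y)
            + nn.functional.mse_loss(out["valence_arousal"], va_y)
            + nn.functional.binary_cross_entropy_with_logits(out["action_units"], au_y))

In practice, in-the-wild corpora rarely provide all three label types for every image, so such a joint loss is typically masked so that each term is computed only on samples carrying the corresponding annotation; several of the citing works below address exactly this incomplete-label setting.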

Citations

Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework
A novel multi-task and holistic framework is presented which is able to jointly learn, effectively generalize, and perform affect recognition over all existing in-the-wild databases.
Knowledge Augmented Deep Neural Networks for Joint Facial Expression and Action Unit Recognition
A constraint optimization method is proposed to encode the generic knowledge on expression-AU probabilistic dependencies into a Bayesian Network (BN), which is then integrated into a deep learning framework as weak supervision for an AU detection model.
RAF-AU Database: In-the-Wild Facial Expressions with Subjective Emotion Judgement and Objective AU Annotations
A sign-based and judgement-based approach to annotating blended facial expressions in the wild is developed, using crowdsourcing as a promising strategy for labeling in-the-wild facial expressions, and a baseline for AU recognition is provided using popular features and multi-label learning methods.
Multi-term and Multi-task Affect Analysis in the Wild
This paper introduces the affect recognition method that was submitted to the Affective Behavior Analysis in-the-wild (ABAW) 2020 Contest, and fuses the VA and EXP models, taking into account that Valence, Arousal, and Expression are closely related.
A Multi-term and Multi-task Analyzing Framework for Affective Analysis in-the-wild
Human affective recognition is an important factor in human-computer interaction. However, methods developed with in-the-wild data are not yet accurate enough for practical usage. In this paper, …
Multitask Emotion Recognition with Incomplete Labels
This work trains a unified model to perform three tasks: facial action unit detection, expression classification, and valence-arousal estimation, and proposes an algorithm for the multitask model to learn from missing (incomplete) labels.
A Multi-component CNN-RNN Approach for Dimensional Emotion Recognition in-the-wild
The target has been to obtain the best performance on the OMG-Emotion visual validation data set, while learning from the respective visual training data set.
Two-Stream Aural-Visual Affect Analysis in the Wild
This work proposes a two-stream aural-visual analysis model to recognize affective behavior from videos, which achieves promising results on the challenging Aff-Wild2 database.
Prior Aided Streaming Network for Multi-task Affective Recognition at the 2nd ABAW2 Competition
This paper proposes a multi-task streaming network based on the heuristic that the three emotion representations are intrinsically associated with each other, and leverages an advanced facial expression embedding as prior knowledge, which captures identity-invariant expression features while preserving expression similarities to aid the downstream recognition tasks.
A Multi-modal and Multi-task Learning Method for Action Unit and Expression Recognition
A multi-modal and multi-task learning method is proposed that uses both visual and audio information to train the model and applies a sequence model to further extract associations between video frames; experiments demonstrate the effectiveness of this approach in improving model performance.

References

Showing 1-10 of 52 references
View-Independent Facial Action Unit Detection
  • Chuangao Tang, W. Zheng, +4 authors Zhen Cui
  • Computer Science
  • 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
  • 2017
A simple and efficient deep learning based system is presented to detect AU occurrence under nine different facial views; it trains a corresponding expert network for each type of AU by specifically fine-tuning the VGG-Face network on cross-view facial images, so as to extract more discriminative features for the subsequent binary classification.
Cross-dataset learning and person-specific normalisation for automatic Action Unit detection
This paper presents a real-time Facial Action Unit intensity estimation and occurrence detection system based on appearance (Histograms of Oriented Gradients) and geometry features (shape parameters and landmark locations) and demonstrates the generalisability of this approach.
A Multi-Task Learning & Generation Framework: Valence-Arousal, Action Units & Primary Expressions
This paper first annotates a part of the Aff-Wild database in terms of AUs, then sets up and tackles multi-task learning for emotion recognition, as well as for facial image generation.
Do Deep Neural Networks Learn Facial Action Units When Doing Expression Recognition?
This work trains a zero-bias CNN on facial expression data and achieves, to the authors' knowledge, state-of-the-art performance on two expression recognition benchmarks: the extended Cohn-Kanade (CK+) dataset and the Toronto Face Dataset (TFD).
Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace
This work substantially extends the largest available in-the-wild database (Aff-Wild) to study continuous emotions such as valence and arousal and annotates parts of the database with basic expressions and action units, which allows the joint study of all three types of behavior states.
EmotioNet: An Accurate, Real-Time Algorithm for the Automatic Annotation of a Million Facial Expressions in the Wild
A novel computer vision algorithm is presented to annotate a large database of one million images of facial expressions of emotion in the wild that can be readily queried using semantic descriptions for applications in computer vision, affective computing, social and cognitive psychology and neuroscience.
From Emotions to Action Units with Hidden and Semi-Hidden-Task Learning
This paper investigates how the use of large databases labelled according to the 6 universal facial expressions can increase the generalization ability of Action Unit classifiers and proposes a novel learning framework: Hidden-Task Learning.
Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis
  • Zheng Zhang, J. Girard, +10 authors L. Yin
  • Computer Science
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
A well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants is presented, which includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection.
Multiple facial action unit recognition enhanced by facial expressions
This paper proposes a novel facial action unit recognition method enhanced by facial expressions, which are only required during training, and proposes a three-layer restricted Boltzmann machine (RBM) to capture the probabilistic dependencies among expressions and AUs.
FATAUVA-Net: An Integrated Deep Learning Framework for Facial Attribute Recognition, Action Unit Detection, and Valence-Arousal Estimation
This paper proposes an integrated deep learning framework for facial attribute recognition, AU detection, and V-A estimation; the key idea is to apply AUs to estimate the V-A intensity, since both AUs and V-A space can be utilized to recognize some emotion categories.