Facial Expression Recognition from World Wild Web

  • Ali Mollahosseini, Behzad Hassani, Michelle J. Salvador, Hojjat Abdollahi, David Chan, Mohammad H. Mahoor
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Recognizing facial expressions in a wild setting remains a challenging task in computer vision. Three search engines were queried using 1250 emotion-related keywords in six different languages, and the retrieved images were mapped by two annotators to the six basic expressions and neutral.


Recognition of facial expressions based on CNN features
A method for facial expression recognition based on features extracted with convolutional neural networks (CNNs), taking advantage of a model pre-trained on similar tasks; it recognizes the six universal expressions with an accuracy above 92% on five widely used databases.
Convolutional Neural Networks Models for Facial Expression Recognition
This research builds an emotion-expression recognition system using convolutional neural networks, comparing two configurations (batch sizes 8 and 128) on the FER-2013 dataset, a self-created dataset, and a cross-dataset setting, targeting four emotion expressions related to customer service.
AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild
AffectNet is by far the largest in-the-wild database of facial expression, valence, and arousal, enabling research on automated facial expression recognition under two different emotion models; various evaluation metrics show that the deep neural network baselines outperform conventional machine learning methods and off-the-shelf facial expression recognition systems.
A Compact Embedding for Facial Expression Similarity
The goal is to describe facial expressions in a continuous fashion using a compact embedding space that mimics human visual preferences, and it is shown that the embedding learned using the proposed dataset performs better than several other embeddings learned using existing emotion or action unit datasets.
Recognizing Facial Expressions of Occluded Faces using Convolutional Neural Networks
An approach based on convolutional neural networks for facial expression recognition in a difficult setting with severe occlusions is presented, proving that there are enough clues in the lower part of the face to accurately predict facial expressions.
Learning to Augment Expressions for Few-shot Fine-grained Facial Expression Recognition
A novel Fine-grained Facial Expression Database, F2ED, is contributed in this paper; it includes more than 200k images with 54 facial expressions from 119 persons. A unified task-driven framework, the Compositional Generative Adversarial Network (Comp-GAN), learns to synthesize facial images and thus augments the instances of few-shot expression classes.
Spatio-Temporal Facial Expression Recognition Using Convolutional Neural Networks and Conditional Random Fields
  • Behzad Hassani, M. Mahoor
  • Computer Science
    2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
  • 2017
The experimental results show that cascading the deep network architecture with the CRF module considerably improves the recognition of facial expressions in videos; in particular, it outperforms the state-of-the-art methods in the cross-database experiments and yields comparable results in the subject-independent experiments.
A Survey on Factors Affecting Facial Expression Recognition based on Convolutional Neural Networks
This survey provides a critique of past work, offers recommendations for each step of the FER pipeline, and lists some open, unanswered questions in FER that deserve further investigation.
Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural Networks
This paper proposes a 3D convolutional neural network method for FER in videos that outperforms state-of-the-art methods and emphasizes the importance of facial components over facial regions that may not contribute significantly to generating facial expressions.
Deep features-based expression-invariant tied factor analysis for emotion recognition
This paper proposes sequential-based and image-based tied factor analysis frameworks with a deep network that simultaneously addresses these two problems, and uses a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition.
Going deeper in facial expression recognition using deep neural networks
A deep neural network architecture is proposed to address the FER problem across multiple well-known standard face datasets; it is comparable to or better than state-of-the-art methods, and surpasses traditional convolutional neural networks in both accuracy and training time.
Web-based database for facial expression analysis
The MMI facial expression database is presented, comprising more than 1500 samples of both static images and image sequences of faces in frontal and profile view, displaying various expressions of emotion and single and multiple facial muscle activations.
Facial expression recognition based on Local Binary Patterns: A comprehensive study
Image based Static Facial Expression Recognition with Multiple Deep Network Learning
This work reports the proposed image-based static facial expression recognition method for the Emotion Recognition in the Wild Challenge (EmotiW) 2015, and presents two schemes for learning the ensemble weights of the network responses: minimizing the log-likelihood loss and minimizing the hinge loss.
Why is facial expression analysis in the wild challenging?
It turns out that under close-to-real conditions, especially with co-occurring speech, it is hard even for humans to assign emotion labels to clips from video alone; the resulting challenges for facial expression analysis in the wild are discussed.
Static facial expression analysis in tough conditions: Data, evaluation protocol and benchmark
A person independent training and testing protocol for expression recognition as part of the BEFIT workshop is proposed and a new static facial expression database Static Facial Expressions in the Wild (SFEW) is presented.
Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected "In-the-Wild"
This work presents a comprehensively labeled dataset of ecologically valid spontaneous facial responses recorded in natural settings over the Internet, available for distribution to researchers online.
Emotion recognition in the wild via sparse transductive transfer linear discriminant analysis
A sparse transductive transfer linear discriminant analysis (STTLDA) is developed for facial expression recognition and speech emotion recognition under real-world environments. This work is the first to treat emotion recognition in the wild as a transfer learning problem, using a transductive transfer learning method to eliminate the distribution difference between training and testing samples caused by the "wild" conditions.