PersEmoN: A Deep Network for Joint Analysis of Apparent Personality, Emotion and Their Relationship

@article{Zhang2018PersEmoNAD,
  title={PersEmoN: A Deep Network for Joint Analysis of Apparent Personality, Emotion and Their Relationship},
  author={Le Zhang and Songyou Peng and Stefan Winkler},
  journal={ArXiv},
  year={2018},
  volume={abs/1811.08657}
}
Apparent personality and emotion analysis are both central to affective computing. Existing works solve them individually. In this paper we investigate if such high-level affect traits and their relationship can be jointly learned from face images in the wild. To this end, we introduce PersEmoN, an end-to-end trainable and deep Siamese-like network. It consists of two convolutional network branches, one for emotion and the other for apparent personality. Both networks share their bottom feature…
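For readers who want a concrete picture of the shared-bottom, two-branch design sketched in the abstract, a minimal illustrative sketch in PyTorch follows; the backbone, layer sizes and head dimensions are placeholders chosen for the example and are not taken from the paper.

import torch
import torch.nn as nn

class TwoBranchAffectNet(nn.Module):
    """Minimal sketch of a Siamese-like network with a shared bottom
    (convolutional trunk) and two task-specific branches: one regressing
    Big-Five apparent personality scores, one predicting emotion attributes.
    All layer choices are illustrative, not the authors' architecture."""

    def __init__(self, num_personality=5, num_emotions=8):
        super().__init__()
        # Shared bottom feature extractor (stands in for a full CNN backbone).
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Branch-specific heads on top of the shared features.
        self.personality_head = nn.Linear(64, num_personality)  # regression in [0, 1]
        self.emotion_head = nn.Linear(64, num_emotions)         # classification logits

    def forward(self, face_images):
        features = self.shared(face_images)
        personality = torch.sigmoid(self.personality_head(features))
        emotion_logits = self.emotion_head(features)
        return personality, emotion_logits

# Example: a batch of 4 face crops (3x128x128) yields both outputs at once.
model = TwoBranchAffectNet()
personality, emotion_logits = model(torch.randn(4, 3, 128, 128))

The sketch only shows the forward structure; how PersEmoN supervises and aligns the two branches is described in the paper itself.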
Being the center of attention: A Person-Context CNN framework for Personality Recognition
TLDR
A novel multi-stream Convolutional Neural Network (CNN) framework, which considers multiple sources of information, and presents CNN class activation maps for each personality trait, shedding light on behavioral patterns linked with personality attributes.
Attention Learning with Retrievable Acoustic Embedding of Personality for Emotion Recognition
  • Jeng-Lin Li, Chi-Chun Lee
  • Computer Science
  • 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII)
  • 2019
TLDR
This work proposes a Personal Attribute-Aware Attention Network (PAaAN) that learns its multimodal attention weights jointly with the target speaker's retrievable acoustic embedding of personality, and achieves 70% unweighted accuracy on the four-class emotion recognition task of IEMOCAP.
Single-Modal Video Analysis of Personality Traits using Low-Level Visual Features
  • Daniel Helm, M. Kampel
  • Computer Science, Psychology
  • 2020 Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA)
  • 2020
TLDR
This paper investigates how various pre-processing methods, such as face extraction and data augmentation, influence the predicted personality confidences, and explores different training strategies and optimization techniques (e.g. regularization) to improve model performance.
Simultaneous prediction of valence / arousal and emotion categories and its application in an HRC scenario
TLDR
The proposed approach predicts both basic emotion categories and continuous valence/arousal values for the emotional state, uses these to measure the emotional state of users in a Human-Robot-Collaboration (HRC) scenario, and examines how different feedback mechanisms counteract negative emotions users experience while interacting with a robot system.
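As a rough illustration of the kind of joint categorical-plus-continuous affect prediction summarized above, the following toy multi-task head pairs class logits over basic emotion categories with a two-dimensional valence/arousal regression; the feature dimension, category count and unweighted loss sum are assumptions made for the sketch, not the paper's design.

import torch
import torch.nn as nn

class EmotionVAHead(nn.Module):
    """Toy multi-task head: class logits over basic emotion categories plus a
    2-D valence/arousal output squashed to [-1, 1]. Dimensions are illustrative."""

    def __init__(self, feat_dim=128, num_categories=7):
        super().__init__()
        self.category = nn.Linear(feat_dim, num_categories)
        self.valence_arousal = nn.Linear(feat_dim, 2)

    def forward(self, features):
        logits = self.category(features)
        va = torch.tanh(self.valence_arousal(features))  # continuous affect estimate
        return logits, va

# Joint loss: cross-entropy for categories, MSE for valence/arousal targets.
head = EmotionVAHead()
features = torch.randn(8, 128)          # stand-in for CNN features of 8 faces
logits, va = head(features)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 7, (8,))) \
     + nn.MSELoss()(va, torch.empty(8, 2).uniform_(-1, 1))
loss.backward()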
Simultaneous Prediction of Valence / Arousal and Emotion Categories in Real-time
TLDR
Evaluation on the AffectNet dataset and cross-database evaluation on the Aff-Wild dataset show that the proposed approach predicts emotion categories as well as valence and arousal values with high accuracy.
Psycholinguistic Tripartite Graph Network for Personality Detection
TLDR
Benefiting from the tripartite graph, TrigNet can aggregate post information from a psychological perspective, which is a novel way of exploiting domain knowledge.
Action Recognition Using Co-trained Deep Convolutional Neural Networks
TLDR
This work proposes a novel semi-supervised learning approach that allows multiple streams to supervise each other in a co-training strategy, thus making the training simultaneous in the two modalities, and demonstrates the effectiveness of the approach through extensive experiments on the UCF101 and HMDB datasets.
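The co-training idea summarized above (two modality streams pseudo-labeling unlabeled clips for each other) can be sketched roughly as follows; the confidence threshold, toy linear "streams" and tensor shapes are placeholders, not the paper's setup.

import torch
import torch.nn as nn

def cotrain_step(model_rgb, model_flow, unlabeled_rgb, unlabeled_flow,
                 opt_rgb, opt_flow, threshold=0.9):
    """One illustrative co-training step: each stream's confident predictions
    on unlabeled clips become training targets for the other stream."""
    ce = nn.CrossEntropyLoss()
    with torch.no_grad():
        probs_rgb = torch.softmax(model_rgb(unlabeled_rgb), dim=1)
        probs_flow = torch.softmax(model_flow(unlabeled_flow), dim=1)

    # RGB stream pseudo-labels the flow stream, keeping only confident samples.
    conf, labels = probs_rgb.max(dim=1)
    keep = conf > threshold
    if keep.any():
        opt_flow.zero_grad()
        ce(model_flow(unlabeled_flow[keep]), labels[keep]).backward()
        opt_flow.step()

    # Flow stream pseudo-labels the RGB stream in the same way.
    conf, labels = probs_flow.max(dim=1)
    keep = conf > threshold
    if keep.any():
        opt_rgb.zero_grad()
        ce(model_rgb(unlabeled_rgb[keep]), labels[keep]).backward()
        opt_rgb.step()

# Toy usage with two linear "streams" over flattened clip features.
rgb, flow = nn.Linear(2048, 101), nn.Linear(2048, 101)
opt_r = torch.optim.SGD(rgb.parameters(), lr=0.01)
opt_f = torch.optim.SGD(flow.parameters(), lr=0.01)
cotrain_step(rgb, flow, torch.randn(16, 2048), torch.randn(16, 2048), opt_r, opt_f)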
A Survey on Personality-Aware Recommendation Systems
TLDR
This survey explores the different design choices of personality-aware recommendation systems, by comparing their personality modeling methods, as well as their recommendation techniques.
Artificial Intelligence. IJCAI 2019 International Workshops: Macao, China, August 10–12, 2019, Revised Selected Best Papers
TLDR
This paper runs extensive experiments of recent models on real financial data, compares their performance in depth, and shows the usage of a completed knowledge graph in the consumer banking sector.

References

Showing 1-10 of 55 references
Give Me One Portrait Image, I Will Tell You Your Emotion and Personality
TLDR
An end-to-end trainable and deep Siamese-like network that can take one portrait photo as input and predict one's Big-Five apparent personality as well as emotion attributes, and demonstrates the feasibility of inferring the apparent personality directly from emotion.
Interpreting CNN Models for Apparent Personality Trait Regression
TLDR
A deep study on understanding why CNN models perform surprisingly well on this complex problem, using current techniques for CNN model interpretability, combined with face detection and Action Unit (AU) recognition systems, to perform quantitative studies.
Deep Bimodal Regression for Apparent Personality Analysis
TLDR
The Deep Bimodal Regression framework is proposed as a solution for the Apparent Personality Analysis competition track of the ChaLearn Looking at People challenge held in association with ECCV 2016.
Deep Impression: Audiovisual Deep Residual Networks for Multimodal Apparent Personality Trait Recognition
TLDR
An audiovisual deep residual network for multimodal apparent personality trait recognition that is trained end-to-end for predicting the Big Five personality traits of people from their videos.
Bi-modal First Impressions Recognition Using Temporally Ordered Deep Audio and Stochastic Visual Features
TLDR
A novel approach for First Impressions Recognition in terms of the Big Five personality traits from short videos, using bi-modal end-to-end deep neural network architectures with temporally ordered audio and novel stochastic visual features from few frames, without over-fitting.
Estimation of Affective Level in the Wild with Multiple Memory Networks
  • Jianshu Li, Y. Chen, +5 authors T. Sim
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2017
This paper presents the proposed solution to the "affect in the wild" challenge, which aims to estimate the affective level, i.e. the valence and arousal values, of every frame in a video. …
Predicting the Sixteen Personality Factors (16PF) of an individual by analyzing facial features
TLDR
It is shown that there is a significant relationship between the emotions elicited in the analyzed subjects and the high prediction accuracy obtained for each of the 16 personality traits, as well as notable correlations between distinct sets of AUs present at high intensity levels and increased personality trait prediction accuracy.
Multimodal emotion recognition using deep learning architectures
TLDR
A database of multimodal recordings of actors enacting various expressions of emotions is presented, consisting of audio and video sequences of actors displaying three different intensities of expression for 23 different emotions, along with facial feature tracking, skeletal tracking and the corresponding physiological data.
Deep learning for robust feature generation in audiovisual emotion recognition
TLDR
A suite of Deep Belief Network models is proposed and evaluated, and it is demonstrated that these models improve emotion classification performance over baselines that do not employ deep learning, suggesting that the learned high-order non-linear relationships are effective for emotion recognition.
Facial Affect Estimation in the Wild Using Deep Residual and Convolutional Networks
  • Behzad Hassani, M. Mahoor
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2017
TLDR
Three neural network-based methods, built on Inception-ResNet modules redesigned specifically for the task of facial affect estimation, are presented as submissions to the First Affect-in-the-Wild challenge.