Exploiting Multi-CNN Features in CNN-RNN Based Dimensional Emotion Recognition on the OMG in-the-Wild Dataset

  • Dimitrios Kollias, Stefanos Zafeiriou
  • IEEE Transactions on Affective Computing
This article presents a novel CNN-RNN based approach that exploits multiple CNN features for dimensional emotion recognition in-the-wild, utilizing the One-Minute Gradual-Emotion (OMG-Emotion) dataset. The approach first pre-trains on the relevant and large Aff-Wild and Aff-Wild2 emotion databases. Low-, mid- and high-level features are then extracted from the trained CNN component and exploited by RNN subnets in a multi-task framework. Their outputs constitute an…

Deep Auto-Encoders With Sequential Learning for Multimodal Dimensional Emotion Recognition

This paper proposes a novel deep neural network architecture, consisting of a two-stream auto-encoder and a long short-term memory (LSTM) network, that effectively integrates visual and audio signal streams for emotion recognition and achieves state-of-the-art recognition performance.

AI-MIA: COVID-19 Detection & Severity Analysis through Medical Imaging

The baseline approach is a deep learning method based on a CNN-RNN network; its performance is reported on the COVID19-CT-DB database, which is annotated for COVID-19 detection and consists of about 7,700 3-D CT scans.

Affective Processes: stochastic modelling of temporal context for emotion and facial expression recognition

This work builds upon the framework of Neural Processes to propose a method for apparent emotion recognition with three key novel components: probabilistic contextual representation with a global latent variable model; temporal context modelling using task-specific predictions in addition to features; and smart temporal context selection.

Mosquito Type Identification using Convolution Neural Network

A deep learning model is developed that can predict the type of mosquito with high accuracy, reaching a peak accuracy of 99.2% during training.

Study on Smoking and Telephone Behaviour Detection Based on Convolutional Neural Network

  • Jiahong Li
  • Computer Science
    2022 International Conference on Machine Learning and Intelligent Systems Engineering (MLISE)
  • 2022
This paper connects the trained convolutional neural network model to a camera to detect people's behavior in real time, and shows that the algorithm achieves good accuracy.

Multiple Machine Learning Algorithms for Human Smoking Behavior Detection

  • Chenxin Cui, Ruofeng Xu
  • Computer Science
    2022 International Conference on Machine Learning and Intelligent Systems Engineering (MLISE)
  • 2022
Three models were proposed to detect smoking behavior automatically; the CNN model achieved the best performance, with an accuracy of 94.59% on a testing dataset of 522 images.

Development of a fake news detection tool for Vietnamese based on deep learning techniques

The tool was able to detect fake news quickly and easily, with a correct rate of about 85%.

Development of a Website for Malarial Detection using Deep Learning

This website can be used as a pre-screening test for malaria in times when a person cannot reach out to the nearest doctor and can be updated and converted as a software application in the future.

A Deep Learning-based Approach for Surface Crack Detection using Convolutional Neural Network

A deep learning network is created that can predict the existence of cracks simply by scanning an image, and is highly efficient compared to other deep learning models in terms of accuracy, loss, and prediction time.

Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling

Advanced recurrent units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the more recently proposed gated recurrent unit (GRU), are found to outperform traditional tanh units, with the GRU performing comparably to the LSTM.
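The gating mechanism that abstract refers to can be illustrated with a minimal scalar GRU cell; the weights below are illustrative assumptions, not taken from any of the papers listed here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One scalar GRU step; w holds the six weights (wz, uz, wr, ur, wh, uh)."""
    wz, uz, wr, ur, wh, uh = w
    z = sigmoid(wz * x + uz * h)                 # update gate: how much to overwrite
    r = sigmoid(wr * x + ur * h)                 # reset gate: how much past state to use
    h_tilde = math.tanh(wh * x + uh * (r * h))   # candidate hidden state
    return (1.0 - z) * h + z * h_tilde           # interpolate old and candidate state

# Run a short input sequence through the cell with fixed illustrative weights.
weights = (0.5, -0.3, 0.8, 0.1, 1.0, 0.4)
h = 0.0
for x in [1.0, -0.5, 0.25]:
    h = gru_cell(x, h, weights)
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden state stays in (-1, 1) from a zero start, which is part of what makes gated units stable over long sequences.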

Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace

This work substantially extends the largest available in-the-wild database (Aff-Wild) to study continuous emotions such as valence and arousal and annotates parts of the database with basic expressions and action units, which allows the joint study of all three types of behavior states.

audEERING's approach to the One-Minute-Gradual Emotion Challenge

This paper describes audEERING's submissions, as well as additional evaluations, for the One-Minute-Gradual (OMG) emotion recognition challenge, providing results for audio and video processing on…

A Deep Network for Arousal-Valence Emotion Prediction with Acoustic-Visual Cues

This paper comprehensively describes the methodology of the submissions to the One-Minute Gradual-Emotion Behavior Challenge 2018, which aims to improve the understanding of human emotion regulation in the one-minute period.

Multimodal Multi-task Learning for Dimensional and Continuous Emotion Recognition

This paper presents the effort for the Affect Subtask in the Audio/Visual Emotion Challenge (AVEC) 2017, which requires participants to perform continuous emotion prediction on three affective dimensions: Arousal, Valence and Likability based on the audiovisual signals and highlights three aspects of the solutions.

Deep Affect Prediction in-the-Wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond

The Aff-Wild benchmark for training and evaluating affect recognition algorithms is introduced, and an end-to-end deep neural architecture that predicts continuous emotion dimensions from visual cues is designed and extensively trained.

Analysing Affective Behavior in the First ABAW 2020 Competition

This paper describes the Affective Behavior Analysis in-the-wild 2020 Competition, the first competition aiming at automatic analysis of the three main behavior tasks of valence-arousal estimation, basic expression recognition and action unit detection, and presents the evaluation metrics.

Deep Neural Network Augmentation: Generating Faces for Affect Analysis

Qualitative experiments illustrate the generation of realistic images, when the neutral image is sampled from fifteen well known lab-controlled or in-the-wild databases, including Aff-Wild, AffectNet, RAF-DB; comparisons with generative adversarial networks (GANs) show the higher quality achieved by the proposed approach.

Artificial Neural Network for Diagnose Autism Spectrum Disorder

An Artificial Neural Network (ANN) model was developed and tested for diagnosing Autism Spectrum Disorder (ASD); test-data evaluation shows that the model correctly diagnoses ASD with 100% accuracy.