Corpus ID: 13947149

OMG - Emotion Challenge Solution

Yuqi Cui, Xiao Zhang, Yang Wang, Chenfeng Guo, Dongrui Wu
This short paper describes our solution to the 2018 IEEE World Congress on Computational Intelligence One-Minute Gradual-Emotional Behavior Challenge, whose goal was to estimate continuous arousal and valence values from short videos. We designed four base regression models using visual and audio features, and then used a spectral approach to fuse them to obtain improved performance. 
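The fusion details are not reproduced on this page. As a rough illustration of the spectral idea (combining base regressors without labels, in the spirit of the spectral meta-learner), the following is a minimal pure-Python sketch; the function names, the simplified variant (leading eigenvector of the diagonal-zeroed prediction correlation matrix, found by power iteration), and all parameters are our own assumptions, not the authors' implementation:

```python
import random

def spectral_fusion_weights(preds, iters=200):
    """Unsupervised fusion weights for base regressors.

    Simplified spectral sketch: for independent-noise base models, the
    off-diagonal of their prediction correlation matrix is rank-one, so
    the leading eigenvector of the diagonal-zeroed correlation matrix
    ranks the models; it is normalized into fusion weights.
    preds: list of prediction lists, one per base model.
    """
    m, n = len(preds), len(preds[0])
    means = [sum(p) / n for p in preds]
    stds = [(sum((x - mu) ** 2 for x in p) / (n - 1)) ** 0.5
            for p, mu in zip(preds, means)]
    corr = [[0.0 if i == j else
             sum((preds[i][k] - means[i]) * (preds[j][k] - means[j])
                 for k in range(n)) / ((n - 1) * stds[i] * stds[j])
             for j in range(m)] for i in range(m)]
    v = [1.0] * m                       # power iteration
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    if sum(v) < 0:                      # eigenvector sign is arbitrary
        v = [-x for x in v]
    s = sum(v)
    return [x / s for x in v]

def fuse(preds, weights):
    """Weighted average of the base models' predictions."""
    return [sum(w * p[k] for w, p in zip(weights, preds))
            for k in range(len(preds[0]))]
```

Under this sketch, base models that track the common signal more closely receive larger weights, and the weighted average typically beats the noisiest base model.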
1 Citation
Arousal and Valence Estimation for Visual Non-Intrusive Stress Monitoring
Proposes a deep learning-based psychological stress level estimation approach that identifies the region onto which the operator's emotional state projects in the space defined by the latent dimensions of arousal and valence.


References

Feature Dimensionality Reduction for Video Affect Classification: A Comparative Study
  • Chenfeng Guo, Dongrui Wu
  • Computer Science, Mathematics
  • 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia)
  • 2018
This paper presents a preliminary study on dimensionality reduction for affect classification, showing that no single approach universally outperforms the others, and that classifying with the raw features directly is not always a bad choice.
Developing crossmodal expression recognition based on a deep neural model
Proposes a model that simulates the innate perception of audio–visual emotion expressions with deep neural networks, learns new expressions by categorizing them into emotional clusters with a self-organizing layer, and is compared to state-of-the-art research.
Spectral meta-learner for regression (SMLR) model aggregation: Towards calibrationless brain-computer interface (BCI)
This paper proposes a novel spectral meta-learner for regression (SMLR) approach, which optimally combines base regression models built from labeled data of auxiliary subjects to label offline EEG data from a new subject, and significantly outperforms three state-of-the-art regression-model fusion approaches.
Beyond short snippets: Deep networks for video classification
This work proposes and evaluates several deep neural network architectures that combine image information across a video over longer time periods than previously attempted, including two methods capable of handling full-length videos.
Classification of general audio data for content-based retrieval
This work describes a scheme that classifies audio segments into seven categories (silence, single-speaker speech, music, environmental noise, multiple speakers' speech, simultaneous speech and music, and speech with noise), and shows that cepstral-based features such as the Mel-frequency cepstral coefficients (MFCCs) and linear prediction coefficients (LPCs) provide better classification accuracy than temporal and spectral features.
Mixed Type Audio Classification with Support Vector Machine
Presents a mixed-type audio classification system based on a support vector machine (SVM) that classifies audio data into five types: music, speech, environment sound, speech mixed with music, and music mixed with environment sound.
Rethinking the Inception Architecture for Computer Vision
This work explores ways to scale up networks that use the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
Xception: Deep Learning with Depthwise Separable Convolutions
  • François Chollet
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules are replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset and significantly outperforms it on a larger image classification dataset.
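The efficiency argument behind replacing standard convolutions with depthwise separable ones can be made concrete by counting parameters. This is a small illustrative sketch (the function names are ours, and biases are ignored), not code from the paper:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (no bias):
    one k x k x c_in filter per output channel."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise separable convolution: one k x k depthwise filter
    per input channel, then a 1 x 1 pointwise conv mixing channels."""
    return k * k * c_in + c_in * c_out
```

For a 3x3 convolution with 128 input and 128 output channels, the standard form needs 147,456 parameters while the separable form needs 17,536, roughly an 8x reduction at similar representational cost.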
Theoretical and Empirical Analysis of ReliefF and RReliefF
Explains how and why Relief algorithms work, their theoretical and practical properties, their parameters, what kinds of dependencies they detect, how they scale to large numbers of examples and features, how to sample data for them, how robust they are to noise, how irrelevant and redundant attributes influence their output, and how different metrics influence them.
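For a flavor of the algorithm family analyzed there, the following is a minimal sketch of the original Relief feature-weighting rule for binary classification; it omits the k-neighbor averaging, multi-class handling, and missing-value treatment of the full ReliefF/RReliefF variants, and the function name and parameters are our own:

```python
import random

def relief(X, y, n_iter=100, seed=0):
    """Minimal Relief sketch for binary classification.

    X: list of feature vectors scaled to [0, 1]; y: binary labels.
    For each sampled instance, find its nearest same-class neighbor
    (hit) and nearest other-class neighbor (miss); reward features
    that differ on the miss and agree on the hit.
    """
    rng = random.Random(seed)
    n_feat = len(X[0])
    w = [0.0] * n_feat

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(n_iter):
        i = rng.randrange(len(X))
        hits = [j for j in range(len(X)) if j != i and y[j] == y[i]]
        misses = [j for j in range(len(X)) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))    # nearest hit
        m = min(misses, key=lambda j: dist(X[i], X[j]))  # nearest miss
        for f in range(n_feat):
            w[f] += (abs(X[i][f] - X[m][f]) -
                     abs(X[i][f] - X[h][f])) / n_iter
    return w
```

A feature that separates the classes accumulates a large positive weight, while an irrelevant feature stays near zero.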