The eNTERFACE’05 Audio-Visual Emotion Database

@article{Martin2006TheEA,
  title={The eNTERFACE'05 Audio-Visual Emotion Database},
  author={Olivier Martin and Irene Kotsia and Benoit M. Macq and Ioannis Pitas},
  journal={22nd International Conference on Data Engineering Workshops (ICDEW'06)},
  year={2006},
  pages={8-8}
}
This paper presents an audio-visual emotion database that can be used as a reference database for testing and evaluating video, audio or joint audio-visual emotion recognition algorithms. Additional uses may include the evaluation of algorithms performing other multimodal signal processing tasks, such as multimodal person identification or audio-visual speech recognition. This paper presents the difficulties involved in the construction of such a multimodal emotion database and the different… 

Citations

Multi-modal Emotion Recognition Using Canonical Correlations and Acoustic Features

An approach to a multi-modal (audio-video) emotion recognition system that does not rely on tracking specific facial landmarks, thereby avoiding the problems that arise when the tracking algorithm fails to detect the correct area.
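
For context, here is a minimal sketch of feature fusion via canonical correlation analysis (CCA), assuming pre-extracted per-clip audio and video feature matrices; the shapes, variable names, and the downstream concatenation step are illustrative assumptions, not details taken from the paper.

# Illustrative CCA-based audio-visual feature fusion (assumed setup, not the paper's exact method).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(120, 40))   # 120 clips x 40 acoustic features (placeholder data)
X_video = rng.normal(size=(120, 60))   # 120 clips x 60 visual features (placeholder data)

# Project both modalities into a shared space where they are maximally correlated.
cca = CCA(n_components=10)
A, V = cca.fit_transform(X_audio, X_video)

# One common choice: concatenate the projected views into a joint feature
# vector per clip and feed it to any standard classifier.
X_joint = np.hstack([A, V])
print(X_joint.shape)  # (120, 20)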

A Turkish audio-visual emotional database

This work presents a re-acted audio-visual database in Turkish, consisting of recordings of subjects expressing various emotional and mental states; it aims to elicit mental states such as unsure, undecided, thinking, concentrating, interested, and complaining.

The New Italian Audio and Video Emotional Database

The general specifications and characteristics of the New Italian Audio and Video Emotional Database are described; the database was collected to improve the COST 2102 database and to support the research effort of COST Action 2102: “Cross Modal Analysis of Verbal and Nonverbal Communication” (http://cost2102.cs.stir.ac.uk/).

Multimodal emotion recognition with automatic peak frame selection

  • Sara Zhalehpour, Z. Akhtar, Ç. Erdem
  • Computer Science
    2014 IEEE International Symposium on Innovations in Intelligent Systems and Applications (INISTA) Proceedings
  • 2014
The main steps of the proposed framework consist of extraction of video and audio features based on peak frame selection, unimodal classification, and decision-level fusion of the audio and visual results.
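
As a rough illustration of the decision-level fusion step, the sketch below assumes each unimodal classifier outputs per-class posterior probabilities; the weighted-sum rule and the 0.4 audio weight are assumptions for illustration, not values from the paper.

# Illustrative decision-level fusion of unimodal posteriors (assumed scheme).
import numpy as np

def fuse_decisions(p_audio: np.ndarray, p_video: np.ndarray, w_audio: float = 0.4) -> int:
    """Weighted sum of per-class posteriors; returns the fused class index."""
    p_fused = w_audio * p_audio + (1.0 - w_audio) * p_video
    return int(np.argmax(p_fused))

# Example with six emotion classes: audio favours class 2, video favours class 3.
p_a = np.array([0.10, 0.05, 0.50, 0.15, 0.10, 0.10])
p_v = np.array([0.05, 0.10, 0.30, 0.40, 0.05, 0.10])
print(fuse_decisions(p_a, p_v))  # 2 with the default audio weight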

Fusion of classifier predictions for audio-visual emotion recognition

A novel multimodal emotion recognition system based on the analysis of audio and visual cues, which summarises each emotion video into a reduced set of key-frames that are learnt by a Convolutional Neural Network in order to visually discriminate emotions.

Searching Audio-Visual Clips for Dual-mode Chinese Emotional Speech Database

  • Xudong Zhang, Guoqing Wu, F. Ren
  • Computer Science
    2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia)
  • 2018
A new method of constructing a Chinese audio-visual spontaneous emotional speech database with abundant spontaneous speech is presented, and the proposed method is shown to be feasible and effective.

MSP-Face Corpus: A Natural Audiovisual Emotional Database

This study presents the MSP-Face database, a natural audiovisual database obtained from video-sharing websites, where multiple individuals discuss various topics expressing their opinions and experiences, offering a suitable infrastructure for exploring semi-supervised and unsupervised machine-learning algorithms on natural emotional videos.

Analysis and Assessment of AvID: Multi-Modal Emotional Database

A new time-weighted free-marginal kappa is presented, which differs from other kappa statistics in that it weights each utterance's agreement score by the duration of the utterance.
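
To make the weighting concrete, here is a sketch of a duration-weighted free-marginal kappa for two annotators, under the assumption (not verified against the paper) that the time weighting replaces the usual per-utterance average of observed agreement with a duration-weighted average, while chance agreement stays at 1/k for k categories.

# Illustrative duration-weighted free-marginal kappa (assumed formulation).
import numpy as np

def time_weighted_free_marginal_kappa(labels_a, labels_b, durations, n_categories):
    """Two raters; each utterance's agreement is weighted by its duration."""
    agree = (np.asarray(labels_a) == np.asarray(labels_b)).astype(float)
    w = np.asarray(durations, dtype=float)
    p_obs = float(np.sum(w * agree) / np.sum(w))  # duration-weighted observed agreement
    p_exp = 1.0 / n_categories                    # free-marginal chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Example: three utterances (2 s, 5 s, 1 s) labelled by two annotators, four categories.
print(time_weighted_free_marginal_kappa([0, 1, 1], [0, 1, 2], [2.0, 5.0, 1.0], 4))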

A Method of Building Phoneme-Level Chinese Audio-Visual Emotional Database

This paper establishes a Chinese multi-modal emotion corpus that contains 2480 emotional videos/audios and 74 phonemes with approximately 115,000 phoneme fragments, and is the first phoneme-level multi-modal Chinese emotional database.

Towards multimodal emotion recognition: a new approach

This paper presents the latest development in the emotion recognition part of SAMMI by means of an extensive study on feature selection and the application of many of the principles presented in [17] and [15].
...

References

A new emotion database: considerations, sources and scope

Research on the expression of emotion is underpinned by databases. Reviewing available resources persuaded us of the need to develop one that prioritised ecological validity…

Analysis of an emotional speech corpus in Hebrew based on objective criteria (this volume)

The initial results show that when tested on a Hebrew emotional speech corpus the proposed method yields reliable results.

Comprehensive database for facial expression analysis

  • T. Kanade, Ying-li Tian, J. Cohn
  • Psychology
    Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)
  • 2000
The problem space for facial expression analysis is described, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity, image characteristics, and relation to non-verbal behavior.

Multimodal caricatural mirror

This project aims at creating a caricatural mirror where users could see their own emotions amplified (image+speech) by an avatar (mainly facial animation), on a wide screen facing them…

Unmasking The Face

The AR face database

Laor: “Analysis of an emotional speech corpus in Hebrew based on objective criteria”

  • Proceedings of the ISCA Workshop on Speech and Emotion (pp. 29–33)
  • 2000