The INTERSPEECH 2009 emotion challenge
The challenge, the FAU Aibo Emotion Corpus, the features, and benchmark results of two popular approaches to emotion recognition from speech are introduced.
The first meeting of focal points jointly convened by the foreign ministries of Costa Rica, Denmark and Ghana, in association with the Global Centre for the Responsibility to Protect, took place on 17 and 18 May 2011, with the purpose of setting up a functional R2P focal points network for the prevention and halting of mass atrocities.
The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism
The INTERSPEECH 2013 Computational Paralinguistics Challenge provides for the first time a unified test-bed for social signals such as laughter in speech. It further introduces conflict in group discussions.
The INTERSPEECH 2010 paralinguistic challenge
The INTERSPEECH 2010 Paralinguistic Challenge shall help overcome the usually low compatibility of results by addressing three selected sub-challenges.
Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge
The basic phenomenon reflected in the last fifteen years is addressed, commenting on databases, modelling and annotation, the unit of analysis and prototypicality, and automatic processing, including discussions on features, classification, robustness, evaluation, and implementation and system integration.
The INTERSPEECH 2012 Speaker Trait Challenge
The INTERSPEECH 2012 Speaker Trait Challenge addresses perceived speaker traits such as personality and likability in terms of paralinguistics (EPFL-CONF-174360).
How to find trouble in communication
The module Monitoring of User State [especially of] Emotion (MOUSE) is proposed in which a prosodic classifier is combined with other knowledge sources, such as conversationally peculiar linguistic behavior, for example, the use of repetitions.
The HUMAINE Database: Addressing the Collection and Annotation of Naturalistic and Induced Emotional Data
The HUMAINE Database provides naturalistic clips which record that kind of material, in multiple modalities, and labelling techniques that are suited to describing it.
The INTERSPEECH 2020 Computational Paralinguistics Challenge: Elderly Emotion, Breathing & Masks
The sub-challenges, baseline feature extraction, and classifiers based on the 'usual' ComParE and BoAW features, as well as deep unsupervised representation learning using the auDeep toolkit and deep feature extraction from pre-trained CNNs using the Deep Spectrum toolkit, are described.
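The BoAW (bag-of-audio-words) representation mentioned above can be sketched in minimal form: frame-level features are quantised against a codebook and the assignments are histogrammed into one fixed-length vector per recording. The codebook size, feature dimensionality, and random data below are illustrative assumptions, not the challenge's actual configuration (where the codebook would be learned, e.g. by k-means or random sampling, with the openXBOW toolkit).

```python
# Hypothetical BoAW sketch: quantise frames against a codebook,
# then histogram the codeword assignments.
import numpy as np

def boaw_histogram(frames: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """frames: (n_frames, dim); codebook: (n_words, dim).
    Returns a normalised codeword-frequency histogram of length n_words."""
    # Euclidean distance from every frame to every codeword
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    assignments = d.argmin(axis=1)  # nearest codeword per frame
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()        # term frequencies summing to 1

rng = np.random.default_rng(1)
frames = rng.normal(size=(200, 13))   # e.g. 200 frames of 13 MFCCs (toy data)
codebook = rng.normal(size=(8, 13))   # 8 codewords (learned in practice)
hist = boaw_histogram(frames, codebook)
print(hist.shape)  # (8,)
```

The resulting histogram is what a downstream classifier (e.g. a linear SVM, as in the challenge baselines) would consume.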
The relevance of feature type for the automatic classification of emotional user states: low level descriptors and functionals
In this paper, we report on classification results for emotional user states (four classes; a German database of children interacting with a pet robot). Six sites computed acoustic and linguistic features.
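The "low level descriptors and functionals" pipeline named in the title can be sketched minimally: frame-level LLD contours (e.g. pitch, energy) are mapped to one fixed-length vector by applying statistical functionals per contour. The specific functionals (mean, std, min, max, range) and the toy random data are illustrative assumptions, not the paper's actual feature set.

```python
# Hypothetical sketch: fixed-length features from frame-level LLDs
# via statistical functionals.
import numpy as np

def apply_functionals(lld: np.ndarray) -> np.ndarray:
    """lld: (n_frames, n_llds) matrix of low-level descriptor contours.
    Returns one vector of 5 functionals per LLD contour."""
    stats = [
        lld.mean(axis=0),                   # mean
        lld.std(axis=0),                    # standard deviation
        lld.min(axis=0),                    # minimum
        lld.max(axis=0),                    # maximum
        lld.max(axis=0) - lld.min(axis=0),  # range
    ]
    return np.concatenate(stats)

rng = np.random.default_rng(0)
lld = rng.normal(size=(100, 2))  # e.g. 100 frames of pitch and energy (toy data)
vec = apply_functionals(lld)
print(vec.shape)  # (10,) -> 5 functionals x 2 LLDs
```

This per-utterance vector is the kind of input the paper's classifiers would operate on, independent of utterance length.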