Corpus ID: 1122482

The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism

@inproceedings{Schuller2013TheI2,
  title={The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism},
  author={Bj{\"o}rn Schuller and Stefan Steidl and Anton Batliner and Alessandro Vinciarelli and Klaus R. Scherer and Fabien Ringeval and Mohamed Chetouani and Felix Weninger and Florian Eyben and Erik Marchi and Marcello Mortillaro and Hugues Salamin and Anna Polychroniou and Fabio Valente and Samuel Kim},
  booktitle={INTERSPEECH},
  year={2013}
}
The INTERSPEECH 2013 Computational Paralinguistics Challenge provides for the first time a unified test-bed for Social Signals such as laughter in speech. It further introduces conflict in group discussions as a new task and deals with autism and its manifestations in speech. Finally, emotion is revisited as a task, albeit with a broader range of twelve enacted emotional states. In this paper, we describe these four Sub-Challenges, their conditions, baselines, and a new feature set by the…
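The baseline acoustic features for this challenge series (the 6,373-dimensional ComParE set) are extracted with the openSMILE toolkit. Below is a minimal sketch, not the official challenge script, assuming the openSMILE Python wrapper (pip install opensmile) and its packaged ComParE_2016 configuration, the closest publicly distributed successor to the 2013 challenge set; the audio path is a placeholder:

import opensmile

# Minimal sketch: extract utterance-level ComParE functionals
# with the openSMILE Python wrapper.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("speech_sample.wav")  # placeholder path
print(features.shape)  # one row of 6,373 functionals per utterance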


The INTERSPEECH 2014 computational paralinguistics challenge: cognitive & physical load

TLDR
These two Sub-Challenges, their conditions, baseline results, and experimental procedures are described, as well as the ComParE baseline features generated with the openSMILE toolkit and provided to participants in the Challenge.

The INTERSPEECH 2015 computational paralinguistics challenge: nativeness, Parkinson's & eating condition

TLDR
Three sub-challenges are described: the estimation of the degree of nativeness, the neurological state of patients with Parkinson's condition, and the eating condition of speakers.

The INTERSPEECH 2018 Computational Paralinguistics Challenge: Atypical & Self-Assessed Affect, Crying & Heart Beats

TLDR
The Sub-Challenges, their conditions, and baseline feature extraction and classifiers are described, which include data-learnt (supervised) feature representations by end-to-end learning, the ‘usual’ ComParE and BoAW features, and, for the first time in the challenge series, deep unsupervised representation learning using the AUDEEP toolkit.

The INTERSPEECH 2020 Computational Paralinguistics Challenge: Elderly Emotion, Breathing & Masks

TLDR
The Sub-Challenges, baseline feature extraction, and classifiers are described; the baselines build on the ‘usual’ ComParE and BoAW features as well as deep unsupervised representation learning using the AUDEEP toolkit and deep feature extraction from pre-trained CNNs using the DEEP SPECTRUM toolkit.

The INTERSPEECH 2016 Computational Paralinguistics Challenge: Deception, Sincerity & Native Language

The INTERSPEECH 2016 Computational Paralinguistics Challenge addresses three different problems for the first time in a research competition under well-defined conditions: the classification of deceptive speech, the assessment of sincerity, and the identification of a speaker's native language.

The INTERSPEECH 2017 Computational Paralinguistics Challenge: Addressee, Cold & Snoring

TLDR
These sub-challenges, their conditions, and the baseline feature extraction and classifiers are described, which include data-learnt feature representations by end-to-end learning with convolutional and recurrent neural networks, and bag-of-audio-words for the first time in the challenge series.

Detecting autism, emotions and social signals using adaboost

TLDR
This paper treats sub-challenges of paralinguistic detection, categorizing whole (albeit short) recordings by speaker emotion, conflict, or the presence of developmental disorders (autism), as general classification tasks, and applies the general-purpose machine learning meta-algorithm AdaBoost.MH and its recently proposed variant AdaBoost.BA to them.
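A rough illustration of that approach, treating utterance-level feature vectors as a general multiclass problem: the sketch below uses scikit-learn's AdaBoostClassifier (a SAMME-based multiclass AdaBoost) as a stand-in for AdaBoost.MH and AdaBoost.BA, with placeholder data.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: one ComParE-sized feature vector per recording,
# labelled with one of twelve emotion classes.
rng = np.random.default_rng(0)
X = rng.random((100, 6373))
y = rng.integers(0, 12, size=100)

# Boosted decision stumps; SAMME stands in for AdaBoost.MH / AdaBoost.BA,
# which are not implemented in scikit-learn.
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=200,
)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data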

The INTERSPEECH 2019 Computational Paralinguistics Challenge: Styrian Dialects, Continuous Sleepiness, Baby Sounds & Orca Activity

TLDR
The Sub-Challenges and the baseline feature extraction and classifiers are described, which include the ‘usual’ ComParE and BoAW features as well as deep unsupervised representation learning using the AUDEEP toolkit.

The ACII 2022 Affective Vocal Bursts Workshop & Competition: Understanding a critically understudied modality of emotional expression

TLDR
The four tracks and baseline systems, which use state-of-the-art machine learning methods, are described; in one track, participants are asked to recognize the type of vocal burst as an 8-class classification task.

Typicality and emotion in the voice of children with autism spectrum condition: evidence across three languages

TLDR
This work evaluates automatic diagnosis and the recognition of emotion in atypical children's voices, over nine emotion categories embedded in short stories as well as binary valence/arousal discrimination.
...

References


Spontaneous-Speech Acoustic-Prosodic Features of Children with Autism and the Interacting Psychologist

TLDR
It is demonstrated that acoustic-prosodic features of both participants correlate with the children's rated autism severity, and the importance of jointly modeling the psychologist's vocal behavior in this dyadic interaction is highlighted.

Paralinguistics in speech and language - State-of-the-art and the challenge

Medium-term speaker states - A review on intoxication, sleepiness and the first challenge

Introducing the Geneva Multimodal expression corpus for experimental research on emotion perception.

TLDR
A new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS), is introduced, together with an associated database of microcoded facial, vocal, and body action elements as well as observer ratings.

Desperately Seeking Emotions or: Actors, Wizards and Human Beings

Automatic dialogue systems used in call-centers, for instance, should be able to determine, in a critical phase of the dialogue indicated by the customer's vocal expression of anger/irritation, when it…

From Nonverbal Cues to Perception: Personality and Social Attractiveness

TLDR
This article considers the phenomenon in two zero-acquaintance scenarios: the first is the attribution of personality traits to speakers the authors listen to for the first time; the second is the social attractiveness of unacquainted people with whom they talk on the phone.

Laughter in Conversation: Features of Occurrence and Acoustic Structure

Although human laughter mainly occurs in social contexts, most studies have dealt with laughter evoked by media. In our study, we investigated conversational laughter. Our results show that laughter…

The acoustic features of human laughter.

TLDR
Analysis of naturally produced laugh bouts recorded from 97 young adults as they watched funny video clips revealed evident diversity in production modes, remarkable variability in fundamental frequency characteristics, and a consistent lack of articulation effects in supralaryngeal filtering.

Computational Paralinguistics: Emotion, Affect and Personality in Speech and Language Processing

This book presents the methods, tools and techniques that are currently being used to recognise (automatically) the affect, emotion, personality, and everything else beyond linguistics…

The INTERSPEECH 2012 Speaker Trait Challenge

TLDR
The Challenge addresses the assessment of speaker traits, namely personality, likability, and pathology of speech, for the first time under well-defined comparison conditions.