Detecting emotions in the context of automated call center services can be helpful for following the evolution of human-computer dialogs, enabling dynamic modification of dialog strategies and influencing the final outcome. The emotion detection work reported here is part of a larger study aiming to model user behavior in real interactions. We make …
This paper addresses the issue of automatic emotion recognition in speech. We focus on a type of emotional manifestation that has rarely been studied in speech processing: fear-type emotions occurring during abnormal situations (here, unplanned events in which human life is threatened). This study is dedicated to a new application in emotion recognition …
Recent work on emotional speech processing has demonstrated the value of considering the information conveyed by the emotional component of speech to enhance the understanding of human behavior. To date, however, emotion detection systems have rarely been integrated into effective applications. The present research focuses on the development of a …
The present research focuses on analyzing and detecting emotions in speech as revealed by task-dependent spoken dialog corpora. Previously, we conducted several experiments on a real-life corpus in order to develop a reliable annotation method and to detect lexical and prosodic cues correlated with the main emotion class. In this paper we evaluate both …
This paper reports on an analysis of prosodic cues for emotion characterization in 100 natural spoken dialogs recorded at a telephone customer service center. The corpus is annotated with task-dependent emotion tags, which were validated by a perceptual test. Two F0 range parameters, one at the sentence level and the other at the sub-segment level, emerge as …
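As a rough illustration of the kind of feature this abstract refers to (a sketch, not the authors' implementation), the snippet below computes two F0 range values from a pitch contour: one over the whole utterance and one averaged over short sub-segments. The pitch bounds, the 0.5 s segment duration, and the use of librosa's pyin tracker are illustrative assumptions.

    import numpy as np
    import librosa

    def f0_range_features(wav_path, seg_dur=0.5, fmin=75.0, fmax=400.0):
        """Return (sentence-level F0 range, mean sub-segment F0 range) in Hz."""
        y, sr = librosa.load(wav_path, sr=None)
        f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
        f0 = f0[voiced & ~np.isnan(f0)]              # keep voiced frames only
        if f0.size == 0:
            return None
        sentence_range = float(f0.max() - f0.min())  # range over the whole utterance

        # Sub-segment ranges: cut the voiced contour into chunks of ~seg_dur seconds
        # (512 samples is librosa's default pyin hop) and average their ranges.
        frames_per_seg = max(2, int(seg_dur * sr / 512))
        n_chunks = max(1, f0.size // frames_per_seg)
        seg_ranges = [float(c.max() - c.min())
                      for c in np.array_split(f0, n_chunks) if c.size > 1]
        subsegment_range = float(np.mean(seg_ranges)) if seg_ranges else 0.0
        return sentence_range, subsegment_range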
It is widely acknowledged that human listeners significantly outperform machines when it comes to transcribing speech. This paper presents a paradigm for perceptual experiments that aims to increase our understanding of human and automatic speech recognition errors. The role of context length is investigated through the perceptual recovery of small …
Are acoustic differences in autonomous fillers salient for human perception? Acoustic measurements have been carried out on autonomous fillers from eight languages (among them Spanish). They reveal cross-language timbre differences in the support vowel of autonomous fillers. In order to evaluate their salience for human perception, two discrimination …
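A minimal, hypothetical sketch of the kind of acoustic measurement such a study relies on: estimating the first two formants of a filler's support vowel with Praat, via the parselmouth package, so that vowel timbre can be compared across languages. The file names, time boundaries, and the 5000 Hz formant ceiling are placeholders, not values taken from the abstract.

    import parselmouth

    def support_vowel_formants(wav_path, t_start, t_end):
        """Estimate (F1, F2) in Hz at the temporal midpoint of the support vowel."""
        snd = parselmouth.Sound(wav_path)
        formants = snd.to_formant_burg(maximum_formant=5000)
        t_mid = 0.5 * (t_start + t_end)          # sample mid-vowel, away from the edges
        return (formants.get_value_at_time(1, t_mid),
                formants.get_value_at_time(2, t_mid))

    # Placeholder usage: compare fillers taken from two languages.
    # print(support_vowel_formants("filler_french.wav", 0.12, 0.34))
    # print(support_vowel_formants("filler_spanish.wav", 0.08, 0.29))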
This paper gives an overview of methods in perceptual language identification and suggests a new approach based on a two-step methodology that integrates "genetic" considerations into perception and results in the modeling of perceptually identified discriminative cues. The first study reported here concerns experimental designs for …
This article compares the errors made by automatic speech recognizers to those made by humans for near-homophones in American English and French. This exploratory study focuses on the impact of limited word context and the resulting potential ambiguities for automatic speech recognition (ASR) systems and human listeners. Perceptual experiments using 7-gram …
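One way to picture the limited-context setup (a sketch of an assumed protocol, not the article's actual one) is to extract 7-gram windows, i.e. the target near-homophone plus three words of context on each side, so that listeners and an ASR system are judged on exactly the same material. The target words and example sentence below are hypothetical.

    def seven_gram_contexts(transcript, targets):
        """Yield (target, 7-word window) pairs for each target occurrence."""
        words = transcript.split()
        for i, w in enumerate(words):
            if w.lower() in targets:
                window = words[max(0, i - 3): i + 4]   # 3 left + target + 3 right
                yield w, " ".join(window)

    # Hypothetical usage on an English near-homophone pair:
    text = "i cannot hear the band because they are not here right now"
    for target, ctx in seven_gram_contexts(text, {"hear", "here"}):
        print(target, "->", ctx)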