Spoken dialogue systems are increasingly being used to facilitate and enhance human communication. While these interactive systems can process the linguistic aspects of human communication, they are not yet capable of processing the complex dynamics involved in social interaction, such as adaptation on the part of interlocutors. Providing interactive …
This paper presents methodologies and tools for language resource (LR) construction. It describes a database of interactive speech collected over a three-month period at the Science Gallery in Dublin, where visitors could take part in a conversation with a robot. The system collected samples of informal, chatty dialogue – normally difficult to capture under …
Prosodic synchrony has been reported to be an important aspect of conversational dyads. In this paper, synchrony in four different dyads is examined. A Time Aligned Moving Average (TAMA) procedure is used to temporally align the prosodic measurements for the detection of synchrony in the dyads. An overlapping windowed correlation procedure is used to …
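The combination described above — window-based averaging of each speaker's prosodic track, followed by correlation over overlapping windows of the smoothed values — can be sketched as follows. This is a minimal illustration only: the function names, window lengths, and step sizes are assumptions for the example, not the parameters used in the paper.

```python
# Sketch of TAMA-style smoothing plus overlapping windowed correlation
# for detecting prosodic synchrony in a dyad. All names and parameter
# values are illustrative assumptions, not the paper's implementation.
from math import sqrt

def tama(series, win=20, step=10):
    """Average a prosodic track (e.g. f0 samples) over fixed-length,
    overlapping windows, yielding one smoothed value per window."""
    out = []
    for start in range(0, len(series) - win + 1, step):
        frame = series[start:start + win]
        out.append(sum(frame) / len(frame))
    return out

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def windowed_synchrony(track_a, track_b, win=5, step=1):
    """Correlate the two smoothed tracks inside overlapping windows;
    sustained positive values suggest the speakers move together,
    negative values suggest they move in opposite directions."""
    a, b = tama(track_a), tama(track_b)
    n = min(len(a), len(b))
    return [pearson(a[i:i + win], b[i:i + win])
            for i in range(0, n - win + 1, step)]
```

As a sanity check, two steadily rising pitch tracks produce window correlations near +1, while a rising track against a falling one produces values near -1.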
Acoustic/prosodic (a/p) feature convergence is known to occur both in dialogues between humans and in human-computer interactions. Understanding the form and function of convergence is desirable for developing next-generation conversational agents, as this will help increase speech recognition performance and the naturalness of synthesized speech. …
Induced emotions (chairman: A. Batliner)
9.05 The Sensitive Artificial Listener: an induction technique for generating emotionally coloured conversation.
Acted versus spontaneous emotions (chairman: R. Cowie)
10.45 Anger detection performances based on prosodic and acoustic cues in several corpora. Laurence Vidrascu, Laurence Devillers, LIMSI-CNRS, France …
Our research in emotional speech analysis has led to the construction of dedicated high-quality, online corpora of natural emotional speech assets. Once these assets were obtained, their annotation and analysis were necessary in order to develop a database of both analysis data and metadata relating to each speech act. With annotation complete, the means by …
Research into the acoustic correlates of emotional speech as part of the SALERO project has led to the construction of high-quality emotional speech corpora, which contain both IMDI metadata and acoustic analysis data for each asset. Research into semi-automated, re-usable character animation has considered the development of online workflows based on …
Our research in emotional speech analysis has led to the construction of several dedicated high-quality, online corpora of natural emotional speech assets. The requirements for querying, retrieval and organization of assets based on both their metadata descriptors and their analysis data led to the construction of a suitable interface for data visualization …