To study relations between speech and emotion, it is necessary to have methods of describing emotion. Finding appropriate methods is not straightforward, and there are difficulties associated with the most familiar. The word emotion itself is problematic: a narrow sense is often seen as "correct", but it excludes what may be key areas in relation to …
Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a …
SEMAINE has created a large audiovisual database as a part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from interactions between users and an "operator" simulating a SAL agent, in different configurations: …
FEELTRACE is an instrument developed to let observers track the emotional content of a stimulus as they perceive it over time, allowing the emotional dynamics of speech episodes to be examined. It is based on activation-evaluation space, a representation derived from psychology. The activation dimension measures how dynamic the emotional state is; the …
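The activation-evaluation space described above is a two-dimensional continuous representation, so a rater's trace can be modelled as timestamped points in that plane. The sketch below illustrates the idea; the class names, the [-1, 1] coordinate convention, and the summary function are illustrative assumptions, not FEELTRACE's actual data format.

```python
from dataclasses import dataclass

# Hypothetical representation of a FEELTRACE-style annotation trace.
# The [-1, 1] range for each axis is an assumption for illustration.

@dataclass
class TracePoint:
    time_s: float       # time into the stimulus, in seconds
    evaluation: float   # negative (-1) to positive (+1)
    activation: float   # passive (-1) to active (+1)

def mean_coordinates(trace):
    """Average position of a trace in activation-evaluation space."""
    n = len(trace)
    ev = sum(p.evaluation for p in trace) / n
    ac = sum(p.activation for p in trace) / n
    return ev, ac

# A short illustrative trace: the perceived emotion drifts toward
# the positive-active quadrant over one second of stimulus.
trace = [
    TracePoint(0.0, 0.1, -0.2),
    TracePoint(0.5, 0.3,  0.1),
    TracePoint(1.0, 0.5,  0.4),
]
ev, ac = mean_coordinates(trace)
print(round(ev, 3), round(ac, 3))  # → 0.3 0.1
```

Summaries like this mean position are one simple way to compare traces; time-aligned comparison between raters would need interpolation onto a common time grid.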
Research on speech and emotion is moving from a period of exploratory research into one where there is a prospect of substantial applications, notably in human–computer interaction. Progress in the area relies heavily on the development of appropriate databases. This paper addresses four main issues that need to be considered in developing databases of …
The Audio/Visual Emotion Challenge and Workshop (AVEC 2011) is the first competition event aimed at comparison of multimedia processing and machine learning methods for automatic audio, visual and audiovisual emotion analysis, with all participants competing under strictly the same conditions. This paper first describes the challenge participation …
Automatic recognition of a speaker's emotions is a natural objective for research, but it is difficult to gauge the level of performance that is currently attainable. We describe a study that offers a rough benchmark. Speech data came from five passages of about 100 syllables each. They had been selected following pilot studies because they were effective at …
There has been rapid development in conceptions of the kind of database that is needed for emotion research. Familiar archetypes are still influential, but the state of the art has moved beyond them. There is concern to capture emotion as it occurs in action and interaction ('pervasive emotion') as well as in short episodes dominated by emotion, and …
The second international Audio/Visual Emotion Challenge and Workshop 2012 (AVEC 2012) is briefly introduced. 34 teams from 12 countries signed up for the Challenge. The SEMAINE database serves for prediction of four-dimensional continuous affect in audio and video. For the eligible participants, final scores for the Fully-Continuous Sub-Challenge ranged …
We have recorded a new corpus of emotionally coloured conversations. Users were recorded while holding conversations with an operator who adopts in sequence four roles designed to evoke emotional reactions. The operator and the user are seated in separate rooms; they see each other through teleprompter screens, and hear each other through speakers. To allow …