Klaus R. Scherer

Professional actors' portrayals of 14 emotions varying in intensity and valence were presented to judges. The results on decoding replicate earlier findings on the ability of judges to infer vocally expressed emotions with much-better-than-chance accuracy, including consistently found differences in the recognizability of different emotions. A total of 224 …
The current state of research on emotion effects on voice and speech is reviewed and issues for future research efforts are discussed. In particular, we suggest using the Brunswikian lens model as a basis for research on the vocal communication of emotion. This approach allows one to model the complete process, including both encoding (expression) …
Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability have come some way; for instance, there exist a number of commonly used facial expression databases. However, lack of a common …
Morality dignifies and elevates. When Adam and Eve ate the forbidden fruit, God said "Behold, the man is become as one of us, to know good and evil" (Gen. 3:22). In many of the world's religious traditions, the good go up, to heaven or a higher rebirth, and the bad go down, to hell or a lower rebirth. Even among secular people, moral motives are spoken of …
We report two functional magnetic resonance imaging experiments showing enhanced responses in the human middle superior temporal sulcus for angry relative to neutral prosody. This emotional enhancement was voice specific, unrelated to isolated acoustic amplitude or frequency cues in angry prosody, and distinct from any concomitant task-related attentional …
One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n = 354) were conducted to compile a list of music-relevant emotion terms …
The INTERSPEECH 2013 Computational Paralinguistics Challenge provides for the first time a unified test-bed for Social Signals such as laughter in speech. It further introduces conflict in group discussions as a new task and deals with autism and its manifestations in speech. Finally, emotion is revisited as a task, albeit with a broader range of overall …
For more than half a century, emotion researchers have attempted to establish the dimensional space that most economically accounts for similarities and differences in emotional experience. Today, many researchers focus exclusively on two-dimensional models involving valence and arousal. Adopting a theoretically based approach, we show for three languages …