The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English
TLDR
The RAVDESS is a validated multimodal database of emotional speech and song featuring 24 professional actors vocalizing lexically matched statements in a neutral North American accent; it shows high levels of emotional validity and test-retest intrarater reliability.
Changing Musical Emotion: A Computational Rule System for Modifying Score and Performance
TLDR
CMERS, a Computational Music Emotion Rule System, is presented; it controls perceived musical emotion by modifying features at the levels of both score and performance in real time.
Dynamic response: real-time adaptation for music emotion
Music plays an enormous role in today's computer games; it serves to elicit emotion, generate interest, and convey important information. Traditional gaming music is fixed at the event level, where …
Controlling musical emotionality: an affective computational architecture for influencing musical emotions
TLDR
An affective computing architecture is discussed for the dynamic modification of music, with a view to predictably affecting its induced musical emotions (its emotionality).
The emergence of music from the Theory of Mind
It is commonly argued that music originated in human evolution as an adaptation to selective pressures. In this paper we present an alternative account in which music originated from a more general …
Deficits in the Mimicry of Facial Expressions in Parkinson's Disease
TLDR
Patients showed decreased mimicry overall, mimicking other people's frowns to some extent but presenting with profoundly weakened and delayed smiles; these findings open a new avenue of inquiry into the "masked face" syndrome of PD.
Common cues to emotion in the dynamic facial expressions of speech and song
TLDR
Three experiments compared moving facial expressions in speech and song, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech.
Body sway reflects leadership in joint music performance
TLDR
It is demonstrated that musicians assigned as leaders affect other performers more than musicians assigned as followers, and that information sharing in a nonverbal joint action task occurs through both auditory and visual cues.
Automatic detection of expressed emotion in Parkinson's Disease
TLDR
The classification of emotional speech in patients with PD and the classification of PD speech are examined to assist in the future development of automated early detection systems for diagnosing patients with PD.