Emotion tracking in music using continuous conditional random fields and relative feature representation

Abstract

The digitization of music acquisition calls for better music information retrieval techniques, and dimensional emotion tracking is increasingly seen as an attractive approach. Unfortunately, most models in use are borrowed from other problems and do not suit emotion prediction well: they tend to ignore the temporal dynamics present in music and/or the continuous nature of Arousal-Valence space. In this paper we propose the use of Continuous Conditional Random Fields for dimensional emotion tracking, together with a novel relative feature vector representation. Both approaches yield a substantial improvement in both root-mean-squared error and correlation, for both short- and long-term measurements. In addition, both can easily be extended to multimodal approaches to music emotion recognition.
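The abstract reports improvements on both root-mean-squared error and correlation between predicted and ground-truth emotion trajectories. As a minimal sketch (not the paper's own evaluation code), the two metrics can be computed for a sequence of per-frame arousal or valence predictions like this; the sample values are hypothetical:

```python
import math

def rmse(preds, truth):
    """Root-mean-squared error between predicted and ground-truth values."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(preds))

def pearson(preds, truth):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(preds)
    mp = sum(preds) / n
    mt = sum(truth) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(preds, truth))
    sp = math.sqrt(sum((p - mp) ** 2 for p in preds))
    st = math.sqrt(sum((t - mt) ** 2 for t in truth))
    return cov / (sp * st)

# Hypothetical per-second arousal predictions in [-1, 1]
pred = [0.1, 0.3, 0.5, 0.4]
true = [0.0, 0.2, 0.6, 0.5]
print(rmse(pred, true), pearson(pred, true))
```

RMSE captures short-term pointwise accuracy, while correlation rewards tracking the overall shape of the emotion curve, which is why the paper reports both.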

DOI: 10.1109/ICMEW.2013.6618357

Cite this paper

@inproceedings{Imbrasaite2013EmotionTI,
  title     = {Emotion tracking in music using continuous conditional random fields and relative feature representation},
  author    = {Vaiva Imbrasaite and Tadas Baltrusaitis and Peter Robinson},
  booktitle = {ICME Workshops},
  year      = {2013}
}