Music Mood Representations from Social Tags

Abstract

This paper presents findings about music mood representations derived from social tags. We aim to analyze how people tag music by mood, to create representations based on these data, and to study the agreement between experts and a large community. For this purpose, we create a semantic mood space from last.fm tags using Latent Semantic Analysis. With an unsupervised clustering approach, we derive from this space an ideal categorical representation. We compare our community-based semantic space with expert representations from Hevner and with the clusters from the MIREX Audio Mood Classification task. Using dimensionality reduction with a Self-Organizing Map, we obtain a 2D representation that we compare with Russell's dimensional model. We also present a tree diagram of the mood tags obtained with a hierarchical clustering approach. All these results show consistency between the community and the experts, as well as some limitations of current expert models. This study demonstrates the particular relevance of the basic emotions model, with four mood clusters that can be summarized as happy, sad, angry, and tender. This outcome can help to create better ground truth and to build more realistic mood classification algorithms. Furthermore, the method can be applied to other types of representations to build better computational models.
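The core step of the pipeline, building a semantic mood space from tag data with Latent Semantic Analysis, can be sketched as a truncated SVD of a tag-by-track matrix. The sketch below uses a tiny hand-made binary matrix and illustrative tag names; the paper's actual data (last.fm tags), weighting scheme, and dimensionality are not reproduced here.

```python
import numpy as np

# Toy tag-by-track co-occurrence matrix (illustrative stand-in for
# real last.fm social-tag data; rows are tags, columns are tracks).
tags = ["happy", "joyful", "sad", "depressing"]
M = np.array([
    [1, 1, 1, 1, 0, 0, 0],  # happy
    [1, 1, 0, 0, 0, 0, 0],  # joyful
    [0, 0, 0, 0, 1, 1, 1],  # sad
    [0, 0, 0, 0, 1, 1, 0],  # depressing
], dtype=float)

# Latent Semantic Analysis: truncated SVD of the tag-track matrix.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2  # dimensionality of the latent semantic mood space (assumed here)
tag_vectors = U[:, :k] * s[:k]  # each row embeds one tag

def cosine(a, b):
    """Cosine similarity between two tag embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tags used on the same tracks end up close in the latent space.
print(cosine(tag_vectors[0], tag_vectors[1]))  # happy vs joyful: high
print(cosine(tag_vectors[0], tag_vectors[2]))  # happy vs sad: near zero
```

In this low-dimensional space, similarity between tags reflects co-usage on tracks rather than surface spelling, which is what makes subsequent clustering into mood categories meaningful.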



Cite this paper

@inproceedings{Laurier2009MusicMR,
  title     = {Music Mood Representations from Social Tags},
  author    = {Cyril Laurier and Mohamed Sordo and Joan Serr{\`a} and Perfecto Herrera},
  booktitle = {ISMIR},
  year      = {2009}
}