Corpus ID: 5873873

Music Mood Representations from Social Tags

@inproceedings{Laurier2009MusicMR,
  title={Music Mood Representations from Social Tags},
  author={Cyril Laurier and Mohamed Sordo and Joan Serr{\`a} and Perfecto Herrera},
  booktitle={ISMIR},
  year={2009}
}
Abstract

This paper presents findings about mood representations. We aim to analyze how people tag music by mood, to create representations based on this data, and to study the agreement between experts and a large community. For this purpose, we create a semantic mood space from last.fm tags using Latent Semantic Analysis. With an unsupervised clustering approach, we derive from this space an ideal categorical representation. We compare our community-based semantic space with expert representations from Hevner and…
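The abstract outlines a two-step pipeline: build a semantic mood space from last.fm tags with Latent Semantic Analysis, then cluster that space to obtain a categorical representation. Below is a minimal sketch of such a pipeline, assuming a hypothetical track-by-tag count matrix and an arbitrary number of clusters; the paper's actual tag collection, weighting, and clustering settings are not reproduced here.

```python
# Sketch of an LSA mood space plus unsupervised clustering, on synthetic data.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.cluster import KMeans

# Hypothetical data: rows = tracks, columns = mood tags, values = tag counts.
rng = np.random.default_rng(0)
track_tag_counts = rng.poisson(1.0, size=(500, 120)).astype(float)

# Latent Semantic Analysis: truncated SVD of the track-tag matrix yields a
# low-dimensional semantic mood space for the tracks.
lsa = TruncatedSVD(n_components=20, random_state=0)
mood_space = Normalizer(copy=False).fit_transform(lsa.fit_transform(track_tag_counts))
print(mood_space.shape)  # (500, 20)

# Unsupervised clustering of the tags (columns) in the latent space to derive
# a categorical mood representation; the number of clusters is an assumption.
tag_embeddings = Normalizer().fit_transform(lsa.components_.T)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(tag_embeddings)
print(labels[:10])
```

Clustering the tag coordinates (the SVD components) rather than the track coordinates mirrors the goal of deriving mood categories from the tag vocabulary itself.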

Citations

Semantic Computing of Moods Based on Tags in Social Media of Music
TLDR
A novel technique called Affective Circumplex Transformation (ACT) is proposed for representing the moods of music tracks in an interpretable and robust fashion based on semantic computing of social tags and research in emotion modeling, and its performance is robust against a low number of track-level mood tags.
Semantic models of musical mood: Comparison between crowd-sourced and curated editorial tags
TLDR
The use of curated editorial data provides a statistically significant improvement compared to crowd-sourced data for predicting moods perceived in music and is correlated with listener ratings of arousal, valence and tension.
The Role of Audio and Tags in Music Mood Prediction: A Study Using Semantic Layer Projection
TLDR
This work analytically demonstrates the benefit of combining semantic tags and audio features in automatic mood annotation, and shows that audio is in general more effective than tags at predicting perceived mood.
Timbral Qualities of Semantic Structures of Music
TLDR
It is hypothesized that the organization of these structures is rather directly linked with the "qualia" of the music as sound, implying that meaningful organization of music may be derived from low-level descriptions of the excerpts.
Music Social Tags Representation in Dimensional Emotion Models
  • Na He, Sam Ferguson
  • Computer Science
    2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)
  • 2020
TLDR
This paper proposes an effective solution to analyse music tag information and represent it in a two-dimensional emotion plane without limiting the corpus to emotion terms only, and compares these methods with a traditional Latent Semantic Analysis model using Procrustes analysis evaluation metrics.
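Procrustes analysis, mentioned above as the evaluation metric, superimposes one point configuration onto another (allowing translation, scaling, and rotation) and reports a disparity score. A small sketch with synthetic 2-D tag coordinates follows; scipy.spatial.procrustes is assumed as the implementation, and the data are illustrative rather than taken from the cited work.

```python
# Comparing two 2-D tag embeddings with Procrustes analysis (synthetic data).
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(1)
lsa_coords = rng.normal(size=(60, 2))            # tags placed by an LSA-based model
reference_coords = lsa_coords @ np.array([[0.0, -1.0], [1.0, 0.0]]) \
    + rng.normal(scale=0.1, size=(60, 2))        # rotated, noisy reference plane

# Procrustes aligns one configuration to the other and returns a disparity:
# lower disparity means closer agreement between the two representations.
_, _, disparity = procrustes(reference_coords, lsa_coords)
print(f"Procrustes disparity: {disparity:.4f}")
```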
Mood Classification of Hindi Songs based on Lyrics
TLDR
This paper proposes a mood taxonomy for Hindi songs, prepares a mood-annotated lyrics corpus based on this taxonomy, and develops a supervised system to identify the sentiment of Hindi song lyrics based on these features.
Music mood annotation using semantic computing and machine learning
TLDR
Improvements to the generalizability of audio-based mood annotation were sought using audio feature selection and a proposed technique termed Semantic Layer Projection (SLP), which efficiently incorporates large-scale tag data.
Identifying Accuracy of Social Tags by Using Clustering Representations of Song Lyrics
  • Yajie Hu, M. Ogihara
  • Computer Science
    2012 11th International Conference on Machine Learning and Applications
  • 2012
TLDR
This paper presents a new framework to identify accurate social tags for songs; it applies an improved hierarchical clustering algorithm to group the tags into tag categories and classifies songs using their lyrics.
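The entry above groups tags with an improved hierarchical clustering algorithm; the sketch below shows only the standard SciPy agglomerative workflow (Ward linkage plus a flat cut) on hypothetical tag vectors, as a rough stand-in for that step rather than the paper's algorithm.

```python
# Grouping tag vectors into categories with standard hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
tag_vectors = rng.normal(size=(40, 8))   # e.g. tags embedded from lyric co-occurrence

# Ward linkage builds the dendrogram; cutting it yields flat tag categories.
dendrogram = linkage(tag_vectors, method="ward")
categories = fcluster(dendrogram, t=5, criterion="maxclust")
print(categories)
```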
A Dimensional Contextual Semantic Model for music description and retrieval
TLDR
A Dimensional Contextual Semantic Model for defining semantic relations among descriptors in a context-aware fashion is proposed and used for developing a semantic music search engine.
The mood of Chinese Pop music: Representation and recognition
TLDR
A new data set is constructed, consisting of 818 Chinese Pop songs, 3 complete sets of mood annotations in both categorical and dimensional representations, and audio features corresponding to 5 distinct categories of musical characteristics, providing benchmarks for future research on MMR of non-Western music.
…

References

SHOWING 1-10 OF 15 REFERENCES
A Semantic Space for Music Derived from Social Tags
TLDR
This paper shows that, despite the ad hoc and informal language of tagging, tags define a low-dimensional semantic space that is extremely well-behaved at the track level, in particular being highly organised by artist and musical genre.
A Regression Approach to Music Emotion Recognition
TLDR
This paper formulates MER as a regression problem to predict the arousal and valence values (AV values) of each music sample directly, applies the regression approach to detect the emotion variation within a music selection, and finds the prediction accuracy superior to that of existing works.
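To illustrate the regression formulation described above, here is a minimal sketch that maps audio feature vectors to continuous arousal-valence values with one support vector regressor per dimension; the features and annotations are synthetic, and the model choice is an assumption rather than the cited paper's exact setup.

```python
# Regression-based music emotion recognition: features -> (arousal, valence).
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
audio_features = rng.normal(size=(200, 30))           # hypothetical feature vectors
arousal_valence = rng.uniform(-1, 1, size=(200, 2))   # AV annotations in [-1, 1]

# One SVR per output dimension, with feature standardization.
model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(kernel="rbf")))
model.fit(audio_features, arousal_valence)
print(model.predict(audio_features[:3]))              # predicted AV values
```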
The Quest for Musical Genres: Do the Experts and the Wisdom of Crowds Agree?
TLDR
A multi-faceted approach to musical genre that combines expert-based classifications, dynamic associations derived from the wisdom of crowds, and content-based analysis can improve genre classification, as well as other relevant MIR tasks such as music similarity or music recommendation.
The 2007 MIREX Audio Mood Classification Task: Lessons Learned
TLDR
Important issues in setting up the AMC task are described, dataset construction and ground-truth labeling are discussed, and human assessments of the audio dataset as well as system performances are analyzed from various angles.
The music information retrieval evaluation exchange (2005-2007): A window into music information retrieval research
TLDR
The background, structure, challenges, and contributions of MIREX are examined, and it is shown that there are groups of systems that perform equally well within various MIR tasks.
Automatic Detection of Emotion in Music: Interaction with Emotionally Sensitive Machines
Creating emotionally sensitive machines will significantly enhance the interaction between humans and machines. In this chapter we focus on enabling this ability for music. …
Indexing by Latent Semantic Analysis
TLDR
A new method for automatic indexing and retrieval is described that takes advantage of implicit higher-order structure in the association of terms with documents ("semantic structure") in order to improve the detection of relevant documents on the basis of terms found in queries.
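As a compact illustration of the latent semantic indexing idea summarized above, the sketch below factorizes a toy TF-IDF term-document matrix with a truncated SVD and folds a query into the latent space; the corpus and dimensionality are illustrative only.

```python
# Toy latent semantic indexing: SVD of a term-document matrix plus query folding.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["sad slow piano ballad", "happy upbeat dance track",
        "melancholic ambient piano", "energetic party anthem"]
vectorizer = TfidfVectorizer()
term_doc = vectorizer.fit_transform(docs)

lsi = TruncatedSVD(n_components=2, random_state=0)
doc_vecs = lsi.fit_transform(term_doc)

# A query is projected ("folded in") with the same transform and matched
# against documents in the latent space rather than on raw term overlap.
query_vec = lsi.transform(vectorizer.transform(["gloomy piano"]))
print(cosine_similarity(query_vec, doc_vecs).ravel())
```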
Detecting emotion in music
TLDR
Since the preeminent functions of music are social and psychological, the most useful characterization would be based on four types of information: the style, emotion, genre, and similarity.
Data mining: practical machine learning tools and techniques with Java implementations
TLDR
This presentation discusses the design and implementation of machine learning algorithms in Java, as well as some of the techniques used to develop and implement these algorithms.
Music and emotion: Theory and research
That music has an incredible power to move us emotionally is without question. Whether performing music, listening to music, or creating music, this bond with our emotions is always there. …
…