Music Mood Annotator Design and Integration

@article{Laurier2009MusicMA,
  title={Music Mood Annotator Design and Integration},
  author={Cyril Laurier and Owen Meyers and Joan Serr{\`a} and Martin Blech and Perfecto Herrera},
  journal={2009 Seventh International Workshop on Content-Based Multimedia Indexing},
  year={2009},
  pages={156-161}
}
  • Published 3 June 2009
A robust and efficient technique for automatic music mood annotation is presented. A song's mood is expressed by a supervised machine learning approach based on musical features extracted from the raw audio signal. A ground truth, used for training, is created using both social network information systems and individual experts. Tests of 7 different classification configurations have been performed, showing that Support Vector Machines perform best for the task at hand. Moreover, we evaluate… 
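The pipeline described in the abstract (features extracted from audio, mapped to mood labels by a trained classifier) can be sketched minimally as follows. A nearest-centroid classifier stands in here for the paper's Support Vector Machine, and the two-dimensional feature vectors and mood labels are made up purely for illustration; they are not the paper's 133-descriptor feature set.

```python
# Minimal sketch of a supervised mood-annotation pipeline: feature vectors
# extracted from audio are mapped to mood labels by a trained classifier.
# A nearest-centroid classifier stands in for the paper's SVM; the feature
# values below are invented for illustration.
from statistics import mean

def train_centroids(examples):
    """examples: list of (feature_vector, mood_label) pairs."""
    by_label = {}
    for vec, label in examples:
        by_label.setdefault(label, []).append(vec)
    # One centroid (per-dimension mean) per mood label.
    return {label: [mean(dim) for dim in zip(*vecs)]
            for label, vecs in by_label.items()}

def predict_mood(centroids, vec):
    """Assign the mood whose centroid is closest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], vec))

# Hypothetical 2-D features, e.g. (tempo-like, brightness-like), scaled 0..1.
training = [
    ([0.9, 0.8], "happy"), ([0.8, 0.9], "happy"),
    ([0.2, 0.1], "sad"),   ([0.1, 0.2], "sad"),
]
centroids = train_centroids(training)
print(predict_mood(centroids, [0.85, 0.75]))  # → happy
print(predict_mood(centroids, [0.15, 0.20]))  # → sad
```

The paper's actual contribution lies in the feature extraction from raw audio and the ground truth built from social tags plus expert annotation; the classifier choice (SVM, best of 7 configurations tested) plugs into the same train/predict shape shown above.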

Citations

Active Learning for User-Tailored Refined Music Mood Detection
TLDR
This thesis, built on top of the work by Cyril Laurier and Perfecto Herrera in the Music Technology Group, addresses the need to expand current mood tags to more specific and complex emotions, and explores the use of active learning techniques.
Machine learning for music genre: multifaceted review and experimentation with audioset
TLDR
The main goal is to give the reader an overview of the history and the current state of the art, exploring techniques and datasets used to date, as well as identifying current challenges, such as the ambiguity of genre definitions or the introduction of human-centric approaches.
Retrieval and annotation of music using latent semantic models
TLDR
A joint aspect model is developed that can learn from both tagged and untagged tracks by indexing both conventional words and muswords; it is used as the basis of a music search system that supports query by example and by keyword, and of a simple probabilistic machine annotation system.
Music Mood Representations from Social Tags
TLDR
This study demonstrates the particular relevance of the basic emotions model, with four mood clusters that can be summarized as: happy, sad, angry and tender.
High-Level Libraries for Emotion Recognition in Music: A Review
TLDR
This article presents a review of high-level libraries that enable emotion recognition in digital music files, showing their main functionalities; the most representative attributes in the music emotion recognition (MER) field were selected.
Identification of versions of the same musical composition by processing audio descriptions
TLDR
This work proposes a system for version identification that is robust to the main musical changes between versions, including timbre, tempo, key and structure changes; it also builds and studies a complex network of versions and applies clustering and community detection algorithms.
From Low-Level to High-Level: Comparative Study of Music Similarity Measures
TLDR
This work proposes two distance measures based on tempo-related aspects and a high-level semantic measure based on support vector machine regression over different groups of musical dimensions, such as genre and culture, moods and instruments, or rhythm and tempo.
Unifying Low-Level and High-Level Music Similarity Measures
TLDR
This paper proposes three such distance measures based on the audio content: first, a low-level measure based on tempo-related description; second, a high-level semantic measure based on the inference of different musical dimensions by support vector machines; and third, a hybrid measure which combines the above-mentioned distance measures.

References

SHOWING 1-10 OF 28 REFERENCES
A Demonstrator for Automatic Music Mood Estimation
TLDR
While a subjective evaluation of this algorithm on arbitrary music is ongoing, the initial classification results are encouraging and suggest that an automatic prediction of music mood is possible.
The 2007 MIREX Audio Mood Classification Task: Lessons Learned
TLDR
Important issues in setting up the AMC task are described, dataset construction and ground-truth labeling are examined, and human assessments on the audio dataset, as well as system performances, are analyzed from various angles.
Automatic mood detection and tracking of music audio signals
TLDR
A hierarchical framework is presented to automate the task of mood detection from acoustic music data, following music psychology theories from Western cultures; it has the advantage of emphasizing the most suitable features in different detection tasks.
Support vector machine active learning for music retrieval
TLDR
In comparing a number of representations for songs, the statistics of mel-frequency cepstral coefficients are found to perform best in precision-at-20 comparisons, and it is shown that by choosing training examples intelligently, active learning requires half as many labeled examples to achieve the same accuracy as a standard scheme.
Extracting Emotions from Music Data
TLDR
A method for the appropriate objective description of audio files is proposed, experiments on a set of music pieces are described, and the results are summarized in the concluding chapter.
Audio music mood classification using support vector machine
The system submitted to the MIREX Audio Music Mood Classification task is described here. It uses a set of 133 descriptors and a Support Vector Machine classifier to predict the mood cluster.
Detecting emotion in music
TLDR
Since the preeminent functions of music are social and psychological, the most useful characterization would be based on four types of information: the style, emotion, genre, and similarity.
Mr. Emo: music retrieval in the emotion plane
TLDR
This technical demo presents a novel emotion-based music retrieval platform, called Mr. Emo, for organizing and browsing music collections; it defines emotions by two continuous variables, arousal and valence, and employs regression algorithms to predict them.
A Regression Approach to Music Emotion Recognition
TLDR
This paper formulates MER as a regression problem, predicting the arousal and valence values (AV values) of each music sample directly; the regression approach is also applied to detect emotion variation within a music selection, with prediction accuracy superior to existing works.
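The regression formulation in the entry above can be illustrated with a minimal sketch: instead of discrete mood classes, each track receives continuous arousal and valence (AV) values predicted by a model fit on annotated examples. Ordinary least squares on a single made-up feature stands in here for the paper's regression approach; the feature and AV targets below are invented for illustration.

```python
# Sketch of MER as regression: predict continuous arousal/valence values
# from an audio feature. Ordinary least squares (closed form) stands in
# for the paper's regressors; all numbers are illustrative.
def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares; return (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical feature (e.g. a loudness-like value) with annotated AV targets.
feature = [0.1, 0.4, 0.6, 0.9]
arousal = [0.2, 0.4, 0.6, 0.8]   # calm .. excited
valence = [0.3, 0.5, 0.6, 0.9]   # negative .. positive

models = {name: fit_line(feature, ys)
          for name, ys in (("arousal", arousal), ("valence", valence))}

def predict_av(x):
    """Map one feature value to a point in the arousal-valence plane."""
    return {name: a * x + b for name, (a, b) in models.items()}

print(predict_av(0.5))
```

Predicting a point in the AV plane rather than a class label is what enables tracking emotion variation over time: evaluating the model on successive segments of one track traces a path through the plane.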
Music Retrieval: A Tutorial and Review
  • N. Orio
  • Computer Science
    Found. Trends Inf. Retr.
  • 2006
TLDR
An overview of the techniques for music processing, which are commonly exploited in many approaches, is presented and a description of the initial efforts and evaluation campaigns for MIR is provided.