We study the importance of a melodic audio (MA) feature set in music emotion recognition (MER) and compare its performance to an approach using only standard audio (SA) features. We also analyse the fusion of both types of features. Employing only SA features, the best attained performance was 46.3%, while using only MA features the best outcome was 59.1%…
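The fusion of SA and MA features mentioned above can be illustrated with a minimal early-fusion sketch: the two per-clip feature matrices are simply concatenated before classification. The dimensions and feature values below are synthetic placeholders, not the paper's actual feature sets.

```python
# Hypothetical sketch of early (feature-level) fusion of standard audio (SA)
# and melodic audio (MA) features. All values are synthetic; the real SA/MA
# descriptors come from dedicated audio analysis frameworks.
import numpy as np

rng = np.random.default_rng(1)
n = 30                               # number of clips (invented)
sa = rng.normal(size=(n, 6))         # e.g. spectral / timbral descriptors
ma = rng.normal(size=(n, 4))         # e.g. pitch-contour / vibrato statistics
fused = np.hstack([sa, ma])          # early fusion: one 10-dim vector per clip

print(fused.shape)                   # → (30, 10)
```

Any standard classifier can then be trained on `fused` exactly as it would be on either feature set alone.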
In this work, three audio frameworks – Marsyas, MIR Toolbox, and PsySound3 – were used to extract audio features from the audio samples. These features were then used to train several classification models, resulting in the different versions submitted to the MIREX 2012 mood classification task.
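The extract-then-classify pipeline described above can be sketched as follows. Marsyas, MIR Toolbox, and PsySound3 are external tools, so this sketch uses toy hand-computed features (RMS energy and zero-crossing rate) on synthetic signals purely to show the shape of the pipeline; the feature function and mood labels are invented for illustration.

```python
# Minimal sketch of an extract-features-then-train-classifier pipeline,
# assuming synthetic audio and toy features in place of the real frameworks.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def extract_features(signal):
    """Toy stand-in for frame-level audio features (RMS energy, ZCR)."""
    frames = signal.reshape(-1, 512)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    # Summarise frame-level features by mean and std, as is common in MER.
    return np.array([rms.mean(), rms.std(), zcr.mean(), zcr.std()])

# Synthetic "clips": two mood classes with different energy levels.
X = np.array([extract_features(rng.normal(0, 1 + label, 512 * 8))
              for label in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)

clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

In the actual submissions, each framework contributes its own feature set, and the different feature/classifier combinations yield the different submitted versions.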
Large digital databases of Hindi music are available, which creates an opportunity to filter this data by multiple parameters. One of the most important parameters used by listeners is mood. This paper focuses on the automatic generation of mood-based playlists for Hindi popular music with minimal user intervention. There are two major modules of…
Our work towards music emotion recognition in MIREX 2013 stems from our best strategy from 2012 and the addition of new melodic audio features, the subject of study during the current year. Three audio frameworks – Marsyas, MIR Toolbox, and PsySound3 – are used to extract the commonly used audio features…
We present a study on music emotion recognition from lyrics. We start from a dataset of 764 samples (audio+lyrics) and perform feature extraction using several natural language processing techniques. Our goal is to build classifiers for the different datasets, comparing different algorithms and using feature selection. The best results (44.2% F-measure)…
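A lyric-based pipeline of the kind described above – NLP feature extraction, feature selection, then classification – can be sketched briefly. The tiny lyric snippets, the "happy"/"sad" labels, and the choice of TF-IDF, chi-squared selection, and naive Bayes are all illustrative assumptions, not the paper's actual dataset or models.

```python
# Hypothetical sketch of lyric-based emotion classification with feature
# selection. The lyrics and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

lyrics = [
    "sunshine dancing joy bright smile",
    "party lights happy dancing night",
    "tears rain lonely cold goodbye",
    "broken heart crying dark sorrow",
] * 5
labels = ["happy", "happy", "sad", "sad"] * 5

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),          # bag-of-words NLP features
    ("select", SelectKBest(chi2, k=10)),   # keep the 10 most informative terms
    ("clf", MultinomialNB()),              # one of several candidate models
])
pipe.fit(lyrics, labels)
print(pipe.predict(["crying lonely tears"])[0])  # → sad
```

Swapping the final estimator (e.g. for an SVM) and varying `k` is one simple way to compare algorithms and feature-selection settings, as the study describes.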