A Regression Approach to Music Emotion Recognition
TLDR
This paper formulates MER as a regression problem that directly predicts the arousal and valence (AV) values of each music sample, applies the regression approach to detect emotion variation within a music selection, and finds its prediction accuracy superior to that of existing work.
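As an illustration of this regression formulation, the following is a minimal sketch: train one regressor per emotion dimension (valence, arousal) on audio features and evaluate with R^2. The synthetic data, feature dimensionality, and choice of support vector regression are assumptions for illustration only, not the paper's exact pipeline.

```python
# Minimal sketch of regression-based MER: predict valence/arousal (AV)
# values from audio features. Data and features are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))        # e.g. 40 audio features per clip (hypothetical)
y_valence = rng.uniform(-1, 1, 200)   # annotated valence in [-1, 1]
y_arousal = rng.uniform(-1, 1, 200)   # annotated arousal in [-1, 1]

# One regressor per emotion dimension; report mean cross-validated R^2.
for name, y in [("valence", y_valence), ("arousal", y_arousal)]:
    score = cross_val_score(SVR(kernel="rbf"), X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {score:.3f}")
```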
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
TLDR
Three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs) are proposed, referred to as the jamming model, the composer model, and the hybrid model, all of which can generate coherent four-bar music from scratch.
MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation
TLDR
This work proposes a novel conditional mechanism to exploit available prior knowledge, so that the generative adversarial network (GAN) model can generate melodies from scratch, by following a chord sequence, or by conditioning on the melody of previous bars.
Machine Recognition of Music Emotion: A Review
TLDR
This article provides a comprehensive review of the methods that have been proposed for music emotion recognition and concludes with suggestions for further research.
1000 songs for emotional analysis of music
TLDR
This work presents a new publicly available dataset for music emotion recognition research, consisting entirely of Creative Commons music from the Free Music Archive that can be shared freely without penalty, together with a baseline system.
Ranking-Based Emotion Recognition for Music Organization and Retrieval
TLDR
Experimental results show that this ranking-based approach simplifies emotion annotation and enhances the reliability of the ground truth.
Toward Multi-modal Music Emotion Classification
TLDR
By exploiting both the audio features and the lyrics of a song, the proposed approach improves the 4-class emotion classification accuracy from 46.6% to 57.1% and shows that the incorporation of lyrics significantly enhances the classification accuracy of valence.
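The fusion idea behind this multi-modal entry can be illustrated with a tiny early-fusion sketch: concatenate audio features with TF-IDF lyrics features and train a single classifier. The data, feature choices, and linear SVM classifier here are hypothetical placeholders, not the paper's actual setup.

```python
# Minimal sketch of early fusion for 4-class music emotion classification:
# concatenate audio features with TF-IDF lyrics features, then classify.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
audio_feats = rng.normal(size=(n, 30))   # e.g. timbre/rhythm features (hypothetical)
lyrics = ["love bright happy dance"] * (n // 2) + ["dark lonely rain cry"] * (n // 2)
labels = rng.integers(0, 4, size=n)      # 4 emotion classes (e.g. AV quadrants)

tfidf = TfidfVectorizer().fit_transform(lyrics)          # bag-of-words lyric features
X = hstack([csr_matrix(audio_feats), tfidf]).tocsr()     # early fusion by concatenation

print(cross_val_score(LinearSVC(dual=False), X, labels, cv=5).mean())
```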
Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation
TLDR
Experimental results show that using binary neurons instead of hard thresholding (HT) or Bernoulli sampling (BS) indeed leads to better results on a number of objective measures, and that deterministic binary neurons perform better than stochastic ones in both the objective measures and a subjective test.
Music Emotion Recognition
Providing a complete review of existing work on music emotion developed in psychology and engineering, Music Emotion Recognition explains how to account for the subjective nature of emotion perception.
Smooth Control of Adaptive Media Playout for Video Streaming
TLDR
This paper proposes a novel adaptive media playout (AMP) scheme that keeps video playout as smooth as possible while adapting to the channel condition, and shows that the scheme surpasses conventional schemes under unfriendly network conditions.