Corpus ID: 246015975

A Novel Multi-Task Learning Method for Symbolic Music Emotion Recognition

@inproceedings{Qiu2022ANM,
  title={A Novel Multi-Task Learning Method for Symbolic Music Emotion Recognition},
  author={Jibao Qiu and C. L. Philip Chen and Tong Zhang},
  year={2022}
}
Symbolic Music Emotion Recognition (SMER) aims to predict music emotion from symbolic data, such as MIDI and MusicXML. Previous work mainly focused on learning better representations via (masked) language model pre-training but ignored the intrinsic structure of the music, which is extremely important to its emotional expression. In this paper, we present a simple multi-task framework for SMER, which combines the emotion recognition task with other emotion-related auxiliary tasks derived… 
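The abstract describes combining a main emotion-recognition objective with auxiliary objectives. As a minimal sketch only, assuming a simple weighted-sum formulation (the paper's actual tasks and weighting scheme are not given in this excerpt), a multi-task loss might look like:

```python
def multi_task_loss(main_loss, aux_losses, aux_weight=0.5):
    """Hypothetical multi-task objective: the main emotion-recognition
    loss plus a weighted sum of emotion-related auxiliary losses."""
    return main_loss + aux_weight * sum(aux_losses)

# Example: one emotion-classification loss and two auxiliary losses.
total = multi_task_loss(1.2, [0.4, 0.6], aux_weight=0.5)
```

The weight `aux_weight` and the choice of auxiliary losses are placeholders; the paper's framework may use a different combination rule.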
Evaluation of the Emotion Model in Electronic Music Based on PSO-BP
  • Ting Guo
  • Computer Science
    Computational intelligence and neuroscience
  • 2022
TLDR
The electronic music emotion analysis model based on a PSO-BP neural network can reduce the error rate of emotion classification for electronic music lyric text and can identify and analyze electronic music emotion with high accuracy; its results are close to the actual values and meet the expected requirements.

References

SHOWING 1-10 OF 42 REFERENCES
EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation
TLDR
The EMOPIA dataset is presented, a shared multi-modal (audio and MIDI) database focusing on perceived emotion in pop piano music, to facilitate research on various tasks related to music emotion.
Multi-Modal Music Emotion Recognition: A New Dataset, Methodology and Comparative Analysis
TLDR
A methodology for the automatic creation of a multi-modal music emotion dataset resorting to the AllMusic database, based on the emotion tags used in the MIREX Mood Classification Task is introduced.
Exploration of Music Emotion Recognition Based on MIDI
TLDR
It is found that melody was more important than accompaniment for valence regression, while the opposite held for arousal, and that the chorus of an edited MIDI may contain as much information as the entire edited MIDI for valence regression.
Novel Audio Features for Music Emotion Recognition
TLDR
This work advances the music emotion recognition state of the art by proposing novel emotionally relevant audio features related to musical texture and expressive techniques; analysis of feature relevance and the results uncovered interesting relations.
Audio-based deep music emotion recognition
TLDR
A strategy to recognize the emotion contained in songs by classifying their spectrograms, which capture both time and frequency information, with a convolutional neural network (CNN).
Audio Features for Music Emotion Recognition: a Survey
TLDR
Although the focus of this article is on classical feature engineering methodologies (based on handcrafted features), perspectives on deep learning-based approaches are discussed and strategies for future research on feature engineering for MER are proposed.
MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training
TLDR
This paper develops MusicBERT, a large-scale pre-trained model for music understanding trained on a corpus of more than 1 million songs, and designs several mechanisms, including OctupleMIDI encoding and a bar-level masking strategy, to enhance pre-training with symbolic music data.
Selection of Audio Features for Music Emotion Recognition Using Production Music
TLDR
The results show that 32 spectral, harmonic, rhythmic, and temporal features are needed for optimum performance, but as the error converges quickly, good performance can be achieved with far fewer features.
MidiBERT-Piano: Large-scale Pre-training for Symbolic Music Understanding
TLDR
An attempt to employ the masked language modeling approach of BERT to pre-train a 12-layer Transformer model for tackling a number of symbolic-domain discriminative music understanding tasks, finding that, given a pre-trained Transformer, the models outperform recurrent neural network baselines with fewer than 10 epochs of fine-tuning.
Learning to Generate Music With Sentiment
TLDR
A generative deep learning model that can be directed to compose music with a given sentiment; it achieves good prediction accuracy and can also be used for sentiment analysis of symbolic music.
...