• Corpus ID: 615781

Lyrics-Based Music Genre Classification Using a Hierarchical Attention Network

@inproceedings{Tsaptsinos2017,
  title={Lyrics-Based Music Genre Classification Using a Hierarchical Attention Network},
  author={Alexandros Tsaptsinos},
  booktitle={International Society for Music Information Retrieval Conference},
  year={2017}
}
Music genre classification, especially using lyrics alone, remains a challenging topic in Music Information Retrieval. As lyrics exhibit a hierarchical layer structure, in which words combine to form lines, lines form segments, and segments form a complete song, we adapt a hierarchical attention network (HAN) to exploit these layers and, in addition, to learn the importance of the words, lines, and segments. We test the model over a 117-genre dataset and a reduced 20-genre dataset. Experimental…
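The abstract describes attention pooling applied at three levels of the lyric hierarchy: words into lines, lines into segments, segments into a song vector. A minimal numpy sketch of that pooling idea follows; it is not the paper's implementation (which uses trained recurrent encoders), and the random embeddings and context vectors are purely illustrative stand-ins for learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(vectors, context):
    """Attention-pool a stack of vectors (n, d) into one (d,) vector:
    scores are dot products with a context vector, weights are their softmax."""
    scores = vectors @ context      # (n,) importance scores
    weights = softmax(scores)       # normalized attention weights
    return weights @ vectors        # weighted sum, shape (d,)

rng = np.random.default_rng(0)
d = 8
# Toy song: 2 segments; each line is a (num_words, d) array of word embeddings.
song = [
    [rng.normal(size=(5, d)), rng.normal(size=(4, d))],  # segment 1: 2 lines
    [rng.normal(size=(6, d))],                           # segment 2: 1 line
]
# One (illustrative) learned context vector per level.
word_ctx, line_ctx, seg_ctx = (rng.normal(size=d) for _ in range(3))

# words -> lines -> segments -> song, attention at each level
line_vecs = [[attend(words, word_ctx) for words in seg] for seg in song]
seg_vecs = np.stack([attend(np.stack(lines), line_ctx) for lines in line_vecs])
song_vec = attend(seg_vecs, seg_ctx)  # final song representation, shape (d,)
```

In the full model, `song_vec` would feed a softmax classifier over the genre labels, and the attention weights at each level indicate which words, lines, and segments the model found most informative.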


Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network

A novel approach for automatically classifying musical genre in Brazilian music using only the song lyrics is presented, and it is shown that the BLSTM method outperforms the other models with an average F1-score of 0.48.

Lukthung Classification Using Neural Networks on Lyrics and Audios

This paper develops neural networks to distinguish the Lukthung genre from others using both lyrics and audio with a convolutional neural network (CNN) architecture, and shows that the three proposed models outperform all of the standard classifiers.

On Combining Diverse Models for Lyrics-Based Music Genre Classification

Different strategies for music genre classification from lyrics are explored and it is shown that even simple combinations of these strategies allow improving accuracy on the lyrics-based music genre identification.

Music Genre Classification using Song Lyrics

This project used GloVe embeddings in two logistic regression models to classify songs into genres using their lyrics, and trained an LSTM model and a bidirectional LSTM model, achieving an accuracy of 68%.

Comparing Lyrics Features for Genre Recognition

The results show that textual features produce accuracy scores comparable to audio features, and it is seen that audio and textual features complement each other well, with models trained using both types of features producing the best accuracy scores.

A Novel Multimodal Music Genre Classifier using Hierarchical Attention and Convolutional Neural Network

This work implemented a CNN based feature extractor for spectrograms in order to incorporate the acoustic features and a Hierarchical Attention Network based feature Extractor for lyrics to classify the music track based upon the resulting fused feature vector.

A general framework for learning prosodic-enhanced representation of rap lyrics

A hierarchical attention variational autoencoder framework (HAVAE), which simultaneously considers semantic and prosodic features for rap lyrics representation learning and outperforms the state-of-the-art approaches under various metrics in different rap lyrics learning tasks.

Exploiting Heterogeneous Artist and Listener Preference Graph for Music Genre Classification

A novel graph-based neural network is proposed to automatically encode the global preference relations of the heterogeneous graph into artist and listener representations, and a graph convolutional network is applied to learn genre representation from the correlation graph.

Connecting the Last.fm Dataset to LyricWiki and MusicBrainz. Lyrics-based experiments in genre classification

The construction of an English lyrics dataset based on the Last.fm Dataset, connected to LyricWiki's database and MusicBrainz’s encyclopedia is described, showing that more sophisticated textual features can improve genre classification performance and indicating the superiority of the binary weighting scheme compared to tf–idf.

Music Genre Classification by Ensembles of Audio and Lyrics Features

Advancing over previous work that showed improvements with simple feature fusion, the more sophisticated approach of result (or late) fusion is applied, achieving results superior to the best choice of a single algorithm on a single feature set.

Timbral modeling for music artist recognition using i-vectors

New song-level timbre-related features that are built from frame-level MFCCs via so-called i-vectors are proposed that yields considerable improvements and outperforms existing methods.

Improving mood classification in music digital libraries by combining lyrics and audio

The results show that combining lyrics and audio significantly outperformed systems using audio-only features and that the hybrid lyric + audio system needed fewer training samples to achieve the same or better classification accuracies than systems using lyrics or audio singularly.

Automatic Musical Pattern Feature Extraction Using Convolutional Neural Network

This work proposes a novel approach to extract musical pattern features in audio music using convolutional neural network (CNN), a model widely adopted in image information retrieval tasks.

Genre and mood classification using lyric features

Experiments show that classification accuracies for mood categories outperform those for genres, and the part-of-speech feature is utilized for classifying a collection of 600 songs.

Evaluating the Genre Classification Performance of Lyrical Features Relative to Audio, Symbolic and Cultural Features

It was found that cultural features consisting of information extracted from both web searches and mined listener tags were particularly effective, with the result that classification accuracies were achieved that compare favorably with the current state of the art of musical genre classification.

A new hierarchical method for music genre classification

  • Wei Du, Hu Lin, Jianwei Sun, Bo Yu, H. Yang
  • Computer Science
    2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
  • 2016
A new music genre classification method which utilizes hierarchical analysis of the spectrogram features extracted from the audio signals is presented, and the results show that this model achieves results comparable with other existing music genre classification methods.

Combination of audio and lyrics features for genre classification in digital audio collections

Findings from investigating advanced lyrics features, such as the frequency of certain rhyme patterns, several parts-of-speech features, and statistic features such as words per minute (WPM), are presented.

Lyrics-based Analysis and Classification of Music

We present a novel approach for analysing and classifying lyrics, experimenting both with n-gram models and more sophisticated features that model different dimensions of a song text, such as…

Melody Extraction on Vocal Segments Using Multi-Column Deep Neural Networks

A classification-based approach for melody extraction on vocal segments using multi-column deep neural networks, in which each network is trained to predict a pitch label of the singing voice from a spectrogram but with outputs at different pitch resolutions.