Lukthung Classification Using Neural Networks on Lyrics and Audios

@article{Kamtue2019Lukthung,
  title={Lukthung Classification Using Neural Networks on Lyrics and Audios},
  author={Kawisorn Kamtue and Kasina Euchukanonchai and Dittaya Wanvarie and Naruemon Pratanwanich},
  journal={2019 23rd International Computer Science and Engineering Conference (ICSEC)},
  year={2019}
}
Music genre classification is a widely researched topic in music information retrieval (MIR). Automatically tagging genres benefits music streaming providers such as JOOX, Apple Music, and Spotify in content-based recommendation. However, most studies on music classification have been carried out on Western songs, which differ from Thai songs. Lukthung, a distinctive and long-established type of Thai music, is one of the most popular music genres in Thailand and has a…


Lyrics-Based Music Genre Classification Using a Hierarchical Attention Network
This study applies recurrent neural network models to classify a large dataset of intact song lyrics, adapting a hierarchical attention network (HAN) to exploit the layered structure of lyrics and learn the importance of words, lines, and segments.
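The word-level pooling step of such an attention network can be sketched in plain NumPy. This is a simplified illustration, not the cited paper's implementation: a full HAN first passes each hidden state through a small tanh layer, whereas here the attention scores come straight from a dot product with a context vector `u`.

```python
import numpy as np

def attention_pool(H, u):
    """Word-level attention pooling: score each word vector in H
    against a (normally learned) context vector u, softmax the
    scores, and return the weighted sum as the line vector."""
    scores = H @ u                       # one score per word, shape (n_words,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax attention weights
    return weights @ H, weights          # (d,) line vector, (n_words,) weights

# toy example: 4 "word" vectors of dimension 3 (random stand-ins
# for RNN hidden states, purely illustrative)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
u = rng.normal(size=3)
line_vec, w = attention_pool(H, u)
```

Stacking the same pooling once over words within a line and again over line vectors within a segment is what makes the network hierarchical.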
Turkish Music Genre Classification using Audio and Lyrics Features
Experimental results show that textual features can be as effective as audio features for Turkish MGC, especially when a supervised term weighting method is employed.
Automatic Musical Pattern Feature Extraction Using Convolutional Neural Network
This work proposes a novel approach to extracting musical pattern features from audio using a convolutional neural network (CNN), a model widely adopted in image information retrieval tasks.
Transfer Learning by Supervised Pre-training for Audio-based Music Classification
It is shown that features learned from MSD audio fragments in a supervised manner, using tag labels and user listening data, consistently outperform features learned in an unsupervised manner in this setting, provided that the learned feature extractor is of limited complexity.
Music Genre Classification by Ensembles of Audio and Lyrics Features
Advancing over previous work that showed improvements with simple feature fusion, the more sophisticated approach of result (or late) fusion is applied, achieving results superior to the best choice of a single algorithm on a single feature set.
A Deep Bag-of-Features Model for Music Auto-Tagging
This paper presents a two-stage learning model to effectively predict multiple labels from music audio, achieving high performance on MagnaTagATune, a popular benchmark dataset in music auto-tagging.
Semi-supervised learning for music artists style identification
This paper addresses the issue of identifying the artist style of singer-songwriters through a semi-supervised learning approach, in which a classification algorithm is trained on each feature set while the target label is adjusted for the input data so as to minimize disagreement between the classifiers.
Music genre recognition using spectrograms
An alternative approach to music genre classification that converts the audio signal into spectrograms and then extracts features from this visual representation, demonstrating that a classifier trained on texture features performs comparably to results in the literature.
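The spectrogram conversion this approach starts from can be sketched in plain NumPy via a short-time Fourier transform; the frame length and hop size below are arbitrary illustrative choices, not the cited paper's settings.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Convert a 1-D audio signal into a magnitude spectrogram:
    slice it into overlapping Hann-windowed frames, FFT each
    frame, and keep the positive-frequency magnitudes. The
    result can be treated as an image for texture features."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq_bins, n_frames)

# toy example: 1 s of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
S = spectrogram(np.sin(2 * np.pi * 440 * t))
```

With `n_fft=256` at 8 kHz each frequency bin spans 31.25 Hz, so the tone's energy concentrates around bin 14 in every column of `S`.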
Musical genre classification of audio signals
The automatic classification of audio signals into a hierarchy of musical genres is explored, and three feature sets representing timbral texture, rhythmic content, and pitch content are proposed.
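As one concrete member of the timbral-texture family, the spectral centroid (the magnitude-weighted mean frequency of a frame) can be computed as below; this is a standard feature illustration, not a reconstruction of the paper's full feature sets.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Spectral centroid, a classic timbral-texture feature:
    the magnitude-weighted mean frequency of one audio frame."""
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return (freqs * mags).sum() / mags.sum()

# a pure 1 kHz tone should have its centroid at about 1 kHz
sr = 8000
t = np.arange(512) / sr
c = spectral_centroid(np.sin(2 * np.pi * 1000 * t), sr)
```

Brighter timbres (more high-frequency energy) push the centroid upward, which is why it helps separate genres with different instrumentation.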
End-to-end learning for music audio
  • S. Dieleman, B. Schrauwen
  • 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Although the convolutional neural networks do not outperform a spectrogram-based approach, they are able to autonomously discover frequency decompositions from raw audio, as well as phase- and translation-invariant feature representations.
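The raw-audio front end of such an end-to-end network amounts to a strided 1-D convolution over the waveform. The sketch below uses random, untrained filters purely to show the shape of the computation; the kernel length, stride, and filter count are illustrative assumptions, not the cited paper's architecture.

```python
import numpy as np

def conv1d(signal, filters, stride):
    """Strided 1-D convolution over a raw waveform, as in the
    first layer of an end-to-end audio network. Each row of
    `filters` is one kernel; the output is a ReLU-activated
    (n_filters, n_steps) feature map."""
    k = filters.shape[1]
    n_steps = 1 + (len(signal) - k) // stride
    windows = np.stack([signal[i * stride : i * stride + k]
                        for i in range(n_steps)])   # (n_steps, k)
    return np.maximum(filters @ windows.T, 0.0)     # ReLU feature map

rng = np.random.default_rng(1)
x = rng.normal(size=1024)        # stand-in for a raw audio frame
W = rng.normal(size=(8, 64))     # 8 random kernels of length 64
features = conv1d(x, W, stride=32)
```

During training the kernels in `W` would be learned by backpropagation, which is how the network discovers frequency decompositions on its own.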