Creating an A Cappella Singing Audio Dataset for Automatic Jingju Singing Evaluation Research

@inproceedings{Gong2017CreatingAA,
  title={Creating an A Cappella Singing Audio Dataset for Automatic Jingju Singing Evaluation Research},
  author={Rong Gong and Rafael Caro Repetto and Xavier Serra},
  booktitle={Proceedings of the 4th International Workshop on Digital Libraries for Musicology},
  year={2017}
}
Data-driven computational research on automatic jingju (also known as Beijing or Peking opera) singing evaluation lacks a suitable and comprehensive a cappella singing audio dataset. In this work, we present an a cappella singing audio dataset consisting of 120 arias, accounting for 1265 melodic lines. This dataset is also an extension of our existing CompMusic jingju corpus. Both professional and amateur singers were invited to the dataset recording sessions, and the most common jingju…
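For readers who want to work with a dataset organized this way (recordings grouped into arias and segmented into melodic lines, sung by professional and amateur singers), the sketch below tabulates aria and melodic-line counts per singer level from a metadata table. This is a minimal illustration only: the file name metadata.csv and the column names aria_id, line_id, and singer_level are assumptions for the sake of the example, not the dataset's published schema.

import csv
from collections import Counter

def summarize(metadata_csv: str) -> None:
    """Count arias and melodic lines per singer level in a metadata table.

    Assumes one row per melodic line, with columns aria_id, line_id,
    and singer_level (hypothetical schema, for illustration only).
    """
    arias = set()
    lines_per_level = Counter()
    with open(metadata_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            arias.add(row["aria_id"])
            # singer_level would be e.g. "professional" or "amateur"
            lines_per_level[row["singer_level"]] += 1
    print(f"arias: {len(arias)}")                             # expected: 120
    print(f"melodic lines: {sum(lines_per_level.values())}")  # expected: 1265
    for level, count in sorted(lines_per_level.items()):
        print(f"  {level}: {count}")

if __name__ == "__main__":
    summarize("metadata.csv")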

Citations

Erkomaishvili Dataset: A Curated Corpus of Traditional Georgian Vocal Music for Computational Musicology
TLDR
A curated dataset of traditional Georgian vocal music for computational musicology based on historic tape recordings of three-voice Georgian songs performed by the former master chanter Artem Erkomaishvili is presented.
The Tarteel Dataset: Crowd-Sourced and Labeled Quranic Recitation
TLDR
The Tarteel recitation dataset is described, the first large-scale dataset of Quranic recitation and accompanying Arabic text collected in a crowd-sourced manner, and a standard schema for paired Quranic audio and text datasets is proposed.
On-Line Audio-to-Lyrics Alignment Based on a Reference Performance
TLDR
This work describes the first real-time-capable audio-to-lyrics alignment pipeline that is able to robustly track the lyrics of different languages, without additional language information.

References

Score-informed syllable segmentation for Jingju a cappella singing voice with Mel-frequency intensity profiles
This paper introduces a new unsupervised and score-informed method for the segmentation of singing voice into syllables. The main idea of the proposed method is to detect the syllable onset on a…
Score-Informed Syllable Segmentation for A Cappella Singing Voice with Convolutional Neural Networks
TLDR
This paper introduces a new score-informed method for the segmentation of jingju a cappella singing phrases into syllables that outperforms the state-of-the-art in syllable segmentation for jingju a cappella singing.
Towards Music Structural Segmentation across Genres
TLDR
Results show that different features capture the structural patterns of different music genres in different ways, indicating that the design of audio features and segmentation algorithms, as well as contextual information about the music corpora, should be considered jointly in an effective segmentation system.
A Collection of Music Scores for Corpus Based Jingju Singing Research
Paper presented at ISMIR 2017, held in Suzhou, China, 23-27 October 2017.
Creating a Corpus of Jingju (Beijing Opera) Music and Possibilities for Melodic Analysis
Paper presented at the 15th International Society for Music Information Retrieval Conference (ISMIR 2014), held 27-31 October 2014 in Taipei, Taiwan.
Audio to Score Matching by Combining Phonetic and Duration Information
TLDR
This work argues that, due to the existence of a basic melodic contour for each mode in jingju music, using melodic information alone results in ambiguous matching, and it proposes a matching approach based on phonetic and duration information.
Listening to Theatre: The Aural Dimension of Beijing Opera
Behind its elaborate costumes and make-up, its pantomime and acrobatics, Beijing opera is above all a world created in sound, so much so that attending a performance in China is referred to as "listening to theatre".
Pitch contour segmentation for computer-aided jingju singing training
Paper presented at the 13th Sound and Music Computing Conference (SMC 2016), held in Hamburg, Germany, 31 August to 3 September 2016.
Creating Research Corpora for the Computational Study of Music: the case of the CompMusic Project
Paper presented at the 53rd International Conference: Semantic Audio, held 27-29 January 2014 in London, United Kingdom.
Monoaural Audio Source Separation Using Deep Convolutional Neural Networks
TLDR
A low-latency monaural source separation framework using a Convolutional Neural Network; its performance is evaluated on a database comprising musical mixtures of three instruments, as well as other instruments that vary from song to song.