An End-to-End Neural Network for Polyphonic Music Transcription


We present a supervised neural network model for polyphonic piano music transcription. The architecture of the proposed model is analogous to speech recognition systems and comprises an acoustic model and a music language model. The acoustic model is a neural network used for estimating the probabilities of pitches in a frame of audio, while the language model is a recurrent neural network that models the correlations between pitch combinations over time. The proposed model is general and can be used to transcribe polyphonic music without imposing any constraints on the polyphony. The acoustic and language model predictions are combined using a probabilistic graphical model, and inference over the output variables is performed with the beam search algorithm. We perform two sets of experiments: we investigate various neural network architectures for the acoustic models, and we investigate the effect of combining acoustic and music language model predictions using the proposed architecture. We compare the performance of the neural-network-based acoustic models with that of two popular unsupervised acoustic models. Results show that convolutional neural network acoustic models yield the best performance across all evaluation metrics. We also observe improved performance with the application of the music language models. Finally, we present an efficient variant of beam search that improves performance and reduces runtimes by an order of magnitude, making the model suitable for real-time applications.
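To illustrate the decoding step described above, the following is a minimal sketch of how beam search might combine frame-wise acoustic posteriors with a language-model prior. All names, the input format, and the candidate-enumeration scheme are hypothetical simplifications for illustration; the paper's actual inference operates on a probabilistic graphical model and uses a more efficient beam search variant.

```python
import math
from itertools import combinations

def beam_search(acoustic_probs, lm_score, beam_width=4, max_notes=2):
    """Frame-level beam search over sequences of pitch combinations.

    acoustic_probs: list of frames; each frame is a list of per-pitch
        posteriors from an acoustic model (hypothetical input format).
    lm_score: callable (prefix, candidate) -> log-probability of the
        candidate pitch set given the preceding sequence (the language model).
    """
    n_pitches = len(acoustic_probs[0])
    # Candidate pitch sets: all subsets of up to max_notes pitches.
    # (Toy-scale enumeration; the real model imposes no polyphony limit.)
    candidates = [frozenset(c) for n in range(max_notes + 1)
                  for c in combinations(range(n_pitches), n)]
    beams = [([], 0.0)]  # (sequence of pitch sets, cumulative log-score)
    for frame in acoustic_probs:
        scored = []
        for prefix, score in beams:
            for cand in candidates:
                # Log-likelihood of the binary pitch vector under the
                # frame-wise posteriors (pitches assumed independent).
                acoustic = sum(
                    math.log(frame[i]) if i in cand else math.log(1.0 - frame[i])
                    for i in range(n_pitches))
                scored.append((prefix + [cand],
                               score + acoustic + lm_score(prefix, cand)))
        # Keep only the beam_width highest-scoring hypotheses.
        scored.sort(key=lambda hyp: hyp[1], reverse=True)
        beams = scored[:beam_width]
    return beams[0]
```

With a uniform language model (`lm_score` returning 0 everywhere) this reduces to independently thresholding the per-frame posteriors; the language-model term is what lets temporal context re-rank pitch combinations that the acoustic model alone would score similarly.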


Cite this paper

@article{Sigtia2015AnEN,
  title   = {An End-to-End Neural Network for Polyphonic Music Transcription},
  author  = {Siddharth Sigtia and Emmanouil Benetos and Simon Dixon},
  journal = {CoRR},
  volume  = {abs/1508.01774},
  year    = {2015}
}