An End-to-End Neural Network for Polyphonic Piano Music Transcription


We present a supervised neural network model for polyphonic piano music transcription. The architecture of the proposed model is analogous to speech recognition systems and comprises an <i>acoustic model</i> and a <i>music language model</i>. The acoustic model is a neural network used for estimating the probabilities of pitches in a frame of audio. The language model is a recurrent neural network that models the correlations between pitch combinations over time. The proposed model is general and can be used to transcribe polyphonic music without imposing any constraints on the polyphony. The acoustic and language model predictions are combined using a probabilistic graphical model. Inference over the output variables is performed using the beam search algorithm. We perform two sets of experiments: we investigate various neural network architectures for the acoustic models, and we investigate the effect of combining acoustic and music language model predictions using the proposed architecture. We compare the performance of the neural network-based acoustic models with two popular unsupervised acoustic models. Results show that convolutional neural network acoustic models yield the best performance across all evaluation metrics. We also observe improved performance with the application of the music language models. Finally, we present an efficient variant of beam search that improves performance and reduces run-times by an order of magnitude, making the model suitable for real-time applications.
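The decoding step described above can be illustrated with a minimal sketch: at each audio frame, hypotheses (sequences of pitch sets) are extended with candidate pitch combinations, scored by summing acoustic and language-model log-probabilities, and pruned to a fixed beam width. This is a generic beam search, not the paper's exact hashed variant; the candidate sets, scores, and the `smooth_lm` transition function below are hypothetical toy values for illustration only.

```python
import math

def beam_search(acoustic_logp, lm_logp, beam_width=4):
    """Combine per-frame acoustic log-probabilities with a
    transition-based language-model score, keeping only the top
    `beam_width` hypotheses after each frame.

    acoustic_logp: list (one entry per frame) of dicts mapping a
                   candidate pitch set -> acoustic log-probability.
    lm_logp:       function (previous pitch set or None, candidate) ->
                   language-model transition log-probability.
    """
    beam = [((), 0.0)]  # (sequence of pitch sets, cumulative log-prob)
    for frame_scores in acoustic_logp:
        expanded = []
        for seq, score in beam:
            prev = seq[-1] if seq else None
            for cand, a_lp in frame_scores.items():
                expanded.append((seq + (cand,),
                                 score + a_lp + lm_logp(prev, cand)))
        # Prune: keep only the highest-scoring hypotheses.
        beam = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam_width]
    return beam[0]  # best transcription and its log-probability

# Toy usage: two frames, candidates are frozensets of MIDI pitch numbers.
frames = [
    {frozenset({60}): math.log(0.7), frozenset({60, 64}): math.log(0.3)},
    {frozenset({60, 64}): math.log(0.6), frozenset({62}): math.log(0.4)},
]

def smooth_lm(prev, cand):
    # Hypothetical smoothness prior: favor frames that share pitches.
    if prev is None:
        return 0.0
    return math.log(0.8) if prev & cand else math.log(0.2)

best_seq, best_score = beam_search(frames, smooth_lm)
# best_seq -> (frozenset({60}), frozenset({60, 64}))
```

In the paper the hypothesis space is far larger (all pitch combinations per frame), which is why the proposed hashed beam search variant matters for run-time; the pruning logic, however, is the same as in this sketch.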

DOI: 10.1109/TASLP.2016.2533858



Cite this paper

@article{Sigtia2016AnEN,
  title={An End-to-End Neural Network for Polyphonic Piano Music Transcription},
  author={Siddharth Sigtia and Emmanouil Benetos and Simon Dixon},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  year={2016},
  volume={24},
  pages={927-939}
}