Fast vs Slow: Learning Tempo Octaves from User Data

Abstract

The widespread use of beat- and tempo-tracking methods in music information retrieval tasks has been marginalized by the undesirably sporadic results these algorithms produce. While sensorimotor and listening studies have demonstrated the subjectivity and variability inherent in human performance of this task, MIR applications such as recommendation require more reliable output than current tempo estimation models provide. In this paper, we present an initial investigation of tempo assessment based on a simple classification of whether the music is fast or slow. Through three experiments, we report the performance of our method across two datasets and demonstrate its usefulness in the pursuit of reliable global tempo estimation.
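To make the idea of a fast/slow decision concrete, the sketch below shows a minimal binary tempo-class classifier. It is only an illustration under assumed inputs: the placeholder feature vectors, the synthetic fast/slow labels, and the logistic-regression model are hypothetical and are not the features, labels, or classifier used in the paper.

    # Hypothetical sketch: binary fast/slow tempo-class classification.
    # Features and labels are placeholders, not the paper's actual pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                   # placeholder per-track features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # placeholder labels: 0 = slow, 1 = fast

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)       # 5-fold cross-validated accuracy
    print("mean fast/slow accuracy:", scores.mean())

In a tempo-estimation context, such a fast/slow prediction could, for example, be used to select between candidate tempo octaves returned by a tracker, though the specific way the paper combines the classifier with tempo estimation is not reproduced here.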

Cite this paper

@inproceedings{Hockman2010FastVS,
  title     = {Fast vs Slow: Learning Tempo Octaves from User Data},
  author    = {Jason Hockman and Ichiro Fujinaga},
  booktitle = {ISMIR},
  year      = {2010}
}