An improved feature extraction and combination of multiple classifiers for query-by-humming

Abstract

In this paper, we propose new methods for feature extraction and soft majority voting to balance the efficiency and accuracy of music retrieval. In our work, the input is a humming sound (a sound wave), and Musical Instrument Digital Interface (MIDI) files serve as the reference songs in the database. A critical issue with humming sound is its variation in duration, sound, tempo, and key, along with noise interference from both the environment and the acquisition instruments. Beyond these problems, humming sound and MIDI lie in different domains, which makes them difficult to compare directly; to place them in the same domain, we convert both into the frequency domain. Our approach starts with pre-processing, using features for note segmentation of the humming sound. The process consists of four steps. First, the MIDI is already a sequence of pitches, while the pitch of the humming sound must be extracted by the Subharmonic-to-Harmonic Ratio (SHR). Subsequently, the extracted pitch is used to calculate all of the above attributes, and multiple classifiers are applied to classify multiple subsets of these features. Next, for each subset containing multiple attributes, Multi-Dimensional Dynamic Time Warping (MD-DTW) is used for similarity measurement. Finally, Nearest Neighbours (NN) and soft majority voting are used to obtain the retrieval results in the case of equal scores. The experiments show that, to achieve a 100% accuracy rate at an early top-n rank in retrieval, the appropriate feature set should consist of five classifiers.
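To illustrate the two core components named above, the following is a minimal sketch of an MD-DTW distance over multi-dimensional feature sequences and a simple soft majority vote over per-classifier similarity scores. It assumes a Euclidean local cost and a distance-to-vote normalisation of our own choosing; all function names and the voting formula are illustrative, not taken from the paper.

```python
import numpy as np

def md_dtw(a, b):
    """Multi-Dimensional DTW distance between two sequences of
    feature vectors a (n x d) and b (m x d).
    Local cost: Euclidean distance between frames (an assumption,
    not necessarily the cost used in the paper)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # standard DTW recurrence: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def soft_majority_vote(scores_per_classifier):
    """Combine each classifier's (song_id, distance) scores into one
    ranking: smaller distances contribute larger soft votes, and the
    song with the highest accumulated vote wins ties."""
    votes = {}
    for scores in scores_per_classifier:
        total = sum(d for _, d in scores) or 1.0
        for song_id, d in scores:
            # normalise so each classifier distributes roughly one unit of vote
            votes[song_id] = votes.get(song_id, 0.0) + (1.0 - d / total)
    return max(votes, key=votes.get)

# Usage: two classifiers both find song "A" closer to the query than "B".
query = [[60, 1], [62, 1], [64, 1]]          # e.g. (pitch, duration) frames
print(md_dtw(query, query))                   # identical sequences -> 0.0
scores = [[("A", 0.1), ("B", 0.9)], [("A", 0.2), ("B", 0.8)]]
print(soft_majority_vote(scores))             # "A" wins the soft vote
```

A full system would run one MD-DTW comparison per feature subset (one per classifier) against every reference MIDI, then feed those distance lists into the vote.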

Cite this paper

@article{Phiwma2014AnIF,
  title   = {An improved feature extraction and combination of multiple classifiers for query-by-humming},
  author  = {Nattha Phiwma and Parinya Sanguansat},
  journal = {Int. Arab J. Inf. Technol.},
  year    = {2014},
  volume  = {11},
  pages   = {103-110}
}