Classifying soundtracks with audio texture features

Abstract

Sound textures may be defined as sounds whose character depends as much on statistical properties as on the specific details of each individually perceived event. Recent work has devised a set of statistics that, when synthetically imposed, allow listeners to identify a wide range of environmental sound textures. In this work, we investigate using these statistics for automatic classification of a set of environmental sound classes defined over a collection of web videos depicting “multimedia events”. We show that the texture statistics perform as well as our best conventional features (based on MFCC covariance). We further examine the relative contributions of the different statistics, showing the importance of modulation spectra and cross-band envelope correlations.
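
To make the feature families concrete, below is a minimal Python sketch in the spirit of the statistics named in the abstract: subband envelope moments, cross-band envelope correlations, and a crude modulation spectrum. The band count, filter design, and modulation band edges are illustrative assumptions, not the pipeline actually used in the paper.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def subband_envelopes(x, sr, n_bands=8, fmin=100.0):
    """Split x into log-spaced bands and return their Hilbert envelopes.
    Band count and filter order are illustrative choices."""
    edges = np.geomspace(fmin, 0.45 * sr, n_bands + 1)
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        envs.append(np.abs(hilbert(sosfiltfilt(sos, x))))
    return np.stack(envs)  # shape: (n_bands, n_samples)

def texture_stats(x, sr):
    """Concatenate texture-style statistics for one audio clip."""
    envs = subband_envelopes(x, sr)
    feats = []
    # Envelope marginal moments per band: mean, std, skew-like statistic.
    mu = envs.mean(axis=1)
    sd = envs.std(axis=1) + 1e-12
    z = (envs - mu[:, None]) / sd[:, None]
    feats += [mu, sd, (z ** 3).mean(axis=1)]
    # Cross-band envelope correlations (upper triangle of the matrix).
    C = np.corrcoef(envs)
    iu = np.triu_indices_from(C, k=1)
    feats.append(C[iu])
    # Crude modulation spectrum: envelope power summed over a few
    # modulation-frequency ranges (band edges are assumptions).
    E = np.abs(np.fft.rfft(envs - mu[:, None], axis=1)) ** 2
    freqs = np.fft.rfftfreq(envs.shape[1], d=1.0 / sr)
    for lo, hi in [(0.5, 2), (2, 8), (8, 32), (32, 128)]:
        sel = (freqs >= lo) & (freqs < hi)
        feats.append(E[:, sel].sum(axis=1))
    return np.concatenate(feats)

# Usage on a stand-in clip (two seconds of noise at 16 kHz):
sr = 16000
x = np.random.randn(sr * 2)
print(texture_stats(x, sr).shape)

In a classification setting such as the one described above, a vector like this would be computed per soundtrack and fed to any standard classifier, with the MFCC-covariance features serving as the conventional baseline for comparison.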

DOI: 10.1109/ICASSP.2011.5947699

Cite this paper

@article{Ellis2011ClassifyingSW,
  title   = {Classifying soundtracks with audio texture features},
  author  = {Daniel P. W. Ellis and Xiaohong Zeng and Josh H. McDermott},
  journal = {2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year    = {2011},
  pages   = {5880-5883}
}