Automatic large-scale classification of bird sounds is strongly improved by unsupervised feature learning

Abstract

Automatic species classification of birds from their sounds is a computational tool of increasing importance in ecology, conservation monitoring and vocal communication studies. To make classification useful in practice, it is crucial to improve its accuracy while ensuring that it can run at big data scales. Many approaches use acoustic measures based on spectrogram-type data, such as the Mel-frequency cepstral coefficient (MFCC) features, which represent a manually-designed summary of spectral information. However, recent work in machine learning has demonstrated that features learnt automatically from data can often outperform manually-designed feature transforms. Feature learning can be performed at large scale and "unsupervised", meaning it requires no manual data labelling, yet it can improve performance on "supervised" tasks such as classification. In this work we introduce a technique for feature learning from large volumes of bird sound recordings, inspired by techniques that have proven useful in other domains. We experimentally compare twelve different feature representations derived from the Mel spectrum (of which six use this technique), using four large and diverse databases of bird vocalisations, classified with a random forest classifier. We demonstrate that in our classification tasks, MFCCs can often lead to worse performance than the raw Mel spectral data from which they are derived. Conversely, we demonstrate that unsupervised feature learning provides a substantial boost over MFCCs and Mel spectra without adding computational complexity after the model has been trained. The boost is particularly notable for single-label classification tasks at large scale. The spectro-temporal activations learned through our procedure resemble spectro-temporal receptive fields calculated from avian primary auditory forebrain. However, for one of our datasets, which contains substantial audio data but few annotations, increased performance is not discernible. We study the interaction between dataset characteristics and choice of feature representation through further empirical analysis.

DOI: 10.7717/peerj.488
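
The pipeline outlined in the abstract (Mel-spectral features or MFCCs derived from them, an unsupervised feature-learning stage, and a random forest classifier) can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes librosa and scikit-learn as tooling, it uses plain mini-batch k-means over stacked Mel frames as a stand-in for the feature-learning procedure rather than the authors' exact method, and all parameter values (40 Mel bands, 4-frame patches, 500 centroids, 200 trees) are placeholders rather than the paper's settings.

```python
# Minimal sketch (not the authors' exact pipeline): unsupervised feature
# learning on Mel-spectrogram patches, followed by random forest
# classification. Library choices (librosa, scikit-learn) and all
# parameter values are illustrative assumptions.

import numpy as np
import librosa
from sklearn.cluster import MiniBatchKMeans
from sklearn.ensemble import RandomForestClassifier


def mel_frames(path, n_mels=40):
    """Log-magnitude Mel spectrogram, one column per analysis frame."""
    y, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)


def patches(mel, width=4):
    """Stack `width` consecutive frames into spectro-temporal patches."""
    cols = [mel[:, i:i + width].ravel()
            for i in range(mel.shape[1] - width + 1)]
    return np.array(cols)


def learn_dictionary(all_patches, k=500, seed=0):
    """Unsupervised step: k-means centroids act as learned features.
    No labels are used here."""
    km = MiniBatchKMeans(n_clusters=k, random_state=seed)
    km.fit(all_patches)
    return km.cluster_centers_


def encode(mel, centroids, width=4):
    """Project patches onto the centroids, rectify, then max-pool over
    time to obtain one fixed-length feature vector per recording."""
    p = patches(mel, width)
    acts = np.maximum(p @ centroids.T, 0.0)
    return acts.max(axis=0)


# Usage sketch with hypothetical lists `train_paths`, `train_labels`,
# `test_paths` (placeholders, not real data):
# mels = [mel_frames(p) for p in train_paths]
# D = learn_dictionary(np.vstack([patches(m) for m in mels]))  # no labels
# X_train = np.array([encode(m, D) for m in mels])
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, train_labels)
# preds = clf.predict(np.array([encode(mel_frames(p), D) for p in test_paths]))
```

The point the sketch illustrates is the one made in the abstract: the dictionary is fitted without any labels, and only the fixed-length pooled encodings are passed to the supervised classifier, so once training is complete the learned features cost little more per recording than a matrix projection and a pooling step.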

