DWT and LPC based feature extraction methods for isolated word recognition

Abstract

In this article, new feature extraction methods that utilize wavelet decomposition and reduced-order linear predictive coding (LPC) coefficients are proposed for speech recognition. The coefficients are derived from speech frames decomposed using the discrete wavelet transform (DWT). LPC coefficients derived from the subband decomposition of a speech frame (abbreviated as WLPC) provide a better representation than modeling the frame directly. The WLPC coefficients are further normalized in the cepstrum domain to obtain a new set of features, denoted wavelet subband cepstral mean normalized features. The proposed approaches provide effective (better recognition rate), efficient (reduced feature vector dimension), and noise-robust features. The performance of these techniques has been evaluated on the TI-46 isolated word database and on our own Marathi digits database in a white noise environment using the continuous-density hidden Markov model. The experimental results also show the superiority of the proposed techniques over conventional methods such as linear predictive cepstral coefficients, Mel-frequency cepstral coefficients, spectral subtraction, and cepstral mean normalization in the presence of additive white Gaussian noise.
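To make the WLPC idea concrete, the sketch below illustrates one plausible realization of the pipeline described above: decompose a windowed speech frame with the DWT, fit low-order LPC models to each subband, convert the LPC coefficients to cepstral coefficients, and apply cepstral mean normalization across the frames of an utterance. This is a minimal illustrative sketch, not the authors' exact method; the wavelet family (db4), decomposition level, LPC order, number of cepstral coefficients, and the choice to use every subband are assumptions made here for illustration.

```python
import numpy as np
import pywt


def lpc(signal, order):
    """LPC via the autocorrelation method and Levinson-Durbin recursion.

    Returns the coefficients a_1..a_p of A(z) = 1 + a_1 z^-1 + ... + a_p z^-p.
    """
    n = len(signal)
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-10  # small floor so an all-zero frame does not divide by zero
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a_new = a.copy()
        a_new[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)
    return a[1:]


def lpcc(pred, n_ceps):
    """LPC-to-cepstrum recursion (predictor-coefficient sign convention)."""
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = pred[n - 1] if n <= len(pred) else 0.0
        for k in range(1, n):
            if n - k - 1 < len(pred):
                acc += (k / n) * c[k - 1] * pred[n - k - 1]
        c[n - 1] = acc
    return c


def wlpc_features(frame, wavelet="db4", level=3, order=4, n_ceps=6):
    """WLPC-style features: LPC-cepstral coefficients computed per DWT subband."""
    subbands = pywt.wavedec(frame, wavelet, level=level)
    feats = [lpcc(-lpc(band, order), n_ceps) for band in subbands]
    return np.concatenate(feats)


def cmn(feature_matrix):
    """Cepstral mean normalization: subtract each coefficient's per-utterance mean."""
    return feature_matrix - feature_matrix.mean(axis=0, keepdims=True)


# Example with a stand-in frame (a real system would use windowed speech frames):
rng = np.random.default_rng(0)
frame = rng.standard_normal(256)
print(wlpc_features(frame).shape)  # (24,) = 4 subbands x 6 cepstral coefficients
```

With these (assumed) settings the feature vector has 24 dimensions per frame; stacking the vectors of one utterance and passing the matrix through cmn yields the mean-normalized variant described in the abstract.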

DOI: 10.1186/1687-4722-2012-7

Cite this paper

@article{Nehe2012DWTAL,
  title   = {DWT and LPC based feature extraction methods for isolated word recognition},
  author  = {Navnath S. Nehe and Raghunath S. Holambe},
  journal = {EURASIP J. Audio, Speech and Music Processing},
  year    = {2012},
  volume  = {2012},
  pages   = {7}
}