Feature Extraction Based on Pitch-Synchronous Averaging for Robust Speech Recognition

Abstract

In this paper, we propose two estimators for the autocorrelation sequence of a periodic signal in additive noise. Both estimators are formulated in terms of tables containing all possible products of sample pairs in a speech signal frame. The first estimator is based on pitch-synchronous averaging. This estimator is statistically analyzed, and we show that the signal-to-noise ratio (SNR) can be increased by up to a factor equal to the number of available periods. The second estimator is similar to the former but avoids the use of those sample products most likely to be affected by noise. We prove that, under certain conditions, this estimator can remove the effect of additive noise in a statistical sense. Both estimators are employed to extract mel frequency cepstral coefficients (MFCCs) as features for robust speech recognition. Although these estimators are initially conceived for voiced speech frames, we extend their application to unvoiced sounds in order to obtain a coherent feature extractor. The experimental results show the superiority of the proposed approach over other MFCC-based front-ends such as the higher-lag autocorrelation spectrum estimation (HASE), which also employs the idea of avoiding those autocorrelation coefficients most likely to be affected by noise.
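The core idea of the first estimator (averaging the available pitch periods before computing the autocorrelation, so that zero-mean additive noise is attenuated by roughly the number of periods) can be illustrated with a minimal sketch. The function below is an illustration of pitch-synchronous averaging in general, not the paper's exact table-based formulation; it assumes the pitch period is already known in samples and that the frame contains an integer number of periods.

```python
import numpy as np

def psa_autocorrelation(x, period, max_lag):
    """Sketch of a pitch-synchronous-averaging (PSA) autocorrelation estimate.

    Assumptions (hypothetical, not the paper's exact method):
    - `period` is the known pitch period in samples;
    - `x` contains at least one full period.

    Averaging the periods reduces the variance of zero-mean additive
    noise by ~1/n_periods; a standard biased autocorrelation is then
    computed on the period-averaged signal.
    """
    n_periods = len(x) // period
    # Stack the available periods row-wise and average them.
    avg = x[:n_periods * period].reshape(n_periods, period).mean(axis=0)
    # Tile the averaged period back to the original frame length and
    # compute the biased autocorrelation up to max_lag.
    y = np.tile(avg, n_periods)
    n = len(y)
    return np.array([np.dot(y[:n - k], y[k:]) / n
                     for k in range(max_lag + 1)])
```

For a clean sinusoid of unit amplitude, the lag-0 autocorrelation is about 0.5; adding white noise of variance 0.25 inflates a direct estimate toward 0.75, while the PSA estimate stays close to 0.5 because the averaged noise variance shrinks with the number of periods.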

DOI: 10.1109/TASL.2010.2053846


Cite this paper

@article{MoralesCordovilla2011FeatureEB,
  title   = {Feature Extraction Based on Pitch-Synchronous Averaging for Robust Speech Recognition},
  author  = {Juan Andres Morales-Cordovilla and Antonio M. Peinado and Victoria E. S{\'a}nchez and Jos{\'e} A. Gonz{\'a}lez},
  journal = {IEEE Transactions on Audio, Speech, and Language Processing},
  year    = {2011},
  volume  = {19},
  pages   = {640-651}
}