Novel Variations of Group Sparse Regularization Techniques With Applications to Noise Robust Automatic Speech Recognition

Abstract

This paper presents novel variations of group sparse regularization techniques. We extend the Sparse Group LASSO formulation with different learning techniques that better enforce sparsity within a group, and we demonstrate the effectiveness of the resulting algorithms for spectral denoising with applications to robust Automatic Speech Recognition (ASR). In particular, we show that a strategic selection of groupings achieves greater robustness in noisy speech recognition than state-of-the-art techniques such as the Fast Iterative Shrinkage Thresholding Algorithm (FISTA) implementation of the Sparse Group LASSO. Moreover, we demonstrate that group sparse regularization techniques can offer significant gains over efficient techniques like the Elastic Net. We also show that the proposed algorithms effectively exploit collinear dictionaries to cope with the inherently highly coherent nature of speech spectral segments. Experiments on the Aurora 2.0 continuous digit database and the Aurora 3.0 realistic noisy database demonstrate the performance improvement obtained with the proposed methods, whose execution time is comparable to FISTA, making our algorithms practical for a wide range of regularization problems.
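For context, the baseline Sparse Group LASSO penalty referenced above combines an elementwise l1 term (sparsity within groups) with a sum of group l2 norms (sparsity across groups). Below is a minimal sketch of its standard proximal operator, the building block applied at each iteration of FISTA-style solvers. This is a generic illustration assuming NumPy, not the paper's proposed variations; the function names and the toy data are ours.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_group_lasso_prox(x, groups, lam1, lam2):
    """Proximal operator of the standard Sparse Group LASSO penalty
        lam1 * ||x||_1  +  lam2 * sum_g ||x_g||_2,
    computed group by group. `groups` is a list of index arrays."""
    out = np.zeros_like(x)
    for g in groups:
        # Within-group sparsity: shrink each coefficient toward zero.
        z = soft_threshold(x[g], lam1)
        # Across-group sparsity: shrink (or zero out) the whole group.
        norm = np.linalg.norm(z)
        if norm > lam2:
            out[g] = (1.0 - lam2 / norm) * z
        # else: the entire group is set to zero.
    return out

# Tiny demo: two groups of three coefficients each.
x = np.array([3.0, -0.2, 1.5, 0.1, -0.05, 0.2])
groups = [np.arange(0, 3), np.arange(3, 6)]
w = sparse_group_lasso_prox(x, groups, lam1=0.1, lam2=0.5)
# The small second group is zeroed out entirely; the first survives shrunken.
```

In a FISTA loop, this operator would be applied after each gradient step on the data-fit term, which is what makes group-structured sparsity cheap to enforce per iteration.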

DOI: 10.1109/TASL.2011.2178596

Cite this paper

@article{Tan2012NovelVO,
  title={Novel Variations of Group Sparse Regularization Techniques With Applications to Noise Robust Automatic Speech Recognition},
  author={Qun Feng Tan and Shrikanth S. Narayanan},
  journal={IEEE Transactions on Audio, Speech, and Language Processing},
  year={2012},
  volume={20},
  pages={1337-1346}
}