Speech enhancement based on Deep Neural Networks with skip connections

Abstract

Speech enhancement under noisy conditions has long been an active research topic. In this paper, we propose a new Deep Neural Network (DNN) based architecture for speech enhancement. In contrast to the standard feed-forward network architecture, we add skip connections between the network inputs and outputs, indirectly forcing the DNN to learn an ideal ratio mask. We also show that performance can be further improved by stacking multiple such network blocks. Experimental results demonstrate that the proposed architecture achieves considerably better performance than an existing method in terms of three commonly used objective measures under two real noise conditions.
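The sketch below illustrates the general idea described in the abstract: feed-forward blocks with a skip connection from block input to block output, several of which can be stacked. It is only a minimal illustration, not the authors' implementation; the feature dimension (257), hidden size, number of blocks, activations, and the use of PyTorch are all assumptions, and the paper's exact skip-connection scheme (e.g., how the mask-like residual is applied) may differ.

```python
import torch
import torch.nn as nn

class SkipDNNBlock(nn.Module):
    """One feed-forward block whose output is added back to its input,
    so the hidden layers only need to model a residual, mask-like correction.
    Layer sizes and activations are illustrative, not taken from the paper."""
    def __init__(self, dim=257, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        # Skip connection from the block input to the block output
        return x + self.net(x)

class StackedSkipDNN(nn.Module):
    """Several skip-connected blocks stacked in sequence, as the abstract suggests."""
    def __init__(self, dim=257, hidden=1024, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [SkipDNNBlock(dim, hidden) for _ in range(num_blocks)]
        )

    def forward(self, noisy_features):
        h = noisy_features
        for block in self.blocks:
            h = block(h)
        return h  # enhanced spectral features

# Usage example: a batch of 8 frames with 257-dimensional spectral features
if __name__ == "__main__":
    model = StackedSkipDNN()
    noisy = torch.randn(8, 257)
    enhanced = model(noisy)
    print(enhanced.shape)  # torch.Size([8, 257])
```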

DOI: 10.1109/ICASSP.2017.7953221


Cite this paper

@article{Tu2017SpeechEB,
  title   = {Speech enhancement based on Deep Neural Networks with skip connections},
  author  = {Ming Tu and Xianxian Zhang},
  journal = {2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year    = {2017},
  pages   = {5565-5569}
}