Speech enhancement based on Deep Neural Networks with skip connections

Abstract

Speech enhancement under noisy conditions has long been an active research topic. In this paper, we propose a new Deep Neural Network (DNN) based architecture for speech enhancement. In contrast to a standard feed-forward network architecture, we add skip connections between the network inputs and outputs, indirectly forcing the DNN to learn an ideal ratio mask. We also show that performance can be further improved by stacking multiple such network blocks. Experimental results demonstrate that our proposed architecture achieves considerably better performance than an existing method in terms of three commonly used objective measures under two real noise conditions.
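The abstract does not spell out the exact block structure, but as one plausible reading, an input-to-output skip connection around a feed-forward block can be sketched in NumPy as an additive shortcut, so the learned layers only model a correction to the noisy input. All names, dimensions, and the additive form below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def ffn_block_with_skip(x, w1, b1, w2, b2):
    """One feed-forward block with an input-to-output skip connection.

    Computes y = x + f(x), where f is a two-layer ReLU network
    (an assumed structure for illustration). The skip path means
    f only has to learn a correction to the noisy input, rather
    than reproducing the clean target from scratch.
    """
    h = np.maximum(0.0, x @ w1 + b1)  # hidden layer with ReLU
    return x + (h @ w2 + b2)          # additive skip connection

# Toy example: hypothetical 257-dim spectral features, 512 hidden units.
rng = np.random.default_rng(0)
dim, hidden = 257, 512
x = rng.standard_normal((4, dim))            # batch of 4 frames
w1 = rng.standard_normal((dim, hidden)) * 0.01
b1 = np.zeros(hidden)
w2 = rng.standard_normal((hidden, dim)) * 0.01
b2 = np.zeros(dim)

y = ffn_block_with_skip(x, w1, b1, w2, b2)
assert y.shape == x.shape

# With zero weights the block reduces to the identity, i.e. the skip
# connection gives the network a "pass the input through" baseline.
y0 = ffn_block_with_skip(x, np.zeros((dim, hidden)), b1,
                         np.zeros((hidden, dim)), b2)
assert np.allclose(y0, x)
```

Stacking several such blocks, as the abstract suggests, would simply feed the output `y` of one block into the next.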

DOI: 10.1109/ICASSP.2017.7953221
