Raw Waveform-based Speech Enhancement by Fully Convolutional Networks

Abstract

This study proposes a fully convolutional network (FCN) model for raw waveform-based speech enhancement. The proposed system performs speech enhancement in an end-to-end (i.e., waveform-in and waveform-out) manner, which differs from most existing denoising methods that process only the magnitude spectrum (e.g., the log-power spectrum (LPS)). Because the fully connected layers involved in deep neural networks (DNNs) and convolutional neural networks (CNNs) may not accurately characterize the local information of speech signals, particularly the high-frequency components, we employ fully convolutional layers to model the waveform. More specifically, an FCN consists of convolutional layers only, so the local temporal structure of speech signals can be preserved efficiently and effectively with relatively few weights. Experimental results show that DNN- and CNN-based models have limited capability to restore the high-frequency components of waveforms, leading to decreased intelligibility of the enhanced speech. By contrast, the proposed FCN model not only effectively recovers the waveforms but also outperforms the LPS-based DNN baseline in terms of short-time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ). In addition, the number of model parameters in the FCN is only approximately 0.2% of that in both the DNN and the CNN.
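The architectural idea in the abstract (waveform-in, waveform-out, convolutional layers only, no fully connected layers) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, kernel width, and ReLU placement are assumptions chosen for clarity. The key property shown is that with "same" padding the output waveform has the same length as the input, regardless of that length, which is what lets a fully convolutional model operate directly on raw waveforms of arbitrary duration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, w, b):
    """1-D convolution (cross-correlation) with 'same' padding.
    x: (c_in, T), w: (c_out, c_in, K), b: (c_out,) -> (c_out, T)."""
    c_out, c_in, K = w.shape
    T = x.shape[1]
    y = np.zeros((c_out, T))
    for o in range(c_out):
        for i in range(c_in):
            # mode='same' keeps the time axis at length T
            y[o] += np.correlate(x[i], w[o, i], mode="same")
        y[o] += b[o]
    return y

def fcn_enhance(wave, layers):
    """Waveform-in, waveform-out: a stack of conv layers only,
    with no fully connected layer anywhere in the model."""
    h = wave[None, :]  # treat the raw waveform as a (1, T) signal
    for idx, (w, b) in enumerate(layers):
        h = conv1d_same(h, w, b)
        if idx < len(layers) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers (assumed)
    return h[0]  # final layer maps back to a single output channel

def make_layer(c_out, c_in, K=11):
    # random untrained weights; in practice these would be learned
    return 0.1 * rng.standard_normal((c_out, c_in, K)), np.zeros(c_out)

# toy model: 1 -> 4 -> 4 -> 1 channels, kernel width 11
layers = [make_layer(4, 1), make_layer(4, 4), make_layer(1, 4)]
noisy = rng.standard_normal(512)
enhanced = fcn_enhance(noisy, layers)
print(enhanced.shape)  # (512,): output length matches the input waveform
```

Note also the parameter-count argument from the abstract: the three kernels above hold only 4·1·11 + 4·4·11 + 1·4·11 = 264 weights, independent of the waveform length, whereas a fully connected layer mapping 512 samples to 512 samples would alone need 512·512 weights.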


Cite this paper

@article{Fu2017RawWS,
  title={Raw Waveform-based Speech Enhancement by Fully Convolutional Networks},
  author={Szu-Wei Fu and Yu Tsao and Xugang Lu and Hisashi Kawai},
  journal={CoRR},
  year={2017},
  volume={abs/1703.02205}
}