Applying a Speaker-Dependent Speech Compression Technique to Concatenative TTS Synthesizers

Abstract

This paper proposes a new speaker-dependent coding algorithm to efficiently compress a large speech database for corpus-based concatenative text-to-speech (TTS) engines while maintaining high fidelity. To achieve a high compression ratio and meet the fundamental requirements of concatenative TTS synthesizers, such as partial segment decoding and random access capability, we adopt a nonpredictive analysis-by-synthesis scheme for speaker-dependent parameter estimation and quantization. The spectral coefficients are quantized using a memoryless split vector quantization (VQ) approach that does not exploit inter-frame correlation. Since the excitation signals of a specific speaker show low intra-speaker variation, especially in voiced regions, the conventional adaptive codebook for pitch prediction is replaced by a speaker-dependent pitch-pulse codebook trained on a corpus of single-speaker speech signals. To further improve coding efficiency, the proposed coder flexibly combines nonpredictive and predictive methods, taking the structure of the TTS system into account. By applying the proposed algorithm to a Korean TTS system, we obtain quality comparable to the G.729 speech coder while satisfying all the requirements of the TTS system. The results are verified by both objective and subjective quality measurements. In addition, the decoding complexity of the proposed coder is around 55% lower than that of G.729 Annex A.
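The memoryless split VQ mentioned above partitions each spectral vector into sub-vectors and quantizes each one independently against its own codebook, with no dependence on previous frames; this is what makes every frame decodable in isolation (random access). The sketch below illustrates the idea with toy two-entry-per-split codebooks and a 4-dimensional vector; the codebook values and dimensions are illustrative assumptions, not the ones used in the paper.

```python
import numpy as np

def split_vq_quantize(vec, codebooks):
    """Memoryless split VQ: partition `vec` into sub-vectors and pick the
    nearest codeword in each sub-codebook independently (no frame memory,
    so any frame can be decoded on its own)."""
    indices, offset = [], 0
    for cb in codebooks:
        dim = cb.shape[1]
        sub = vec[offset:offset + dim]
        # Nearest neighbour in squared Euclidean distance.
        idx = int(np.argmin(((cb - sub) ** 2).sum(axis=1)))
        indices.append(idx)
        offset += dim
    return indices

def split_vq_decode(indices, codebooks):
    """Reconstruct the vector by concatenating the chosen codewords."""
    return np.concatenate([cb[i] for i, cb in zip(indices, codebooks)])

# Toy example: a 4-dim "spectral" vector split into two 2-dim parts,
# each with its own 3-entry codebook (values are illustrative only).
cb1 = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
cb2 = np.array([[0.5, 0.5], [1.5, 1.5], [2.5, 2.5]])
vec = np.array([0.9, 1.1, 2.4, 2.6])
idx = split_vq_quantize(vec, [cb1, cb2])
rec = split_vq_decode(idx, [cb1, cb2])
print(idx)   # [1, 2]
print(rec)   # [1.  1.  2.5 2.5]
```

Splitting the vector trades a small loss in coding efficiency (cross-split correlations are ignored) for much smaller codebooks and search cost than a single full-dimension VQ of the same bit budget.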

DOI: 10.1109/TASL.2006.876762

12 Figures and Tables

Cite this paper

@article{Lee2007ApplyingAS,
  title   = {Applying a Speaker-Dependent Speech Compression Technique to Concatenative TTS Synthesizers},
  author  = {Chin-Hui Lee and S.-K. Jung and H.-G. Kang},
  journal = {IEEE Transactions on Audio, Speech, and Language Processing},
  year    = {2007},
  volume  = {15},
  pages   = {632-640}
}