Creating synthetic voices for children by adapting adult average voice using stacked transformations and VTLN

Abstract

This paper describes experiments in creating personalised children's voices for HMM-based synthesis by adapting either an adult or a child average voice. The adult average voice is trained from a large adult speech database, whereas the child average voice is trained on a small database of children's speech. We present the idea of using stacked transformations to create synthetic child voices: the child average voice is first derived from the adult average voice through speaker adaptation using the pooled speech data from multiple children, and child-specific speaker adaptation is then added on top of it. Vocal tract length normalisation (VTLN) is also applied to synthesis to test whether it aids speaker adaptation when only a small amount of adaptation data is available. The listening test results show that the stacked transformations significantly improve speaker adaptation for small amounts of data, but the additional benefit provided by VTLN is not yet clear.
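The two core ideas of the abstract can be illustrated with a small sketch. Stacked transformations amount to composing affine feature transforms (pooled child adaptation applied on top of the adult average voice, then a speaker-specific transform on top of that), and VTLN is a frequency-axis warping controlled by a single factor. The sketch below is illustrative only and is not the paper's implementation: the function names, the piecewise-linear warp shape, and the knee position `f_high` are assumptions for demonstration.

```python
import numpy as np


def compose_affine(A1, b1, A2, b2):
    """Compose two affine transforms applied in sequence,
    y = A2 (A1 x + b1) + b2, into a single transform y = A x + b.
    This mirrors how stacked adaptation transforms collapse into one."""
    A = A2 @ A1
    b = A2 @ b1 + b2
    return A, b


def vtln_warp(f, alpha, f_high=0.9, f_nyq=8000.0):
    """Illustrative piecewise-linear VTLN warp (assumed form, not the
    paper's): scale frequency f by warp factor alpha up to a knee at
    f_high * f_nyq, then map the remainder linearly so that the Nyquist
    frequency stays fixed. The direction of alpha (>1 vs <1 for shorter
    vocal tracts) is a convention choice."""
    knee = f_high * f_nyq
    if f <= knee:
        return alpha * f
    slope = (f_nyq - alpha * knee) / (f_nyq - knee)
    return alpha * knee + slope * (f - knee)


if __name__ == "__main__":
    # Stacking: applying the composed transform equals applying the
    # two adaptation transforms one after the other.
    A1, b1 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([1.0, 0.0])
    A2, b2 = np.array([[1.0, 1.0], [0.0, 2.0]]), np.array([0.0, 1.0])
    x = np.array([3.0, -2.0])
    A, b = compose_affine(A1, b1, A2, b2)
    print(np.allclose(A @ x + b, A2 @ (A1 @ x + b1) + b2))  # True
    # VTLN: endpoints are preserved.
    print(round(vtln_warp(8000.0, 1.1)))  # 8000
```

In practice the adaptation transforms in HMM-based synthesis act on model means and variances (e.g. CMLLR/CSMAPLR) rather than raw features, but the composition principle is the same.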

DOI: 10.1109/ICASSP.2012.6288918


Cite this paper

@article{Karhila2012CreatingSV,
  title   = {Creating synthetic voices for children by adapting adult average voice using stacked transformations and VTLN},
  author  = {Reima Karhila and Doddipatla Rama Sanand and Mikko Kurimo and Peter Smit},
  journal = {2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year    = {2012},
  pages   = {4501-4504}
}