Expressive speech synthesis in MARY TTS using audiobook data and EmotionML

Abstract

This paper describes a framework for expressive speech synthesis based on MARY TTS and the Emotion Markup Language (EmotionML). We describe the creation of expressive unit selection and HMM-based voices from audiobook data, which is labelled and split into voice styles by principal component analysis (PCA) of acoustic features extracted from segmented sentences. We introduce the implementation of EmotionML in MARY TTS and explain how it is used to represent and control expressivity in terms of discrete emotions or emotion dimensions. Preliminary results on the perception of the different voice styles are presented.
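
As a rough illustration of the kind of input this enables (a minimal sketch along the lines of MARY TTS's EmotionML input mode; the vocabulary URIs, carrier text and values below are our own examples, not taken from the paper, and details such as where the carrier text is placed should be checked against the MARY TTS documentation), an EmotionML document can request a discrete emotion category for one stretch of text and continuous emotion dimensions for another:

    <emotionml version="1.0" xmlns="http://www.w3.org/2009/10/emotionml"
               category-set="http://www.w3.org/TR/emotion-voc/xml#everyday-categories">
      <!-- discrete emotion: render this sentence in a happy voice style -->
      <emotion>
        <category name="happy"/>
        What a wonderful surprise to see you here!
      </emotion>
      <!-- emotion dimensions: high arousal, mildly positive, on a 0..1 scale -->
      <emotion dimension-set="http://www.w3.org/TR/emotion-voc/xml#pad-dimensions">
        <dimension name="arousal" value="0.8"/>
        <dimension name="pleasure" value="0.6"/>
        That certainly was not what I expected.
      </emotion>
    </emotionml>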

Cite this paper

@inproceedings{Charfuelan2013ExpressiveSS,
  title     = {Expressive speech synthesis in MARY TTS using audiobook data and EmotionML},
  author    = {Marcela Charfuelan and Ingmar Steiner},
  booktitle = {INTERSPEECH},
  year      = {2013}
}