This paper proposes three corpora of emotional speech in Japanese that maximize the expression of each emotion (expressing joy, anger, and sadness) for use with CHATR, the concatenative speech synthesis system being developed at ATR. A perceptual experiment was conducted using the synthesized speech generated from each emotion corpus, and the results proved …
The authors have proposed a method of generating synthetic speech with emotion by creating three corpora of emotional speech for use with CHATR [1][2], the concatenative speech synthesis system developed at ATR [3][4]. The corpora express joy, anger, and sadness. For the previous trial, the speech corpora were made with a female voice. Having added speech corpora …
We propose a new approach to synthesizing emotional speech with a corpus-based concatenative speech synthesis system (ATR CHATR) using corpora of emotional speech. In this study, neither emotion-dependent prosody prediction nor signal processing per se is performed for emotional speech. Instead, a large speech corpus is created per emotion to …
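The approach described above selects synthesis units from a corpus recorded entirely in the target emotion, rather than modifying prosody or applying signal processing. The sketch below is a minimal, hypothetical illustration of that idea; the data structures, cost function, and names are assumptions for illustration and are not the CHATR implementation.

```python
# Minimal sketch of per-emotion unit selection for concatenative synthesis.
# Hypothetical structures only; NOT the CHATR implementation.

from dataclasses import dataclass

@dataclass
class Unit:
    phoneme: str      # phoneme label of the stored speech unit
    f0: float         # mean fundamental frequency (Hz)
    duration: float   # duration (s)
    waveform_id: int  # index into the recorded waveform store

# One corpus per emotion: units are drawn only from the corpus that
# matches the requested emotion, so no emotion-specific signal
# processing is applied to the selected units.
corpora: dict[str, list[Unit]] = {
    "joy": [],
    "anger": [],
    "sadness": [],
}

def select_units(phonemes: list[str], emotion: str) -> list[Unit]:
    """For each target phoneme, pick the candidate from the
    emotion-specific corpus with the smallest F0 mismatch to the
    previously selected unit (a toy stand-in for a join cost)."""
    corpus = corpora[emotion]
    selected: list[Unit] = []
    for ph in phonemes:
        candidates = [u for u in corpus if u.phoneme == ph]
        if not candidates:
            continue  # back-off / substitution strategies omitted
        if selected:
            prev = selected[-1]
            best = min(candidates, key=lambda u: abs(u.f0 - prev.f0))
        else:
            best = candidates[0]
        selected.append(best)
    return selected
```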
This paper reports on our research on designing speech corpora in Japanese for a concatenative speech synthesis system that is to be used for a specific purpose. For this work, the purpose was set to assist communication for non-vocal people. Four kinds of source databases for synthesis were developed by combining different speech corpora created from read …
This paper outlines our approach to describing multivariate environmental information, such as weather, as it might be characterized by humans using connotations and delicate nuances. The purpose of this research is to achieve smooth human-machine spoken dialogue. The key feature of our approach is the use of a vector-based method, a widely used technique in …
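The abstract does not specify the paper's feature set or similarity measure, but a vector-based method of this kind can be pictured as mapping both measured conditions and nuanced descriptions into a shared vector space and choosing the closest description. The sketch below is a hypothetical illustration under that assumption; the axes, scaling, and example phrases are invented for illustration.

```python
# Hypothetical sketch of vector-based matching between nuanced weather
# descriptions and measured conditions; not the paper's actual method.

import math

# Each description carries connotations encoded over the same axes as
# the measurements: (temperature, humidity, wind, cloud cover), 0-1 scaled.
descriptions = {
    "a crisp, clear morning": (0.3, 0.3, 0.2, 0.1),
    "muggy and overcast":     (0.7, 0.9, 0.1, 0.9),
    "blustery with a chill":  (0.2, 0.4, 0.9, 0.6),
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def describe(measured):
    """Return the description whose connotation vector is closest to the
    measured condition vector."""
    return max(descriptions, key=lambda d: cosine(descriptions[d], measured))

print(describe((0.65, 0.85, 0.15, 0.8)))  # -> "muggy and overcast"
```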
This paper reports on the development of an English speech synthesis system for a Japanese amyotrophic lateral sclerosis patient as part of the project of developing a bilingual communication aid for this patient. The patient had a tracheotomy three years ago and anticipates the possibility of losing his phonatory function. His English speech database for …
This paper reports on the development of Chatako-AID, a communication aid for non-vocal people using corpus-based concatenative speech synthesis, built by creating a speech corpus especially designed for such use. The concept of Chatako-AID, synthesis in the user's own voice making use of precomposed texts, is highly appreciated by the target user. This …