Tsubasa Shinozaki

Despite successes, there are still significant limitations to speech recognition performance, particularly for conversational speech and/or for speech with significant acoustic degradations from noise or reverberation. For this reason, authors have proposed methods that incorporate different (and larger) analysis windows, which are described in this…
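As a rough illustration of what a larger analysis window means in practice, the sketch below frames a waveform with a configurable window length before per-frame analysis. It is not any specific proposed method; the window and hop sizes are illustrative assumptions.

```python
"""Minimal sketch: slicing a waveform into analysis windows of different sizes."""
import numpy as np

def frame_signal(signal, sample_rate, window_ms=25.0, hop_ms=10.0):
    """Slice a 1-D waveform into overlapping analysis windows."""
    win = int(sample_rate * window_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    n_frames = max(0, 1 + (len(signal) - win) // hop)
    frames = np.stack([signal[i * hop : i * hop + win] for i in range(n_frames)])
    return frames * np.hamming(win)  # taper each window before analysis

if __name__ == "__main__":
    sr = 16000
    x = np.random.randn(sr)  # one second of noise as a stand-in for speech
    # Conventional 25 ms windows versus a much larger 100 ms analysis window.
    for window_ms in (25.0, 100.0):
        frames = frame_signal(x, sr, window_ms=window_ms)
        print(window_ms, "ms ->", frames.shape)
```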
This paper proposes an unsupervised, batch-type, class-based language model adaptation method for spontaneous speech recognition. The word classes are automatically determined by maximizing the average mutual information between the classes using a training set. A class-based language model is built based on recognition hypotheses obtained using a general…
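The class induction step can be illustrated with a minimal sketch of Brown-style exchange clustering, which moves each word to the class that maximizes the average mutual information (AMI) between class bigrams. This is a simplified, assumed version, not the paper's implementation; the toy corpus, initialization, and greedy loop are illustrative.

```python
"""Minimal sketch: word-class induction by maximizing average mutual information."""
import math
from collections import Counter

def average_mutual_information(bigrams, word2class):
    """AMI over class bigrams: sum_{c1,c2} p(c1,c2) * log(p(c1,c2) / (p(c1) p(c2)))."""
    class_bigrams = Counter()
    for (w1, w2), n in bigrams.items():
        class_bigrams[(word2class[w1], word2class[w2])] += n
    total = sum(class_bigrams.values())
    left, right = Counter(), Counter()
    for (c1, c2), n in class_bigrams.items():
        left[c1] += n
        right[c2] += n
    ami = 0.0
    for (c1, c2), n in class_bigrams.items():
        # p(c1,c2) / (p(c1) p(c2)) == n * total / (left[c1] * right[c2])
        ami += (n / total) * math.log(n * total / (left[c1] * right[c2]))
    return ami

def exchange_clustering(sentences, num_classes, iterations=5):
    """Greedy exchange: move each word to the class that maximizes AMI."""
    bigrams, vocab = Counter(), set()
    for sent in sentences:
        vocab.update(sent)
        bigrams.update(zip(sent, sent[1:]))
    # Arbitrary round-robin initialization; the actual method may differ.
    word2class = {w: i % num_classes for i, w in enumerate(sorted(vocab))}
    for _ in range(iterations):
        for w in vocab:
            best_c, best_ami = word2class[w], -float("inf")
            for c in range(num_classes):
                word2class[w] = c
                ami = average_mutual_information(bigrams, word2class)
                if ami > best_ami:
                    best_c, best_ami = c, ami
            word2class[w] = best_c
    return word2class

if __name__ == "__main__":
    toy = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
    print(exchange_clustering(toy, num_classes=2))
```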
This paper proposes a Computer Assisted Instruction (CAI) system that teaches students how to write Japanese characters. The most important feature of the system is the usage of synthesized speech to interact with users. The CAI system has a video display tablet interface. A user traces a pattern of a character using the tablet pen, and simultaneously his…
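A toy sketch of such a trace-and-feedback loop might look as follows. It is not the CAI system's implementation; the deviation measure, threshold, and feedback messages are assumptions chosen only to show the idea of comparing the pen trajectory with the displayed pattern and replying with synthesized speech.

```python
"""Toy sketch: compare a traced stroke with a reference and pick spoken feedback."""
import math

def mean_deviation(traced, reference):
    """Average point-wise distance between traced and reference stroke samples."""
    dists = [math.dist(p, q) for p, q in zip(traced, reference)]
    return sum(dists) / len(dists)

def feedback_message(traced, reference, threshold=5.0):
    """Pick the text that would be sent to the speech synthesizer."""
    if mean_deviation(traced, reference) <= threshold:
        return "Well done. The stroke follows the pattern closely."
    return "Try again. Keep the pen closer to the displayed stroke."

if __name__ == "__main__":
    reference = [(0, 0), (10, 0), (20, 0)]  # a horizontal reference stroke
    traced = [(0, 1), (10, 2), (20, 1)]     # the user's pen samples
    print(feedback_message(traced, reference))
```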
To improve the performance of call-reason analysis at contact centers, we introduce a novel method to extract call-reason segments from dialogs. It is based on the following two characteristics of contact center conversations: 1) customers state their requests at the beginning of the calls, and 2) agents tend to use typical phrases at the end of the call-reason…
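A rough sketch of how these two characteristics could be turned into a segment extractor is given below. It is not the paper's method; the list of typical agent phrases and the dialog format are illustrative assumptions.

```python
"""Rough sketch: cut the call-reason segment at the first typical agent phrase."""
TYPICAL_CLOSING_PHRASES = [
    "let me check",           # assumed examples of agent phrases that tend
    "i will look into that",  # to close the customer's request statement
    "thank you for that information",
]

def extract_call_reason(dialog):
    """dialog: list of (speaker, utterance) pairs in time order."""
    segment = []
    for speaker, utterance in dialog:
        segment.append((speaker, utterance))
        if speaker == "agent" and any(
            phrase in utterance.lower() for phrase in TYPICAL_CLOSING_PHRASES
        ):
            break  # characteristic 2: the agent's typical phrase ends the segment
    return segment  # characteristic 1: the segment starts at the call opening

if __name__ == "__main__":
    dialog = [
        ("agent", "Thank you for calling, how can I help you?"),
        ("customer", "My internet connection keeps dropping every evening."),
        ("agent", "I see, let me check your line status."),
        ("agent", "The line looks fine, let's try restarting the router."),
    ]
    for speaker, text in extract_call_reason(dialog):
        print(f"{speaker}: {text}")
```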
We have been developing a series of Sesign (speech design) tools: TTS systems with the special function of manipulating prosodic parameters via a GUI (Graphical User Interface). All are intended to help the user create speech messages in a trial-and-error manner. This paper reports the following three advances in Sesign. (1) To extend the scope of Sesign,…
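The kind of prosodic manipulation such a GUI exposes can be sketched as scaling and shifting an F0 contour and stretching phone durations before resynthesis. This is an assumed illustration, not Sesign's actual API; the data structures and parameter names are made up.

```python
"""Minimal sketch: edit prosodic parameters (F0 and duration) of an utterance."""
from dataclasses import dataclass, replace
from typing import List

@dataclass
class Phone:
    label: str
    duration_ms: float
    f0_hz: List[float]  # F0 samples across the phone (0.0 = unvoiced)

def scale_prosody(phones, f0_scale=1.0, f0_shift_hz=0.0, duration_scale=1.0):
    """Return a copy of the phone sequence with modified prosodic parameters."""
    edited = []
    for p in phones:
        new_f0 = [f * f0_scale + f0_shift_hz if f > 0 else 0.0 for f in p.f0_hz]
        edited.append(replace(p, duration_ms=p.duration_ms * duration_scale, f0_hz=new_f0))
    return edited

if __name__ == "__main__":
    utterance = [
        Phone("k", 60, [0.0, 0.0]),
        Phone("o", 120, [180.0, 185.0, 190.0]),
    ]
    # Raise pitch by 10% and slow down by 20%, as a user might via GUI sliders.
    for p in scale_prosody(utterance, f0_scale=1.1, duration_scale=1.2):
        print(p.label, round(p.duration_ms), [round(f, 1) for f in p.f0_hz])
```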