Shinichi Homma

This paper describes two new methods, online speech detection and dual-gender speech recognition, for captioning broadcast news. The proposed online speech detection performs dual-gender phoneme recognition and detects a start-point and an end-point based on the ratio between the cumulative phoneme likelihood and the cumulative non-speech likelihood with a …
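As a rough illustration of the start-point rule described above, the sketch below compares cumulative log-likelihoods frame by frame; the per-frame scores, the threshold value, and the function name are illustrative assumptions, not the paper's actual formulation.

# Minimal sketch of likelihood-ratio-based start-point detection (illustrative only).
# Assumes per-frame log-likelihoods from a phoneme recognizer and a non-speech model.
def detect_start_point(phoneme_loglik, nonspeech_loglik, threshold=5.0):
    """Return the first frame where the cumulative phoneme log-likelihood exceeds
    the cumulative non-speech log-likelihood by `threshold` (hypothetical value)."""
    cum_phoneme = 0.0
    cum_nonspeech = 0.0
    for t, (lp, ln) in enumerate(zip(phoneme_loglik, nonspeech_loglik)):
        cum_phoneme += lp
        cum_nonspeech += ln
        # Log-domain ratio of the two cumulative likelihoods.
        if cum_phoneme - cum_nonspeech > threshold:
            return t  # start-point detected
    return None  # no speech detected in this segment

End-point detection could be handled symmetrically with the ratio running in the opposite direction.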
This paper describes a novel method of rescoring that reflects tendencies of errors in word hypotheses in speech recognition for transcribing broadcast news, including ill-trained spontaneous speech. The proposed rescoring assigns penalties to sentence hypotheses according to the recognition error tendencies in the training lattices themselves using a set …
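A minimal sketch of penalty-based rescoring in the spirit described above, assuming a precomputed table of per-word error rates derived from training lattices; the table, the penalty weight, and the function name are hypothetical, not the paper's formulation.

# Hedged sketch: penalize hypotheses that contain words which were frequently
# misrecognized in the training data (error_rate table and weight are hypothetical).
def rescore(hypotheses, error_rate, weight=2.0):
    """hypotheses: list of (score, words); error_rate: dict word -> empirical error rate."""
    rescored = []
    for score, words in hypotheses:
        penalty = sum(error_rate.get(w, 0.0) for w in words)
        rescored.append((score - weight * penalty, words))
    return max(rescored, key=lambda x: x[0])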
The extraction of acoustic features for robust speech recognition is very important for improving recognition performance in realistic environments. The bi-spectrum, based on the Fourier transform of the third-order cumulants, expresses the non-Gaussianity and the phase information of the speech signal, showing the dependency between frequency components. In …
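For reference, the standard direct estimate of the bi-spectrum averages X(f1)·X(f2)·X*(f1+f2) over signal segments; the numpy sketch below shows only that textbook estimate and is not the paper's feature-extraction pipeline.

import numpy as np

# Direct bispectrum estimate, averaged over segments (illustrative only).
def bispectrum(frames, nfft=256):
    """frames: 2-D array (num_segments, segment_length); returns the averaged bispectrum."""
    B = np.zeros((nfft, nfft), dtype=complex)
    for seg in frames:
        X = np.fft.fft(seg, nfft)
        # X(f1) * X(f2) * conj(X(f1 + f2)), with the summed frequency taken modulo nfft.
        idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
        B += np.outer(X, X) * np.conj(X[idx])
    return B / len(frames)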
There is a great need for more TV programs to be closed-captioned to help hearing-impaired and elderly people watch TV. For that purpose, automatic speech recognition is expected to contribute to providing text from speech in real time. NHK has been using speech recognition for closed-captioning of some of its news, sports and other live TV programs. In …
There is a great need for more TV programs to be subtitled to help hearing-impaired and elderly people watch TV. NHK has been researching automatic speech recognition for efficiently subtitling live TV programs in real time. Our speech recognition system learns frequent words and expressions expected in the program beforehand and also learns characteristics of …
It is desirable to update the language model of a speech recognition system consistently and seamlessly, without stopping the system, for online applications such as real-time closed-captioning. This paper proposes a novel speech recognition system that enables the model to be updated at any time, even while it is running. It can run the second decoder with the latest model in …
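A minimal sketch of the seamless-update idea, under the assumption that a second decoder is prepared with the updated model in the background and swapped in at an utterance boundary; the Decoder class and the swap policy here are hypothetical placeholders, not the paper's actual system.

# Hedged sketch: keep decoding with the active decoder while a second decoder is
# built with the latest language model, then swap between utterances.
class Decoder:
    """Stand-in for a speech decoder bound to one language-model version."""
    def __init__(self, lm_version):
        self.lm_version = lm_version
    def decode(self, audio_chunk):
        return f"[decoded with LM {self.lm_version}]"

class SeamlessRecognizer:
    def __init__(self, lm_version):
        self.active = Decoder(lm_version)   # decoder currently serving audio
        self.pending = None                 # decoder prepared with the updated model

    def prepare_update(self, new_lm_version):
        # Build a second decoder with the latest model while the first keeps running.
        self.pending = Decoder(new_lm_version)

    def on_utterance_boundary(self):
        # Swap decoders between utterances so recognition never stops.
        if self.pending is not None:
            self.active, self.pending = self.pending, None

    def decode(self, audio_chunk):
        return self.active.decode(audio_chunk)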
A new real-time closed-captioning system for Japanese broadcast news programs is described. The system is based on a hybrid automatic speech recognition system that switches input speech between the original program sound and the rephrased speech by a "re-speaker". It minimises the number of correction operators, generally to one or two, depending on the …
This paper describes a lattice-based risk minimization training method for unsupervised language model (LM) adaptation. In a broadcast archiving system, unsupervised LM adaptation using transcriptions generated by speech recognition is considered to be useful for improving the performance. However, conventional linear interpolation methods occasionally …
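For context, the conventional linear interpolation that the abstract contrasts against mixes the baseline and adapted LM probabilities with a single weight; a minimal sketch follows, with an illustrative mixture weight and dictionary-based distributions standing in for real n-gram models.

# Conventional linear-interpolation LM adaptation (the baseline referred to above):
# P(w|h) = lam * P_adapt(w|h) + (1 - lam) * P_base(w|h)
def interpolate(p_base, p_adapt, lam=0.3):
    """p_base, p_adapt: dicts mapping word -> probability for a given history."""
    words = set(p_base) | set(p_adapt)
    return {w: lam * p_adapt.get(w, 0.0) + (1.0 - lam) * p_base.get(w, 0.0)
            for w in words}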
Low-latency speaker diarization is desirable for online-oriented speaker adaptation in real-time speech recognition. Especially in spontaneous conversations, several speakers tend to speak alternately and continuously without any silence between utterances. We therefore propose a speaker diarization method that detects speaker-change points and …
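A minimal sketch of online speaker-change detection, under the assumption that per-window speaker embeddings are available and that a cosine-similarity threshold decides the change point; both the embedding input and the threshold are illustrative and not the paper's actual method.

import numpy as np

# Hedged sketch: flag a speaker-change point when adjacent short windows have
# dissimilar speaker embeddings (threshold and embeddings are hypothetical).
def change_points(embeddings, threshold=0.7):
    """embeddings: (num_windows, dim) array of per-window speaker embeddings."""
    points = []
    for t in range(1, len(embeddings)):
        a, b = embeddings[t - 1], embeddings[t]
        cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        if cos < threshold:
            points.append(t)  # speaker likely changed at this window boundary
    return points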