Szu-Chen Stan Jou

We present our research on continuous speech recognition of surface electromyographic signals generated by the human articulatory muscles. Previous research on electromyographic speech recognition was limited to isolated word recognition because it was very difficult to train phoneme-based acoustic models for the electromyographic speech …
This paper describes a study of automatically identifying whispering speakers. People usually whisper, lowering their voices, in order to avoid being identified or overheard. The study compares performance between the normal and whispered speech modes in clean and noisy environments under matched and mismatched training conditions, and describes the impact of …
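As an illustration of the matched and mismatched training conditions compared in that study, the sketch below trains a toy speaker classifier on one speech mode and tests it on the same or the other mode. The synthetic features and the Gaussian naive Bayes classifier are placeholder assumptions, not the system used in the paper.

```python
# Toy illustration of matched vs. mismatched train/test conditions for
# normal vs. whispered speech; features and classifier are placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def synthetic_features(mode, n=200, dim=13):
    """Stand-in for acoustic features of one speech mode."""
    shift = 0.0 if mode == "normal" else 1.5   # whispering shifts the feature space
    return rng.normal(loc=shift, scale=1.0, size=(n, dim))

# Two speakers, two speech modes; the task is speaker identification.
data = {mode: {spk: synthetic_features(mode) + spk for spk in (0, 1)}
        for mode in ("normal", "whisper")}

def run(train_mode, test_mode):
    X_tr = np.vstack([data[train_mode][s] for s in (0, 1)])
    X_te = np.vstack([data[test_mode][s] for s in (0, 1)])
    labels = np.repeat([0, 1], 200)
    acc = GaussianNB().fit(X_tr, labels).score(X_te, labels)
    print(f"train={train_mode:8s} test={test_mode:8s} accuracy={acc:.2f}")

for tr in ("normal", "whisper"):      # matched vs. mismatched conditions
    for te in ("normal", "whisper"):
        run(tr, te)
```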
In our previous work, we reported a surface electromyographic (EMG) continuous speech recognition system with a novel EMG feature extraction method, E4, which is more robust to EMG noise than traditional spectral features. In this paper, we show that articulatory feature (AF) classifiers can also benefit from the E4 feature, which improves the F-score of the …
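To make the idea of time-domain EMG features concrete, here is a minimal framing-and-feature sketch. The mean, power, and zero-crossing statistics below are generic stand-ins and not the exact definition of the E4 feature from the paper; the 600 Hz sampling rate and frame sizes are assumptions.

```python
# Generic time-domain framing of one EMG channel (illustrative, not E4).
import numpy as np

def frame_signal(x, sr, frame_ms=27, shift_ms=10):
    """Split a 1-D EMG signal into overlapping frames."""
    flen, fshift = int(sr * frame_ms / 1000), int(sr * shift_ms / 1000)
    n_frames = 1 + (len(x) - flen) // fshift
    return np.stack([x[i * fshift:i * fshift + flen] for i in range(n_frames)])

def time_domain_features(x, sr=600):
    frames = frame_signal(x, sr)
    mean = frames.mean(axis=1)                        # slow trend per frame
    power = (frames ** 2).mean(axis=1)                # frame energy
    sign = np.signbit(frames)
    zcr = (sign[:, 1:] != sign[:, :-1]).mean(axis=1)  # zero-crossing rate
    return np.column_stack([mean, power, zcr])

# Example: one second of synthetic EMG sampled at 600 Hz.
emg = np.random.default_rng(1).normal(size=600)
print(time_domain_features(emg).shape)    # -> (frames, 3)
```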
In this paper, we present first experiments toward a tighter coupling between Automatic Speech Recognition (ASR) and Statistical Machine Translation (SMT) to improve the overall performance of our speech translation system. In conventional speech translation systems, the recognizer outputs a single hypothesis which is then translated by the SMT system. This …
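A minimal sketch of what tighter coupling can look like in code: instead of translating only the single best recognizer hypothesis, an n-best list is passed to the translation component and the joint ASR + SMT score selects the output. The Hypothesis class, translate_with_score placeholder, and score weights are hypothetical, not the actual system interfaces.

```python
# Sketch of n-best coupling between ASR and SMT (placeholder interfaces).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    words: str
    asr_score: float          # e.g. acoustic + language model log-probability

def translate_with_score(source_sentence):
    """Placeholder SMT call: returns (translation, translation log-score)."""
    return source_sentence.upper(), -0.1 * len(source_sentence.split())

def translate_nbest(nbest, asr_weight=1.0, smt_weight=1.0):
    best = None
    for hyp in nbest:
        translation, smt_score = translate_with_score(hyp.words)
        joint = asr_weight * hyp.asr_score + smt_weight * smt_score
        if best is None or joint > best[0]:
            best = (joint, hyp.words, translation)
    return best

nbest = [Hypothesis("this is a test", -12.3),
         Hypothesis("this is the test", -12.9),
         Hypothesis("this is a text", -13.1)]
print(translate_nbest(nbest))
```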
This paper describes various adaptation methods applied to recognizing soft whisper recorded with a throat microphone. Since the amount of adaptation data is small and the testing data is very different from the training data, a series of adaptation methods is necessary. The adaptation methods include maximum likelihood linear regression, feature-space …
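The sketch below illustrates the core idea behind maximum likelihood linear regression adaptation: the Gaussian means of the acoustic model are re-estimated through one shared affine transform fitted on the adaptation data. Hard frame-to-Gaussian assignments and unit covariances are simplifying assumptions; a real system uses soft occupation counts from forced alignment and the model covariances.

```python
# Toy illustration of the MLLR idea: fit one global affine transform of the
# Gaussian means to a small amount of adaptation data.
import numpy as np

rng = np.random.default_rng(0)
dim, n_gauss = 4, 8
means = rng.normal(size=(n_gauss, dim))              # original model means

# Simulated adaptation data: the new condition shifts and scales the space.
true_W = np.hstack([0.3 * np.ones((dim, 1)), 0.8 * np.eye(dim)])
assign = rng.integers(0, n_gauss, size=500)          # frame -> Gaussian index
ext = np.hstack([np.ones((n_gauss, 1)), means])      # extended means [1, mu]
obs = ext[assign] @ true_W.T + 0.05 * rng.normal(size=(500, dim))

# Closed-form weighted least-squares estimate (unit covariances assumed).
X = ext[assign]                                      # (frames, dim+1)
W_hat, *_ = np.linalg.lstsq(X, obs, rcond=None)      # solves X @ W_hat ~ obs
adapted_means = ext @ W_hat                          # (n_gauss, dim)

# Deviation from the true adapted means should be small.
print(np.abs(adapted_means - ext @ true_W.T).max())
```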
Electroencephalography (EEG)-based communication for situations in which normal speech may not be uttered has been investigated several times. Recently, experiments showed that, besides giving simple commands, i.e., to a computer, the recognition of actual unspoken words may also be feasible. Wavelet-based signal processing has been employed increasingly …
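As a small example of wavelet-based processing of an EEG channel, the sketch below computes sub-band log-energies from a discrete wavelet decomposition, assuming the PyWavelets package. The wavelet family, decomposition depth, and energy features are illustrative choices, not those used in the cited work.

```python
# Wavelet sub-band log-energies for one EEG channel (illustrative choices).
import numpy as np
import pywt

def wavelet_band_energies(signal, wavelet="db4", level=5):
    """Discrete wavelet transform, then log-energy of each sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log((c ** 2).mean() + 1e-10) for c in coeffs])

# One second of synthetic EEG at 256 Hz.
eeg = np.random.default_rng(2).normal(size=256)
print(wavelet_band_energies(eeg))   # one log-energy per sub-band (level + 1 values)
```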
This paper describes our research on adaptation methods applied to articulatory feature detection on soft whispery speech recorded with a throat microphone. Since the amount of adaptation data is small and the testing data is very different from the training data, a series of adaptation methods is necessary. The adaptation methods include maximum …
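The adaptation chain above also involves feature-space methods. As a generic stand-in, not the specific transform from the paper, the sketch below applies per-utterance mean and variance normalization, which maps mismatched test features toward the statistics the models were trained on.

```python
# Per-utterance feature normalization as a simple feature-space example
# (generic stand-in, not the paper's specific adaptation transform).
import numpy as np

def cmvn(features, eps=1e-8):
    """features: (frames, dim) array, e.g. MFCCs of one utterance."""
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / (std + eps)

utterance = np.random.default_rng(3).normal(loc=2.0, scale=3.0, size=(120, 13))
normalized = cmvn(utterance)
print(normalized.mean(axis=0).round(3), normalized.std(axis=0).round(3))
```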
In this paper, we present an approach for articulatory feature classification based on surface electromyographic signals generated by the facial muscles. With parallel recordings of audible speech and electromyographic signals, experiments are conducted to show the anticipatory behavior of electromyographic signals with respect to speech signals. On average, we …
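One way to quantify the anticipatory behavior of EMG with respect to the audio is to shift the EMG stream against frame labels derived from the speech signal and find the lag that aligns best. The correlation criterion and synthetic signals below are illustrative stand-ins for the classifier-accuracy analysis in the paper.

```python
# Measuring the EMG-vs-audio offset by scanning over frame lags (illustrative).
import numpy as np

rng = np.random.default_rng(4)
n_frames = 1000
audio_activity = rng.normal(size=n_frames)           # stand-in for an AF label stream
true_lead = 5                                        # EMG leads audio by 5 frames (~50 ms)
emg_activity = np.roll(audio_activity, -true_lead) + 0.3 * rng.normal(size=n_frames)

def score_at_lag(lag):
    """Correlation between audio labels and EMG delayed by `lag` frames."""
    shifted = np.roll(emg_activity, lag)
    valid = slice(abs(lag), n_frames - abs(lag))      # ignore wrapped-around edges
    return np.corrcoef(audio_activity[valid], shifted[valid])[0, 1]

best = max(range(-10, 11), key=score_at_lag)
print(f"best alignment at lag {best} frames")         # expected near +5
```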
We present our recent results on speech recognition by surface electromyography (EMG), which captures the electric potentials generated by the human articulatory muscles. This technique can be used to enable Silent Speech Interfaces, since EMG signals are generated even when people only articulate speech without producing any sound. Preliminary …