The possibility of speech processing in the absence of an intelligible acoustic signal has given rise to the idea of a 'silent speech' interface, to be used as an aid for the speech-handicapped, or as part of a communications system operating in silence-required or high-background-noise environments. The article first outlines the emergence of the silent …
The article compares two approaches to the description of ultrasound vocal tract images for application in a "silent speech interface": one based on tongue contour modeling, and a second, global coding approach in which images are projected onto a feature space of Eigentongues. A curvature-based lip profile feature extraction method is also presented.
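As a rough illustration of the global coding idea, the sketch below treats the Eigentongue decomposition as a PCA over vectorized ultrasound frames and encodes each frame by its projection onto the leading eigenvectors. The array shapes, component count, and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def eigentongue_basis(frames, n_components=30):
    """Build an 'Eigentongue' basis by PCA over flattened ultrasound frames.

    frames: array of shape (n_frames, height*width), one image per row.
    Returns the mean image and the top n_components principal axes (rows).
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(frame, mean, basis):
    """Encode one frame as its coordinates in the Eigentongue feature space."""
    return basis @ (frame - mean)

# Illustrative usage with random data standing in for real ultrasound frames.
rng = np.random.default_rng(0)
frames = rng.random((200, 64 * 64))
mean, basis = eigentongue_basis(frames, n_components=30)
features = project(frames[0], mean, basis)   # 30-dimensional frame descriptor
```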
This paper presents recent developments on our "silent speech interface" that converts tongue and lip motions, captured by ultrasound and video imaging, into audible speech. In our previous studies, the mapping between the observed articulatory movements and the resulting speech sound was achieved using a unit selection approach. We investigate here the …
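A minimal sketch of a unit selection mapping of the kind the abstract mentions, assuming a pre-built dictionary of paired visual and acoustic units; the distance measure, unit granularity, and concatenation step are simplified placeholders rather than the authors' actual system.

```python
import numpy as np

def select_units(visual_seq, dictionary):
    """Map a sequence of visual feature vectors to acoustic units by
    nearest-neighbour lookup in an audio-visual dictionary.

    dictionary: list of (visual_feature, acoustic_unit) pairs.
    Returns the concatenated acoustic units for the whole sequence.
    """
    visual_keys = np.stack([v for v, _ in dictionary])
    output = []
    for frame in visual_seq:
        # Target cost only: pick the unit whose visual key is closest.
        idx = np.argmin(np.linalg.norm(visual_keys - frame, axis=1))
        output.append(dictionary[idx][1])
    return np.concatenate(output)
```

A full unit selection synthesizer would also weigh a concatenation (join) cost between successive units and search for the best overall path, typically with dynamic programming; the loop above keeps only the per-frame target cost for brevity.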
We conducted a study within the framework of the interdisciplinary European Mercury Emission from Chloralkali Plants (EMECAP) project to assess exposure to mercury (Hg) and the contribution of Hg emissions from a mercury cell chloralkali plant to urinary mercury (U-Hg) in adults living near the plant. We collected data from questionnaires and first morning …
Silent Speech Interfaces have been proposed for communication in silent conditions or as a new means of restoring the voice of persons who have undergone a laryngectomy. To operate such a device, the user must articulate silently. Isolated word recognition tests performed with fixed and portable ultrasound-based silent speech interface equipment show that …
This article presents a segmental vocoder driven by ultrasound and optical images (standard CCD camera) of the tongue and lips for a "silent speech interface" application, usable either by a laryngectomized patient or for silent communication. The system is built around an audio-visual dictionary which associates visual to acoustic observations for each …
The article describes a video-only speech recognition system for a "silent speech interface" application, using ultrasound and optical images of the voice organ. A one-hour audiovisual speech corpus was phonetically labeled using an automatic speech alignment procedure and robust visual feature extraction techniques. HMM-based stochastic models were …
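As a hedged illustration of the HMM-based modelling step, the sketch below trains one Gaussian HMM per phone class on visual feature sequences and classifies a test sequence by maximum log-likelihood. It relies on the third-party hmmlearn package and made-up feature dimensions; the paper's actual model topology, features, and training procedure are not reproduced here.

```python
import numpy as np
from hmmlearn import hmm   # third-party package: pip install hmmlearn

def train_phone_models(train_data, n_states=3):
    """Train one Gaussian HMM per phone class.

    train_data: dict mapping phone label -> list of (T_i, D) feature arrays.
    """
    models = {}
    for phone, sequences in train_data.items():
        X = np.concatenate(sequences)          # stack all sequences
        lengths = [len(s) for s in sequences]  # remember sequence boundaries
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[phone] = m
    return models

def recognize(models, sequence):
    """Return the phone label whose model gives the highest log-likelihood."""
    return max(models, key=lambda p: models[p].score(sequence))
```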
A machine learning technique is used to match reconstructed tongue contours in 30-frame-per-second ultrasound images to speaker vocal tract parameters obtained from a synchronized audio track. Speech synthesized using the learned parameters and noise as an excitation signal displays many of the time- and frequency-domain characteristics of the original …
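One way to read the synthesis step is as a source-filter model: a regressor predicts vocal-tract filter parameters from tongue contour features, and white noise is pushed through that filter frame by frame. The sketch below assumes LPC coefficients as the "vocal tract parameters" and is only an illustration of the idea, not the paper's actual model.

```python
import numpy as np
from scipy.signal import lfilter

def synthesize_from_parameters(lpc_frames, frame_len=160, gain=0.01):
    """Synthesize speech-like audio by exciting per-frame all-pole filters with noise.

    lpc_frames: iterable of LPC coefficient vectors a = [a1, ..., ap], one per
    ultrasound frame (here assumed to be predicted from tongue contour features).
    """
    rng = np.random.default_rng(0)
    out = []
    for a in lpc_frames:
        noise = rng.standard_normal(frame_len) * gain
        # All-pole filter 1 / (1 + a1 z^-1 + ... + ap z^-p) driven by noise.
        out.append(lfilter([1.0], np.concatenate(([1.0], a)), noise))
    return np.concatenate(out)
```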
Latest results on continuous-speech phone recognition from video observations of the tongue and lips are described in the context of an ultrasound-based silent speech interface. The study is based on a new 61-minute audiovisual database containing ultrasound sequences of the tongue as well as both frontal and lateral views of the speaker's lips. Phonetically …