Publications
Silent speech interfaces
TLDR: The possibility of speech processing in the absence of an intelligible acoustic signal has given rise to the "silent speech" interface, to be used as an aid for the speech-handicapped, or as part of a communications system operating in silence-required or high-background-noise environments.
Development of a silent speech interface driven by ultrasound and optical images of the tongue and lips
TLDR: This article presents a segmental vocoder driven by ultrasound and optical images (standard CCD camera) of the tongue and lips for a silent speech interface application, usable either by a laryngectomized patient or for silent communication.
Acquisition of Ultrasound, Video and Acoustic Speech Data for a Silent-Speech Interface Application
TLDR: This article addresses synchronous acquisition of high-speed multimodal speech data, composed of ultrasound and optical images of the vocal tract together with the acoustic speech signal.
Biosignal-Based Spoken Communication: A Survey
TLDR: Biosignal-based spoken communication is a wide and very active field at the intersection of various disciplines, ranging from engineering, computer science, electronics, and machine learning to medicine, neuroscience, physiology, and psychology.
Eigentongue Feature Extraction for an Ultrasound-Based Silent Speech Interface
TLDR: The article compares two approaches to the description of ultrasound vocal tract images for application in a "silent speech interface": one based on tongue contour modeling, and a second, global coding approach in which images are projected onto a feature space of Eigentongues.
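The Eigentongue coding described in this entry is a PCA-style decomposition: each ultrasound frame is encoded by its coordinates in a low-dimensional basis learned from training frames. A minimal numpy sketch of the idea, with all names, shapes, and the toy data purely illustrative:

```python
import numpy as np

def eigentongue_basis(images, n_components):
    """Compute an Eigentongue-style basis by PCA over flattened frames.

    images: array of shape (n_frames, h, w). Rows of the returned basis
    are the principal images ("Eigentongues").
    """
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are orthonormal principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(image, mean, basis):
    """Encode one frame as its coordinates in the Eigentongue space."""
    return basis @ (image.ravel().astype(float) - mean)

# Toy usage: 20 random 8x8 "frames", keeping 5 components.
rng = np.random.default_rng(0)
frames = rng.random((20, 8, 8))
mean, basis = eigentongue_basis(frames, 5)
coeffs = project(frames[0], mean, basis)
assert coeffs.shape == (5,)
```

In a real system the low-dimensional coefficient vectors, not the raw pixels, would feed the downstream recognizer or mapping model.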
Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces
TLDR: We present an articulatory-based speech synthesizer that can be controlled in real time for future BCI applications.
Statistical conversion of silent articulation into audible speech using full-covariance HMM
TLDR: Comparison of GMM and full-covariance phonetic HMM without vocabulary limitation; conversion of silent articulation captured by ultrasound and video into modal speech.
Statistical Mapping Between Articulatory and Acoustic Data for an Ultrasound-Based Silent Speech Interface
TLDR: This paper presents recent developments on our "silent speech interface" that converts tongue and lip motions, captured by ultrasound and video imaging, into audible speech.
Phone recognition from ultrasound and optical video sequences for a silent speech interface
TLDR: A visual phone recognizer predicts a target phonetic sequence from a continuous stream of visual features, which is then used to constrain a unit-selection synthesis algorithm.
Feature extraction using multimodal convolutional neural networks for visual speech recognition
TLDR: We investigate the use of convolutional neural networks (CNN) to extract visual features directly from the raw ultrasound and video images.
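The multimodal idea in this last entry, one convolutional branch per imaging modality, with the resulting features concatenated, can be sketched in plain numpy. This is a forward pass only, with untrained random filters; the layer sizes and shapes are assumptions for illustration, not the paper's architecture:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    windows = sliding_window_view(img, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

def cnn_features(img, kernels):
    """One conv layer with ReLU, 2x2 average pooling, then flattening."""
    pooled = []
    for k in kernels:
        m = np.maximum(conv2d(img, k), 0.0)          # feature map + ReLU
        h, w = m.shape[0] // 2 * 2, m.shape[1] // 2 * 2
        m = m[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pooled.append(m.ravel())
    return np.concatenate(pooled)

rng = np.random.default_rng(0)
kernels = rng.standard_normal((4, 3, 3))     # 4 untrained 3x3 filters
ultrasound = rng.standard_normal((32, 32))   # dummy tongue frame
video = rng.standard_normal((32, 32))        # dummy lip frame

# Multimodal feature vector: concatenate per-modality CNN features.
features = np.concatenate([cnn_features(ultrasound, kernels),
                           cnn_features(video, kernels)])
assert features.shape == (1800,)  # 2 modalities x 4 maps x 15 x 15
```

In practice the filters would be learned jointly with the recognizer (e.g. by backpropagation in a deep-learning framework) rather than drawn at random; the sketch only shows how the two image streams fuse into one feature vector.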