Sandrine Robbe

Within the framework of a prospective ergonomic approach, we simulated two multimodal user interfaces in order to study the usability of constrained vs. spontaneous speech in a multimodal environment. The first experiment, which served as a reference, gave subjects the opportunity to use speech and gestures freely, while subjects in the second experiment …
We present two related empirical studies of the use of speech and gestures in simulated HCI environments. This research aims at providing designers of future multimodal interfaces for the general public with useful information on users' expectations and requirements. Results demonstrate the usability of tractable artificial command languages composed of …
This paper presents an approach to the optimization of acoustic cues used for stop identification in the context of an acoustic-phonetic decoding system which uses automatic acoustic event extractors (a formant tracking algorithm and a burst analyzer). The acoustic cues have been designed on the basis of acoustic studies on stops and spectrogram reading …