Integration of Speech and Gesture Inputs during Multimodal Interaction

@inproceedings{Epps2004IntegrationOS,
  title={Integration of Speech and Gesture Inputs during Multimodal Interaction},
  author={Julien Epps and Sharon L. Oviatt and Fang Chen},
  year={2004}
}
Speech and gesture are two types of multimodal inputs that can be used to facilitate more natural human-machine interaction in applications for which the traditional keyboard and mouse input mechanisms are inappropriate; however, the possibility of their concurrent use raises the issue of how best to fuse the two inputs. This paper analyses data collected from a speech and manual gesture-based digital photo management application scenario, and from this derives assumptions and fusion thresholds…
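The fusion question the abstract raises is typically handled with temporal thresholds: a speech event and a gesture event are combined into one multimodal command when they overlap in time or fall within a lag window of each other. A minimal sketch of that idea follows; the event structure, the `should_fuse` helper, and the 4-second lag value are illustrative assumptions, not the thresholds the paper derives empirically.

```python
from dataclasses import dataclass


@dataclass
class InputEvent:
    modality: str   # e.g. "speech" or "gesture"
    start: float    # onset time in seconds
    end: float      # offset time in seconds


# Placeholder lag window (seconds). The paper derives its own fusion
# thresholds from user data; this value is purely illustrative.
MAX_LAG = 4.0


def should_fuse(speech: InputEvent, gesture: InputEvent,
                max_lag: float = MAX_LAG) -> bool:
    """Fuse the two inputs when they overlap in time, or when the gap
    between them is no larger than the lag threshold."""
    # Gap is negative when the intervals overlap, positive otherwise.
    gap = max(speech.start, gesture.start) - min(speech.end, gesture.end)
    return gap <= max_lag


# Usage: a gesture beginning 0.3 s after the speech ends is fused;
# one beginning 8.8 s later is treated as a separate command.
speech = InputEvent("speech", 0.0, 1.2)
near_gesture = InputEvent("gesture", 1.5, 2.0)
late_gesture = InputEvent("gesture", 10.0, 11.0)
print(should_fuse(speech, near_gesture))  # True
print(should_fuse(speech, late_gesture))  # False
```

A real system would also weigh semantic compatibility of the two inputs, not just timing, but the temporal window is the usual first gate.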