A novel method for multi-sensory data fusion in multimodal human computer interaction

@inproceedings{Sun2006ANM,
  title={A novel method for multi-sensory data fusion in multimodal human computer interaction},
  author={Yong Sun and Fang Chen and Yu Shi and Vera Chung},
  booktitle={OZCHI},
  year={2006}
}
Multimodal User Interaction (MMUI) technology aims at building natural and intuitive interfaces that allow a user to interact with a computer in a way similar to human-to-human communication, for example, through speech and gestures. As a critical component of MMUI, Multimodal Input Fusion explores ways to derive a combined semantic interpretation of user inputs made through multiple modalities. This paper presents a novel approach to multi-sensory data fusion based on speech and manual…

Citations

Publications citing this paper.
10 extracted citations

An Efficient Multimodal Language Processor for Parallel Input Strings in Multimodal Input Fusion

International Conference on Semantic Computing (ICSC 2007) • 2007

References

Publications referenced by this paper.

Parsing Algorithm to Reduce Copying in Prolog

G. Penn
In Arbeitspapiere des SFB 340, 1999
