Joint interpretation of input speech and pen gestures for multimodal human-computer interaction

@inproceedings{Hui2006JointIO,
  title={Joint interpretation of input speech and pen gestures for multimodal human-computer interaction},
  author={Pui-Yu Hui and Helen M. Meng},
  booktitle={INTERSPEECH},
  year={2006}
}
This paper describes our initial work in semantic interpretation of multimodal user input consisting of speech and pen gestures. We have designed and collected a multimodal corpus of over a thousand navigational inquiries around the Beijing area. We devised a processing sequence for extracting spoken references from the speech input (perfect transcripts) and interpreting each reference by generating a hypothesis list of possible semantics (i.e. locations). We also devised a processing…
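The pipeline the abstract outlines — extract spoken reference expressions from a transcript, then pair each with a hypothesis list of candidate locations from pen gestures — can be sketched roughly as below. This is a minimal illustration, not the paper's method: the reference vocabulary, the one-to-one alignment of references to gestures, and all data are assumptions made for the example.

```python
import re

# Hypothetical set of deictic reference expressions; the actual paper's
# extraction procedure and vocabulary are not specified in this excerpt.
# Longer alternatives come first so "this place" wins over "here"/"there".
PATTERN = re.compile(r"\b(this place|that place|there|here)\b")

def extract_references(transcript: str) -> list[str]:
    """Return spoken reference expressions in the order they occur."""
    return PATTERN.findall(transcript.lower())

def interpret(references: list[str],
              gesture_hypotheses: list[list[str]]) -> dict[str, list[str]]:
    """Align each spoken reference, in order, with the hypothesis list of
    possible locations produced by the corresponding pen gesture.
    (A naive positional alignment; the paper's approach may differ.)"""
    return {ref: hyps for ref, hyps in zip(references, gesture_hypotheses)}

if __name__ == "__main__":
    transcript = "How do I get from here to there?"
    # Each pen gesture yields a ranked list of candidate locations.
    gestures = [["Tiananmen Square", "Forbidden City"], ["Beijing Zoo"]]
    refs = extract_references(transcript)
    print(interpret(refs, gestures))
```

The key idea carried over from the abstract is that each spoken reference is not resolved to a single location but to a hypothesis list, leaving disambiguation to a later joint-interpretation step.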


