VACE Multimodal Meeting Corpus

@inproceedings{Chen2005VACEMM,
  title={VACE Multimodal Meeting Corpus},
  author={Lei Chen and R. Rose and Ying Qiao and Irene Kimbara and Fey Parrill and Haleema Welji and Tony X. Han and Jilin Tu and Zhongqiang Huang and Mary P. Harper and Francis K. H. Quek and Yingen Xiong and David McNeill and Ronald Tuttle and Thomas S. Huang},
  booktitle={MLMI},
  year={2005}
}
In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction among speech, gesture, posture, and gaze in meetings. For this purpose, a high-quality multimodal corpus is being produced.
