Corpus ID: 9940293

THE VERNISSAGE CORPUS: A MULTIMODAL HUMAN-ROBOT-INTERACTION DATASET

@inproceedings{Jayagopi2012THEVC,
  title={THE VERNISSAGE CORPUS: A MULTIMODAL HUMAN-ROBOT-INTERACTION DATASET},
  author={Dinesh Babu Jayagopi and Samira Sheikhi and D. Klotz and J. Wienke and J. Odobez and S. Wrede and Vasil Khalidov and L. Nguyen and B. Wrede and D. Gatica-Perez},
  year={2012}
}
  • Dinesh Babu Jayagopi, Samira Sheikhi, +7 authors D. Gatica-Perez
  • Published 2012
  • Computer Science
  • We introduce a new multimodal interaction dataset with extensive annotations in a conversational Human-Robot-Interaction (HRI) scenario. It has been recorded and annotated to benchmark many perceptual tasks relevant to enabling a robot to converse with multiple humans: speaker localization, keyword spotting, and speech recognition in the audio domain; tracking, pose estimation, nodding, and visual focus of attention estimation in the visual domain; and an audio-visual task such as addressee detection.
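The abstract enumerates annotation layers across audio, visual, and audio-visual channels. As a rough illustration only (the actual Vernissage release format is not given on this page; every class and field name below is hypothetical), a per-segment annotation record covering those layers might be sketched in Python like this:

from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical schema for one annotated interaction segment; all names
# are illustrative, not the corpus's documented release format.

@dataclass
class HeadPose:
    pan: float    # degrees
    tilt: float
    roll: float

@dataclass
class VisualAnnotation:
    person_id: str
    head_pose: HeadPose
    focus_of_attention: str        # e.g. "robot", "painting_3", "other_person"
    nodding: bool

@dataclass
class AudioAnnotation:
    active_speaker: Optional[str]  # None during silence
    transcript: str
    keywords: List[str] = field(default_factory=list)

@dataclass
class Segment:
    start_s: float
    end_s: float
    audio: AudioAnnotation
    visual: List[VisualAnnotation]  # one entry per visible person
    addressee: Optional[str]        # audio-visual label: who is being addressed

# Toy example: a two-person segment in which one visitor addresses the robot.
seg = Segment(
    start_s=12.4,
    end_s=14.1,
    audio=AudioAnnotation(active_speaker="person_1",
                          transcript="what is this painting about",
                          keywords=["painting"]),
    visual=[
        VisualAnnotation("person_1", HeadPose(-15.0, 5.0, 0.0), "robot", False),
        VisualAnnotation("person_2", HeadPose(30.0, -2.0, 1.5), "painting_3", True),
    ],
    addressee="robot",
)
print(seg.addressee, [v.focus_of_attention for v in seg.visual])

Keeping the addressee label at the segment level, separate from the per-person visual layer, mirrors the abstract's split between per-modality tasks and the audio-visual addressee task.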
    14 Citations
    • The vernissage corpus: A conversational Human-Robot-Interaction dataset (29 citations)
    • Context aware addressee estimation for human robot interaction (6 citations)
    • Given that, should I respond? Contextual addressee estimation in multi-party human-robot interactions (13 citations)
    • Engagement detection based on mutli-party cues for human robot interaction. H. Salam, M. Chetouani. 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015 (10 citations; highly influenced)
    • Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction (36 citations)
    • Simultaneous estimation of gaze direction and visual focus of attention for multi-person-to-robot interaction (10 citations)
    • A low cost personalised robot language tutor with perceptual and interaction capabilities
