Inter-ACT: an affective and contextually rich multimodal video corpus for studying interaction with robots

Abstract

The Inter-ACT (INTEracting with Robots - Affect Context Task) corpus is an affective and contextually rich multimodal video corpus containing affective expressions of children playing chess with an iCat robot. It contains videos that capture the interaction from different perspectives and includes synchronised contextual information about the game and the behaviour displayed by the robot. The Inter-ACT corpus is mainly intended as a comprehensive repository of naturalistic, contextualised, task-dependent data for training and evaluating an affect recognition system in an educational game scenario. The richness of contextual data capturing the whole human-robot interaction cycle, together with the fact that the corpus was collected in the same interaction scenario as the target application, makes the Inter-ACT corpus unique in its genre.

DOI: 10.1145/1873951.1874142

Cite this paper

@inproceedings{Castellano2010InterACTAA,
  title     = {Inter-ACT: an affective and contextually rich multimodal video corpus for studying interaction with robots},
  author    = {Ginevra Castellano and Iolanda Leite and Andr{\'e} Pereira and Carlos Martinho and Ana Paiva and Peter W. McOwan},
  booktitle = {ACM Multimedia},
  year      = {2010}
}