Embodied Conversational Agents: Computing and Rendering Realistic Gaze Patterns


We describe our efforts to model the multimodal signals exchanged by interlocutors interacting face-to-face. This data is then used to control embodied conversational agents able to engage in a realistic face-to-face interaction with human partners. This paper focuses on the generation and rendering of realistic gaze patterns. The problems encountered and the solutions proposed call for a stronger coupling between research fields such as audiovisual signal processing, linguistics, and the psychosocial sciences for the sake of efficient and realistic human-computer interaction.

DOI: 10.1007/11922162_2

7 Figures and Tables

Cite this paper

@inproceedings{Bailly2006EmbodiedCA,
  title     = {Embodied Conversational Agents: Computing and Rendering Realistic Gaze Patterns},
  author    = {G{\'e}rard Bailly and Fr{\'e}d{\'e}ric Elisei and Stephan Raidt and Alix Casari and Antoine Picot},
  booktitle = {PCM},
  year      = {2006}
}