Animating an Autonomous 3D Talking Avatar

Dominik Borer, Dominik Lutz and Martine Guay. Animating an Autonomous 3D Talking Avatar.
One of the main challenges in embodying a conversational agent is annotating how and when motions can be played and composed together in real time without visual artifacts. The inherent problem is to do so---for a large number of motions---without introducing mistakes in the annotation. To our knowledge, there is no automatic method that can process animations and automatically label actions and the compatibility between them. In practice, a state machine, where clips are the actions, is…
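The state machine of clips described above can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's implementation; all names (`ClipStateMachine`, `annotate`, `can_play`) are hypothetical, and clips are reduced to string labels with hand-annotated compatibility edges.

```python
# Hypothetical sketch of a clip state machine: actions are animation clips,
# and manually annotated compatibility edges decide which clips may follow
# one another without visual artifacts. Names are illustrative only.

class ClipStateMachine:
    def __init__(self):
        # clip name -> set of clip names it may transition to
        self.compatible = {}

    def add_clip(self, name):
        self.compatible.setdefault(name, set())

    def annotate(self, src, dst):
        """Manually label that `dst` may follow `src` (the error-prone step)."""
        self.add_clip(src)
        self.add_clip(dst)
        self.compatible[src].add(dst)

    def can_play(self, current, nxt):
        return nxt in self.compatible.get(current, set())


sm = ClipStateMachine()
sm.annotate("idle", "wave")
sm.annotate("wave", "idle")
print(sm.can_play("idle", "wave"))   # True
print(sm.can_play("wave", "point"))  # False
```

Annotating even two clips takes two explicit calls; the quadratic growth of pairwise annotations is exactly why doing this by hand for a large motion library invites mistakes.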
Identifying research gaps: A review of virtual patient education and self-management.
Interdisciplinary teams of linguists, computer scientists, visual designers and health care professionals are required to go beyond a technology-centric design approach and explore which patterns and practices must be constructed visually, verbally, para- and nonverbally between humans and embodied machines in a counselling context.
SmartBody: behavior realization for embodied conversational agents
SmartBody is presented, an open source modular framework for animating ECAs in real time, based on the notion of hierarchically connected animation controllers, which can employ arbitrary animation algorithms such as keyframe interpolation, motion capture or procedural animation.
Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous
It is argued that the modeling and animation of such fundamental behavior is crucial for the credibility and effectiveness of the virtual interaction in chat, and a method to automate the animation of important communicative behavior is proposed, deriving from work in conversation and discourse theory.
Speech-driven Animation with Meaningful Behaviors
Objective and subjective evaluations demonstrate the benefits of the proposed approach over both an unconstrained model and a rule-based behavior realizer, creating trajectories that are timely synchronized with speech.
Gesture modeling and animation based on a probabilistic re-creation of speaker style
A system that, with a focus on arm gestures, is capable of producing full-body gesture animation for given input text in the style of a particular performer, which was successfully validated in an empirical user study.
Audio to Body Dynamics
An LSTM network is trained on violin and piano recital videos uploaded to the Internet, and the predicted body points are applied to a rigged avatar to create the animation.
Gesture controllers
The modularity of the proposed method allows customization of a character's gesture repertoire, animation of non-human characters, and the use of additional inputs such as speech recognition or direct user control.
Speaking with hands: creating animated conversational characters from recordings of human performance
By framing problems for utterance generation and synthesis so that they can draw closely on a talented performance, the techniques support the rapid construction of animated characters with rich and appropriate expression.
Interactive motion generation from examples
This paper presents a framework that generates human motions by cutting and pasting motion capture data and can easily synthesize multiple motions that interact with each other using constraints, allowing a variety of choices for the animator.
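The cut-and-paste idea above can be illustrated with a toy splice search: pick the pair of frames across two clips whose poses are most similar, then concatenate at that point. This is a simplified sketch, not the paper's algorithm; `pose_distance`, `best_splice`, and the 2-D pose vectors are all made up for illustration.

```python
# Toy cut-and-paste motion splicing: frames are small pose vectors, and two
# clips are joined at the frame pair with the smallest pose difference, so
# the concatenated motion has the least visible jump. Data is illustrative.

def pose_distance(a, b):
    """Euclidean distance between two pose vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def best_splice(clip_a, clip_b):
    """Return (i, j) so that clip_a[:i+1] + clip_b[j:] minimizes the pose jump."""
    best = None
    for i, pa in enumerate(clip_a):
        for j, pb in enumerate(clip_b):
            d = pose_distance(pa, pb)
            if best is None or d < best[0]:
                best = (d, i, j)
    return best[1], best[2]


clip_a = [[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]]
clip_b = [[0.9, 0.25], [1.4, 0.3]]
i, j = best_splice(clip_a, clip_b)
spliced = clip_a[:i + 1] + clip_b[j:]
print(i, j)  # 2 0
```

Real motion graphs compare windows of frames and blend across the transition rather than cutting at a single frame, but the core operation, searching for low-cost transition points between captured clips, is the same.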
BEAT: the Behavior Expression Animation Toolkit
The Behavior Expression Animation Toolkit (BEAT) allows animators to input typed text that they wish to be spoken by an animated human figure, and to obtain as output appropriate and synchronized…
A deep learning approach for generalized speech animation
A simple and effective deep learning approach that automatically generates natural-looking speech animation synchronized to input speech, and can also generate on-demand speech animation in real time from user speech input.