We present the application of statistical language modeling methods to the prediction of the next dialogue act. This prediction is used by different modules of the speech-to-speech translation system VERBMOBIL. The statistical approach uses deleted interpolation of n-gram frequencies as a basis and determines the interpolation weights by a modified version of …
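The abstract above is truncated before the weight-estimation details, so the following is only an illustrative sketch of deleted interpolation over dialogue-act n-grams, not the VERBMOBIL implementation: the next-act probability is a weighted mix of trigram, bigram, and unigram relative frequencies. The function names and the fixed example weights are assumptions for illustration.

```python
from collections import Counter

def train_ngram_counts(acts):
    """Collect unigram, bigram, and trigram counts over a dialogue-act sequence."""
    uni, bi, tri = Counter(), Counter(), Counter()
    for i, a in enumerate(acts):
        uni[a] += 1
        if i >= 1:
            bi[(acts[i - 1], a)] += 1
        if i >= 2:
            tri[(acts[i - 2], acts[i - 1], a)] += 1
    return uni, bi, tri

def interpolated_prob(a, prev1, prev2, uni, bi, tri, lambdas=(0.2, 0.3, 0.5)):
    """Deleted-interpolation estimate:
    lambda1 * f(a) + lambda2 * f(a | prev1) + lambda3 * f(a | prev2, prev1).
    The weights here are fixed for illustration; in practice they are
    estimated on held-out data (the step truncated in the abstract)."""
    l1, l2, l3 = lambdas
    total = sum(uni.values())
    p1 = uni[a] / total if total else 0.0
    p2 = bi[(prev1, a)] / uni[prev1] if uni[prev1] else 0.0
    p3 = tri[(prev2, prev1, a)] / bi[(prev2, prev1)] if bi[(prev2, prev1)] else 0.0
    return l1 * p1 + l2 * p2 + l3 * p3

def predict_next(prev1, prev2, uni, bi, tri):
    """Predict the most probable next dialogue act given the two previous acts."""
    return max(uni, key=lambda a: interpolated_prob(a, prev1, prev2, uni, bi, tri))
```

Because unseen contexts make the conditional frequencies zero, the lower-order terms smooth the estimate, which is the point of the interpolation.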
Embodied conversational agents provide a promising option for presenting information to users. This contribution revisits a number of past and ongoing systems with animated characters that have been developed at DFKI. While in all systems the purpose of using characters is to convey information to the user, there are significant variations in the style of …
Natural multimodal interaction with realistic virtual characters provides rich opportunities for entertainment and education. In this paper we present the current VirtualHuman demonstrator system. It provides a knowledge-based framework to create interactive applications in a multi-user, multi-agent setting. The behavior of the virtual …
In this paper we describe how to generate affective dialogs for multiple virtual characters based on a combination of both automatically generated and pre-scripted scenes. This is done by using the same technique for emotion elicitation and computation that takes either input from the human author in the form of appraisal and dialog act tags or from a …
In this article we describe our efforts to develop an Improvisational Platform that enables real-time interaction between a user-controlled avatar and a group of synthetic actors in a 3D virtual world. The platform will allow users to experiment with different improvisational rules and even different settings. This will be accomplished by …
A growing number of research projects in academia and industry have recently started to develop lifelike agents as a new metaphor for highly personalised human-machine communication. A strong argument in favour of using such characters in the interface is the fact that they make human-computer interaction more enjoyable and allow for communication styles …
In this paper, we introduce a toolkit called SceneMaker for authoring scenes for adaptive, interactive performances. These performances are based on automatically generated and pre-scripted scenes which can be authored with the SceneMaker in a two-step approach: In step one, the scene flow is defined using cascaded finite state machines. In a second step, …
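The abstract describes the scene flow as cascaded finite state machines, where a node can either play a scene or delegate to a nested machine. As a minimal sketch of that idea (not the actual SceneMaker toolkit; all class and field names here are invented for illustration), step one of the two-step approach could be modeled like this:

```python
class SceneNode:
    """A node in the scene flow: plays a scene and/or delegates to a nested FSM."""
    def __init__(self, name, scene=None, submachine=None):
        self.name = name
        self.scene = scene            # id of a pre-scripted or generated scene
        self.submachine = submachine  # optional nested SceneFlow (the "cascade")
        self.transitions = {}         # event name -> next node name

class SceneFlow:
    """A finite state machine over SceneNodes; nesting submachines gives the cascade."""
    def __init__(self, nodes, start):
        self.nodes = {n.name: n for n in nodes}
        self.current = start

    def step(self, event):
        """Forward the event to any nested machine, then take a local transition."""
        node = self.nodes[self.current]
        if node.submachine is not None:
            node.submachine.step(event)
        if event in node.transitions:
            self.current = node.transitions[event]
        return self.current
```

Step two of the approach (filling each node's `scene` with dialogue content) would then be independent of this flow definition, which is the separation the two-step authoring process aims for.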