In this paper, we introduce a toolkit called SceneMaker for authoring scenes for adaptive, interactive performances. These performances are based on automatically generated and pre-scripted scenes, which can be authored with SceneMaker in a two-step approach: in the first step, the scene flow is defined using cascaded finite state machines; in the second step, …
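The first authoring step named above, defining scene flow as cascaded finite state machines, can be illustrated with a minimal sketch. All class, node, and event names below are invented for illustration; SceneMaker's actual scripting language and cascading semantics are not shown.

```python
# Hypothetical sketch of a scene-flow state machine: nodes play scenes
# (or could nest sub-machines, giving the cascade) and transition on events.
class SceneNode:
    def __init__(self, name, play=None):
        self.name = name
        self.play = play          # scene script to perform at this node
        self.transitions = {}     # event -> name of the next node

    def on(self, event, target):
        self.transitions[event] = target
        return self               # allow chained .on(...) calls

class SceneFlow:
    """A finite state machine over scene nodes; unknown events are ignored."""
    def __init__(self, start, nodes):
        self.nodes = {n.name: n for n in nodes}
        self.current = self.nodes[start]

    def fire(self, event):
        target = self.current.transitions.get(event)
        if target is not None:
            self.current = self.nodes[target]
        return self.current.name

flow = SceneFlow("welcome", [
    SceneNode("welcome", play="greeting_scene").on("user_present", "pitch"),
    SceneNode("pitch", play="sales_scene").on("user_left", "idle"),
    SceneNode("idle", play="smalltalk_scene").on("user_present", "pitch"),
])
flow.fire("user_present")   # -> "pitch"
```

In a cascaded design, a node's `play` slot could itself hold another `SceneFlow`, so a coarse flow delegates to finer sub-flows; that nesting is omitted here for brevity.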
We present the application of statistical language modeling methods for the prediction of the next dialogue act. This prediction is used by different modules of the speech-to-speech translation system VERBMOBIL. The statistical approach uses deleted interpolation of n-gram frequencies as a basis and determines the interpolation weights by a modified version of …
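Deleted interpolation, as named above, smooths a trigram estimate with bigram and unigram relative frequencies. The toy corpus, act labels, and fixed weights below are invented for illustration; in the system described, the weights would be estimated from held-out data rather than hard-coded.

```python
from collections import Counter

# Toy dialogue-act sequence standing in for a training corpus.
acts = ["greet", "suggest", "accept", "suggest", "reject",
        "suggest", "accept", "bye"]

uni = Counter(acts)
bi = Counter(zip(acts, acts[1:]))
tri = Counter(zip(acts, acts[1:], acts[2:]))
N = len(acts)

def p_interp(d3, d2, d1, lambdas=(0.6, 0.3, 0.1)):
    """P(d3 | d1 d2) as a weighted sum of trigram, bigram, and unigram
    relative frequencies (deleted interpolation with illustrative weights)."""
    l3, l2, l1 = lambdas
    p_tri = tri[(d1, d2, d3)] / bi[(d1, d2)] if bi[(d1, d2)] else 0.0
    p_bi = bi[(d2, d3)] / uni[d2] if uni[d2] else 0.0
    p_uni = uni[d3] / N
    return l3 * p_tri + l2 * p_bi + l1 * p_uni

def predict_next(d1, d2):
    """Predict the next dialogue act given the two preceding acts."""
    return max(uni, key=lambda d: p_interp(d, d2, d1))
```

Because each component distribution sums to one over the act inventory, the interpolated distribution does too, which is the point of interpolating rather than backing off.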
Embodied conversational agents provide a promising option for presenting information to users. This contribution revisits a number of past and ongoing systems with animated characters that have been developed at DFKI. While in all systems the purpose of using characters is to convey information to the user, there are significant variations in the style of …
Natural multimodal interaction with realistic virtual characters provides rich opportunities for entertainment and education. In this paper we present the current VirtualHuman demonstrator system. It provides a knowledge-based framework to create interactive applications in a multi-user, multi-agent setting. The behavior of the virtual …
In this paper we describe how to generate affective dialogs for multiple virtual characters based on a combination of both automatically generated and pre-scripted scenes. This is done by using the same technique for emotion elicitation and computation that takes either input from the human author in the form of appraisal and dialog act tags or from a …
A growing number of research projects in academia and industry have recently started to develop lifelike agents as a new metaphor for highly personalised human-machine communication. A strong argument in favour of using such characters in the interface is the fact that they make human-computer interaction more enjoyable and allow for communication styles …
This paper presents the NECA approach to the generation of dialogues between Embodied Conversational Agents (ECAs). This approach consists of the automated construction of an abstract script for an entire dialogue (cast in terms of dialogue acts), which is incrementally enhanced by a series of modules and finally "performed" by means of text, speech and …
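The incremental-enhancement idea described above can be sketched as a pipeline of modules, each enriching an abstract dialogue-act script with one further layer of realisation. Every module, field, speaker, and file name below is hypothetical; this is not the NECA API, only the shape of the approach.

```python
# Each module reads the script, adds one annotation layer, and passes it on.
def generate_text(script):
    """Hypothetical text realiser: turns each dialogue act into a surface string."""
    for turn in script:
        turn["text"] = f'{turn["speaker"]} performs {turn["act"]}'
    return script

def assign_speech(script):
    """Hypothetical speech module: attaches a placeholder audio resource."""
    for turn in script:
        turn["audio"] = f'{turn["speaker"]}_{turn["act"]}.wav'
    return script

def assign_gesture(script):
    """Hypothetical gesture module: maps dialogue acts to body behaviour."""
    for turn in script:
        turn["gesture"] = "nod" if turn["act"] == "agree" else "neutral"
    return script

def perform(script, modules):
    """Run the abstract script through the module chain in order."""
    for module in modules:
        script = module(script)
    return script

abstract_script = [
    {"speaker": "Ritchie", "act": "greet"},
    {"speaker": "Tina", "act": "agree"},
]
performed = perform(abstract_script,
                    [generate_text, assign_speech, assign_gesture])
```

The design point is that every module consumes and produces the same script representation, so layers (text, speech, gesture) can be added, reordered, or swapped without the dialogue planner knowing about them.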
CrossTalk is a self-explaining virtual character exhibition for public spaces. This paper presents the CrossTalk system, including its authoring tool SceneMaker and the CarSales exhibit. CrossTalk extends the commonplace human-to-screen interaction to an interaction triangle. The user faces two separate screens inhabited by virtual characters and …
We present an extension of the CrossTalk system that makes it possible to model emotional behaviour on three levels: scripting, processing and expression. CrossTalk is a self-explaining virtual character exhibition for public spaces. Its SceneMaker authoring suite provides authors with a screenplay-like language for scripting character and user interactions. This …