We describe a Wizard-of-Oz experiment setup for the collection of multimodal interaction data for a Music Player application. This setup was developed and used to collect experimental data as part of a project aimed at building a flexible multimodal dialogue system which provides an interface to an MP3 player, combining speech and screen input and output.
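As an illustration of what such a collection setup has to capture, the sketch below shows one possible record format for a single logged interaction turn combining speech and screen events. The field names and the JSON-lines log file are assumptions made for the example, not the project's actual corpus schema.

```python
# A minimal sketch (not the authors' actual logging format) of how a single
# Wizard-of-Oz interaction turn might be recorded. All field names here are
# hypothetical; the real corpus schema is not given in the abstract.
from dataclasses import dataclass, asdict
import json
import time


@dataclass
class WozTurn:
    session_id: str     # one recording session with one subject
    timestamp: float    # seconds since the epoch
    speaker: str        # "user" or "wizard"
    speech: str         # transcribed utterance, if any
    screen_event: str   # e.g. the list item the user tapped, if any


def log_turn(turn: WozTurn, path: str = "woz_log.jsonl") -> None:
    """Append one turn to a JSON-lines log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(turn)) + "\n")


log_turn(WozTurn("s01", time.time(), "user",
                 "play something by Radiohead", ""))
```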
The development of an intelligent user interface that supports multimodal access to multiple applications is a challenging task. In this paper we present a generic multimodal interface system where the user interacts with an anthropomorphic personalized interface agent using speech and natural gestures. The knowledge-based and uniform approach of SmartKom …
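To make the idea of combining speech and gesture input concrete, here is a minimal late-fusion sketch in which an underspecified spoken reference is resolved against a pointing gesture. It illustrates modality fusion in general; the class names and scoring are invented for the example and do not reproduce SmartKom's knowledge-based fusion mechanism.

```python
# A minimal late-fusion sketch: combine a speech hypothesis and a pointing
# gesture into one interpretation. Illustration only, with hypothetical names.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SpeechHypothesis:
    intent: str               # e.g. "play_item"
    referent: Optional[str]   # resolved object, if the utterance names one
    confidence: float


@dataclass
class GestureHypothesis:
    pointed_at: str           # object id under the pointing gesture
    confidence: float


def fuse(speech: SpeechHypothesis, gesture: GestureHypothesis) -> dict:
    """Resolve a spoken reference such as "play this one" against
    the object the user is pointing at."""
    if speech.referent is not None:
        return {"intent": speech.intent,
                "referent": speech.referent,
                "score": speech.confidence}
    # Underspecified utterance: take the referent from the gesture.
    return {"intent": speech.intent,
            "referent": gesture.pointed_at,
            "score": speech.confidence * gesture.confidence}


print(fuse(SpeechHypothesis("play_item", None, 0.9),
           GestureHypothesis("track_42", 0.8)))
```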
The development of large-scale dialog systems requires a flexible architecture model and adequate software support to cope with the challenge of system integration. This contribution presents a general framework for building integrated natural-language and multimodal dialog systems. Our approach relies on a distributed component model that enables …
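A distributed component model of this kind can be pictured as components exchanging messages over a shared hub, so that, for instance, the dialogue manager never calls the speech recognizer directly. The sketch below is a generic publish/subscribe illustration with hypothetical topic names, not the framework described in the paper.

```python
# A minimal sketch of a hub-based component architecture: components register
# with a message broker and exchange messages by topic, keeping ASR, parser
# and dialogue manager decoupled. Topic and message names are hypothetical.
from collections import defaultdict
from typing import Callable, Dict, List


class MessageHub:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


hub = MessageHub()
# The "dialogue manager" reacts to recognized utterances from the "ASR".
hub.subscribe("asr.result", lambda m: print("DM received:", m["text"]))
hub.publish("asr.result", {"text": "play the next song"})
```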
In this work we describe a large-scale extrinsic evaluation of automatic speech summarization technologies for meeting speech. The particular task is a decision audit, wherein a user must satisfy a complex information need, navigating several meetings in order to gain an understanding of how and why a given decision was made. We compare the usefulness of …
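To indicate what an extrinsic (task-based) evaluation measures, the sketch below aggregates per-condition task time and success for a decision-audit-style task. The condition names and numbers are invented placeholders for illustration, not results from the study.

```python
# A minimal sketch of aggregating extrinsic-evaluation measures: for each
# summarization condition, average the time a subject needed and whether the
# decision audit was answered correctly. All values below are placeholders.
from statistics import mean

# (condition, minutes_taken, audit_correct)
trials = [
    ("extractive_summary", 32.0, True),
    ("extractive_summary", 41.5, False),
    ("asr_transcript_only", 55.0, True),
    ("asr_transcript_only", 60.5, False),
]

for condition in sorted({c for c, _, _ in trials}):
    rows = [(t, ok) for c, t, ok in trials if c == condition]
    print(condition,
          "mean minutes:", round(mean(t for t, _ in rows), 1),
          "success rate:", mean(1.0 if ok else 0.0 for _, ok in rows))
```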