Matthias Denecke

We implemented a spoken dialogue system architecture for rapid prototyping. The features that support rapid prototyping include a clear separation of generic dialogue processing algorithms from domain- and language-specific knowledge sources. In an experiment, we showed that six individuals could specify these domain- and language-specific knowledge …
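As a hedged illustration of such a separation (the class and method names below are assumptions, not the paper's API), a generic dialogue engine can be written against an abstract knowledge-source interface so that only the domain- and language-specific parts change per application:

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional

class KnowledgeSource(ABC):
    """Domain- and language-specific knowledge, kept out of the generic engine."""

    @abstractmethod
    def required_slots(self) -> List[str]: ...

    @abstractmethod
    def prompt_for(self, slot: str) -> str: ...

class GenericDialogueEngine:
    """Domain-independent dialogue logic: ask for whichever required slot is still open."""

    def __init__(self, knowledge: KnowledgeSource):
        self.knowledge = knowledge
        self.filled: Dict[str, str] = {}

    def next_prompt(self) -> Optional[str]:
        for slot in self.knowledge.required_slots():
            if slot not in self.filled:
                return self.knowledge.prompt_for(slot)
        return None  # all slots filled, task complete

# A hypothetical travel domain plugged into the unchanged generic engine.
class TravelKnowledge(KnowledgeSource):
    def required_slots(self) -> List[str]:
        return ["origin", "destination", "date"]

    def prompt_for(self, slot: str) -> str:
        return f"Please tell me the {slot} of your trip."

engine = GenericDialogueEngine(TravelKnowledge())
print(engine.next_prompt())  # Please tell me the origin of your trip.
```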
Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on the recognition and generation of emotions as well as emotion-driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input or in multi-modal environments. This paper …
As computational and communications systems become increasingly smaller, faster, more powerful, and more integrated, the goal of interactive, integrated meeting support rooms is slowly becoming reality. It is already possible, for instance, to rapidly locate task-related information during a meeting, filter it, and share it with remote users. Unfortunately, …
Learning dialogue strategies in spoken dialogue systems with reinforcement learning is a promising approach to acquiring robust strategies. However, the trade-off between available dialogue data and information in the dialogue state either forces information to be excluded from the state representation or requires large amounts of training …
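A minimal tabular Q-learning sketch (an illustrative assumption; the paper's state features, actions, and rewards differ) showing how the size of the dialogue state representation drives the amount of training data needed, since every added state feature multiplies the number of table entries to estimate:

```python
import random
from collections import defaultdict
from itertools import product

# Hypothetical, compact dialogue state: number of filled slots and whether the
# last user turn was confirmed. Richer state representations enlarge this set
# and therefore the amount of dialogue data required to learn a strategy.
STATES = list(product(range(4), [True, False]))   # (slots_filled, confirmed)
ACTIONS = ["ask_slot", "confirm", "present_result"]

q = defaultdict(float)            # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.95, 0.2

def choose_action(state):
    """Epsilon-greedy selection over the small action set."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup after observing a dialogue turn."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
```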
We introduce multidimensional feature structures as a generalization of standard slot/filler representations commonly employed in spoken language dialogue systems. Nodes in multidimensional feature structures contain an n-dimensional vector of values instead of one single filler element. The additional elements serve to represent, among other information, …
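A minimal sketch of the idea (the dimension names below are assumptions for illustration, not the paper's inventory): each slot maps to a node carrying a vector of values rather than a single filler.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class MultidimensionalNode:
    """A node whose value is a vector of dimensions rather than a single filler."""
    dimensions: Dict[str, Any] = field(default_factory=dict)

@dataclass
class FeatureStructure:
    """A slot/filler structure whose slots map to multidimensional nodes."""
    slots: Dict[str, MultidimensionalNode] = field(default_factory=dict)

    def set_slot(self, name: str, **dimensions: Any) -> None:
        self.slots[name] = MultidimensionalNode(dimensions=dict(dimensions))

    def filler(self, name: str) -> Optional[Any]:
        node = self.slots.get(name)
        return node.dimensions.get("filler") if node else None

# Usage: a destination slot carrying the filler plus ancillary information
# (confidence and source are hypothetical additional dimensions).
fs = FeatureStructure()
fs.set_slot("destination", filler="Heidelberg", confidence=0.82, source="ASR")
print(fs.filler("destination"))  # Heidelberg
```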
In this paper, we present our efforts towards developing an intelligent tourist system. The system is equipped with a unique combination of sensors and software. The hardware includes two computers, a GPS receiver, a lapel microphone plus an earphone, a video camera and a head-mounted display. This combination enables a multimodal interface to take advantage …
Much work has been done on dialogue modeling for human-computer interaction. Problems arise in situations where disambiguation of highly ambiguous database output is necessary. We propose to model the task rather than the dialogue itself. Furthermore, we propose underspecified representations to represent relevant data and to serve as a basis for generating …
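A minimal sketch (hypothetical attribute and row names; not the paper's formalism) of how an underspecified representation, i.e. a partially filled set of constraints, can stand for all still-compatible database rows at once, exposing which attribute remains open for disambiguation:

```python
from typing import Any, Dict, List

# A handful of illustrative rows standing in for ambiguous database output.
ROWS: List[Dict[str, Any]] = [
    {"name": "Hotel Krone",   "area": "center", "price": "moderate"},
    {"name": "Hotel Adler",   "area": "center", "price": "expensive"},
    {"name": "Pension Sonne", "area": "suburb", "price": "cheap"},
]

def matches(row: Dict[str, Any], constraints: Dict[str, Any]) -> bool:
    """A row is compatible if it agrees on every attribute that is specified."""
    return all(row.get(attr) == value for attr, value in constraints.items())

def candidates(constraints: Dict[str, Any]) -> List[Dict[str, Any]]:
    """Underspecified constraints denote the whole set of compatible rows."""
    return [row for row in ROWS if matches(row, constraints)]

# "area: center" is underspecified with respect to price, so two hotels remain;
# the open attribute indicates what still needs to be resolved with the user.
print(candidates({"area": "center"}))
```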
This paper describes our latest efforts in building a speech recognizer for operating a navigation system through speech instead of typed input. Compared to conventional speech recognition for navigation systems, where the input is usually restricted to a fixed set of keywords and keyword phrases, complete spontaneous sentences are allowed as speech input. …