We hypothesize that the performance of multimodal perceptive user interfaces during multi-party interaction may be improved by using the facial orientation of users as a cue for identifying the addressee of a user utterance. Multi-party interactions were collected in a user test where one participant would both interact with an information kiosk and negotiate …
Against the background of developments in the area of speech-based and multimodal interfaces, we present research on determining the addressee of an utterance in the context of mixed human-human and multimodal human-computer interaction. Working with data taken from realistic scenarios, we explore several features with respect to their relevance to …
In the MATIS project a multimodal system has been developed for train timetable information. The aim of the project was to obtain guidelines for designing multimodal interfaces for information systems. The MATIS system accepts input both in spoken and in graphical mode (no keyboard input) and provides feedback in the same two modes. The user can choose at …
In this paper the effect of prolonged use on interaction with a multimodal system is studied. The system accepts spoken input as well as pointing input and provides output both in speech and in graphics. We measured the usability of the system in a pre-test / post-test design and made a detailed analysis of the changes in interaction styles. The results …
The aim of the study presented in this paper was to compare the usability of a user-driven and a mixed-initiative user interface of a multimodal system for train timetable information. The evaluation shows that the effectiveness of the two interfaces does not differ significantly. However, as a result of the absence of spoken prompts and the obligatory use …