Robert Neßelrath

We present two user interfaces, a multimodal dialogue system and a task-based calendar, which assist people with mild cognitive disabilities affecting their concentration and memory. A new middleware based on the open industrial standard ISO/IEC 24752 Universal Remote Console (URC) allows access to any networked service or appliance as well as …
This paper introduces a system for gesture-based interaction with smart environments. The framework we present connects gesture recognition results with control commands for appliances in a smart home, which are accessed through a middleware based on the ISO 24752 URC (Universal Remote Console) standard. Gesture recognition is realized by applying three …
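The coupling described here, recognition result in, appliance command out, can be sketched as a simple dispatch table. This is a minimal illustration only; the gesture names, appliance targets, and command strings are hypothetical and not part of the ISO 24752 standard.

```python
# Hypothetical sketch: routing a recognized gesture to a smart-home
# appliance command over a URC-style middleware. All identifiers
# (gesture labels, targets, commands) are illustrative.

# Gesture label -> (appliance target, socket-style command)
GESTURE_BINDINGS = {
    "swipe_left":  ("lamp_livingroom", "PowerOff"),
    "swipe_right": ("lamp_livingroom", "PowerOn"),
    "circle":      ("blinds_kitchen", "Toggle"),
}

def dispatch(gesture: str) -> str:
    """Translate a gesture recognition result into a control command."""
    if gesture not in GESTURE_BINDINGS:
        return "ignored: unknown gesture"
    target, command = GESTURE_BINDINGS[gesture]
    # A real system would forward this pair to the URC middleware
    # rather than returning a string.
    return f"{target} <- {command}"

print(dispatch("swipe_right"))  # lamp_livingroom <- PowerOn
```

Unrecognized gestures fall through to a no-op, which keeps accidental movements from triggering appliances.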
In this paper, we describe a mobile Business-to-Business (B2B) interaction system. The mobile device supports users in accessing a service platform. A multimodal dialogue system allows a business expert to intuitively search and browse for services in a real-world production pipeline. We implemented a distributed client-server dialogue application for …
This paper describes the design and the design process of orthographic feedback in a computer-assisted vocabulary learning (CAVL) application targeted at blind language learners. It discusses current research findings on vocabulary and spelling acquisition, as well as the special needs of blind computer users. CAVL applications often assume the user's …
This paper introduces a multimodal dialogue system that facilitates access to the digital home for people who suffer from cognitive disabilities. The user interface is implemented on a smartphone and allows interaction via speech and pointing gestures. A consistent control concept and a meaningful graphical design allow intuitive handling of the …
We aim to enhance the quality of life of elderly people and people with mild cognitive disabilities through technology. This includes increasing their autonomy, enhancing their security, and preventing isolation by keeping them socially connected. One way to do so is to enable them to live in their familiar surroundings for as long as possible. We present the successful …
A significant aim in developing multimodal HCI for the automotive domain is to keep the driver's distraction low. However, measuring cognitive load is difficult and inaccurate, so an approach that predicts the effect of dialogue and presentation strategies on cognitive load is promising. In this paper we discuss cognitive load in theory and in related work, and …
We present a framework for integrating dynamic gestures as a new input modality into arbitrary applications. The framework allows training new gestures and recognizing them as user input with the help of machine learning algorithms. The precision of the gesture recognition is evaluated with special attention to elderly users. We show how this functionality is …
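The train-then-recognize cycle mentioned here can be illustrated with the simplest possible learner, a one-nearest-neighbour classifier over fixed-length feature vectors. This is a sketch under assumed simplifications (the abstract does not name the algorithms used, and real gesture data would come from resampled sensor streams, not hand-written vectors).

```python
import math

# Hypothetical sketch of the train/recognize cycle for dynamic gestures.
# Each gesture sample is a fixed-length feature vector (e.g. resampled
# accelerometer readings); recognition picks the nearest stored template.

def train(samples):
    """samples: list of (label, feature_vector) pairs. A 1-NN 'model'
    is simply the stored training templates."""
    return list(samples)

def recognize(model, vector):
    """Return the label of the template closest to the input vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    label, _ = min(model, key=lambda sample: dist(sample[1], vector))
    return label

model = train([
    ("circle", [0.1, 0.9, 0.2, 0.8]),
    ("swipe",  [0.9, 0.1, 0.8, 0.2]),
])
print(recognize(model, [0.15, 0.85, 0.25, 0.75]))  # circle
```

Adding a new gesture is just appending more labelled templates, which mirrors the "training new gestures" step the framework exposes to end users.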
Intelligent Environments are highly interactive, integrating information and communication technology into the physical space. One goal is to provide user interfaces that adapt to the user and the environmental context, including the communication modalities. We present a new development platform for multimodal dialogue systems. A development …