Jürgen Gast

This paper presents the concept of a smart working environment designed to enable true joint actions of humans and industrial robots. The proposed system perceives its environment with multiple sensor modalities and acts in it with an industrial robot manipulator to assemble capital goods together with a human worker. In combination with the reactive behavior(More)
We present the MuDiS project. The main goal of MuDiS is to develop a Multimodal Dialogue System that can be adapted quickly to a wide range of scenarios. In this interdisciplinary project, we unite researchers from diverse areas, including computational linguistics, computer science, electrical engineering, and psychology. The different research(More)
This paper presents a new framework for multimodal data processing in real time. The framework comprises modules for different input and output signals and was designed for human-human and human-robot interaction scenarios. Single modules for the recording of selected channels, such as speech, gestures, or facial expressions, can be combined with different output options(More)
In this paper we present a framework for real-time processing of multimodal data, which can be used for on- and off-line processing of perceived data in interactions. We propose a framework based on the Real-time Database (RTDB). This framework allows easy integration of input and output modules, letting developers concentrate on the core functionality of(More)
In this paper, we present a novel approach to multimodal interaction between humans and industrial robots. The application scenario is situated in a factory, where a human worker is supported by a robot to accomplish a given hybrid assembly scenario that covers manual and automated assembly steps. The robot acts as an assistant as well as a fully(More)
This paper presents a system for an additional input modality in a multimodal human-machine interaction scenario. In addition to common input modalities such as speech, we extract head gestures with image interpretation techniques based on machine-learning algorithms, providing a nonverbal and familiar way of interacting with the system. Our experimental(More)
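The abstract above mentions head-gesture extraction via machine learning but does not show the method itself. As a minimal illustrative sketch only (all names and thresholds here are hypothetical, not the paper's implementation), nod and shake gestures can be separated by comparing motion energy in the pitch and yaw axes of an estimated head pose:

```python
from statistics import pvariance

def classify_head_gesture(pitch, yaw, ratio=2.0, min_motion=1.0):
    """Heuristic classifier: 'nod', 'shake', or 'none'.

    pitch, yaw: sequences of head angles (degrees) over a short time window.
    A nod oscillates mostly in pitch; a shake mostly in yaw.
    ratio and min_motion are illustrative thresholds, not tuned values.
    """
    vp, vy = pvariance(pitch), pvariance(yaw)
    if max(vp, vy) < min_motion:
        return "none"            # too little head motion overall
    if vp > ratio * vy:
        return "nod"             # motion dominated by the pitch axis
    if vy > ratio * vp:
        return "shake"           # motion dominated by the yaw axis
    return "none"                # ambiguous mixture of both axes

# Synthetic trajectories: oscillation concentrated in one axis
nod = classify_head_gesture([0, 8, -8, 8, -8, 0], [0, 0.5, -0.5, 0.5, -0.5, 0])
shake = classify_head_gesture([0, 0.5, -0.5, 0.5, -0.5, 0], [0, 10, -10, 10, -10, 0])
print(nod, shake)  # -> nod shake
```

A learned classifier over image features, as the abstract describes, would replace this hand-set variance rule, but the axis-dominance intuition is the same.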
In this paper we present our approach for a new contact-free Human-Machine Interface (cfHMI). This cfHMI is designed for controlling applications – instruction presentation, robot control – in the so-called "Cognitive Factory Scenario", introduced in [1]. However, the interface can be applied in other environments and areas of application as well. Due to its(More)
This paper introduces a new visual tracking technique that combines particle filtering with Dynamic Bayesian Networks. The particle filter is used to robustly track an object in a video sequence and to obtain sets of descriptive object features. Dynamic Bayesian Networks then use these feature sequences to determine different motion patterns. A Graphical Model is(More)
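To illustrate the filtering half of the technique described above, here is a generic 1-D bootstrap particle filter (a sketch under assumed noise models, not the paper's visual tracker): particles are diffused by process noise, reweighted by a Gaussian measurement likelihood, and resampled to avoid weight degeneracy.

```python
import math
import random

random.seed(0)

def resample(particles, weights):
    """Systematic resampling: draw particles proportionally to their weights."""
    n = len(particles)
    positions = [(i + random.random()) / n for i in range(n)]
    cumulative, total = [], 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    new_particles, i = [], 0
    for p in positions:
        while i < n - 1 and cumulative[i] < p:
            i += 1
        new_particles.append(particles[i])
    return new_particles

def likelihood(particle, measurement, sigma=1.0):
    """Gaussian measurement model (assumed for this sketch)."""
    return math.exp(-0.5 * ((particle - measurement) / sigma) ** 2)

def track(measurements, n_particles=500):
    particles = [random.uniform(-10, 10) for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # Predict: diffuse particles with process noise
        particles = [p + random.gauss(0, 0.5) for p in particles]
        # Update: weight each particle by its measurement likelihood
        weights = [likelihood(p, z) for p in particles]
        s = sum(weights)
        weights = [w / s for w in weights]
        # Estimate: posterior mean of the particle cloud
        estimates.append(sum(p * w for p, w in zip(particles, weights)))
        # Resample to concentrate particles in high-likelihood regions
        particles = resample(particles, weights)
    return estimates

true_pos = 3.0
measurements = [true_pos + random.gauss(0, 1.0) for _ in range(30)]
est = track(measurements)
print(round(est[-1], 2))
```

In the paper's setting, each particle would carry a multi-dimensional object state and the resulting feature sequences would be passed on to the Dynamic Bayesian Networks for motion-pattern recognition.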
Everyday human communication relies on a large number of different communication mechanisms, such as spoken language, facial expressions, body pose, and gestures, allowing humans to convey large amounts of information in a short time. In contrast, traditional human-machine communication is often unintuitive and requires specially trained personnel. In this paper,(More)