This paper presents a concept of a smart working environment designed to allow true joint actions of humans and industrial robots. The proposed system perceives its environment with multiple sensor modalities and acts in it with an industrial robot manipulator to assemble capital goods together with a human worker. In combination with the reactive …
This paper presents a system for an additional input modality in a multimodal human-machine interaction scenario. In addition to other common input modalities, e.g., speech, we extract head gestures with image interpretation techniques based on machine learning algorithms to provide a nonverbal and familiar way of interacting with the system. Our experimental …
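The abstract does not give implementation details; as an illustration, below is a minimal sketch of how head-gesture extraction from estimated head-pose angles might look. The `HeadGestureDetector` class, the pitch/yaw thresholds, and the window size are assumptions for illustration, not taken from the paper.

```python
from collections import deque

class HeadGestureDetector:
    """Toy nod/shake detector over head-pose angles (illustrative sketch only).

    Classifies a sliding window of (yaw, pitch) samples, in degrees, as
    'nod', 'shake', or None, based on oscillation of the dominant axis.
    Thresholds and window size are assumed values, not from the paper.
    """

    def __init__(self, window=30, amplitude_deg=8.0, min_reversals=2):
        self.samples = deque(maxlen=window)     # recent (yaw, pitch) pairs
        self.amplitude_deg = amplitude_deg      # minimum peak-to-peak swing
        self.min_reversals = min_reversals      # direction changes required

    @staticmethod
    def _reversals(series, amplitude):
        """Count direction reversals if the overall swing exceeds amplitude."""
        if max(series) - min(series) < amplitude:
            return 0
        deltas = [b - a for a, b in zip(series, series[1:]) if b != a]
        return sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)

    def update(self, yaw_deg, pitch_deg):
        """Add one head-pose sample and return a detected gesture, if any."""
        self.samples.append((yaw_deg, pitch_deg))
        if len(self.samples) < self.samples.maxlen:
            return None
        yaws = [y for y, _ in self.samples]
        pitches = [p for _, p in self.samples]
        if self._reversals(pitches, self.amplitude_deg) >= self.min_reversals:
            return "nod"     # repeated up/down pitch oscillation
        if self._reversals(yaws, self.amplitude_deg) >= self.min_reversals:
            return "shake"   # repeated left/right yaw oscillation
        return None
```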
In this paper, we present a novel approach for multimodal interactions between humans and industrial robots. The application scenario is situated in a factory, where a human worker is supported by a robot to accomplish a given hybrid assembly task that covers manual and automated assembly steps. The robot acts as an assistant as well as a fully …
In this paper, we present our approach for a new contact-free Human-Machine Interface (cfHMI). This cfHMI is designed for controlling applications – instruction presentation, robot control – in the so-called "Cognitive Factory Scenario" introduced in [1]. However, the interface can be applied in other environments and areas of application as well. Due to …
The archetype of many novel research activities is called cognition. Although various definitions of a technical cognitive system exist, it is typically characterized by the (mental) process of knowing, including aspects such as awareness, perception, reasoning, and judgment. This especially includes the question of how to deal with previously …
This paper introduces a new visual tracking technique combining particle filtering and Dynamic Bayesian Networks. The particle filter is utilized to robustly track an object in a video sequence and to obtain sets of descriptive object features. Dynamic Bayesian Networks use these feature sequences to determine different motion patterns. A Graphical Model is …
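As a rough illustration of the tracking side of this pipeline, the sketch below implements a minimal bootstrap particle filter for 2D position tracking and derives a feature sequence (frame-to-frame velocities) of the kind that could be passed on to a motion-pattern model such as a DBN. All names and parameter values are assumptions, not from the paper.

```python
import numpy as np

def particle_filter_track(observations, n_particles=500, noise_std=5.0):
    """Minimal bootstrap particle filter for 2D position tracking (sketch).

    `observations` is an array of shape (T, 2) with noisy (x, y) detections.
    Returns the estimated trajectory plus per-frame velocities, which could
    serve as the feature sequence handed to a motion-pattern model.
    """
    rng = np.random.default_rng(0)
    particles = np.tile(observations[0], (n_particles, 1)).astype(float)
    weights = np.full(n_particles, 1.0 / n_particles)
    trajectory = []

    for z in observations:
        # Predict: random-walk motion model.
        particles += rng.normal(0.0, noise_std, particles.shape)
        # Update: weight particles by likelihood of the current observation.
        d2 = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / noise_std**2)
        weights /= weights.sum()
        # Estimate: weighted mean of the particle cloud.
        trajectory.append(weights @ particles)
        # Resample: multinomial resampling to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

    traj = np.asarray(trajectory)
    velocities = np.diff(traj, axis=0)   # simple descriptive feature sequence
    return traj, velocities
```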
In this paper, the development of a framework based on the Real-time Database (RTDB) for processing multimodal data is presented. This framework allows input and output modules to be readily integrated. Furthermore, the asynchronous data streams from different sources can be processed in an approximately synchronous manner. Depending on the included modules, …
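The abstract does not describe the RTDB interface itself; the following sketch only illustrates the general idea of giving downstream modules an approximately synchronous view of asynchronous, timestamped streams. The `StreamAligner` class and its staleness tolerance are assumptions, not the actual RTDB API.

```python
import bisect

class StreamAligner:
    """Illustrative sketch of approximately synchronous access to
    asynchronous, timestamped sensor streams (not the actual RTDB API).

    Each input module pushes (timestamp, data) samples; snapshot(t) returns,
    per stream, the newest sample not later than t, so downstream modules
    see a roughly time-consistent view of all modalities.
    """

    def __init__(self, max_age=0.2):
        self.times = {}          # stream name -> timestamps in arrival order
        self.values = {}         # stream name -> samples, same order
        self.max_age = max_age   # staleness tolerance in seconds (assumed)

    def push(self, stream, timestamp, data):
        """Append a sample; assumes each stream arrives in time order."""
        self.times.setdefault(stream, []).append(timestamp)
        self.values.setdefault(stream, []).append(data)

    def snapshot(self, t):
        """Return {stream: data} with the newest sample <= t per stream,
        skipping streams whose newest valid sample is older than max_age."""
        frame = {}
        for name, ts in self.times.items():
            i = bisect.bisect_right(ts, t)   # index of first timestamp > t
            if i > 0 and t - ts[i - 1] <= self.max_age:
                frame[name] = self.values[name][i - 1]
        return frame
```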
Everyday human communication relies on a large number of different communication mechanisms such as spoken language, facial expressions, body pose, and gestures, allowing humans to convey large amounts of information in a short time. In contrast, traditional human-machine communication is often unintuitive and requires specially trained personnel. In this paper, …
In everyday life, head gestures such as head shaking or nodding and hand gestures such as pointing form important aspects of human-human interaction. Therefore, recent research considers integrating these intuitive communication cues into technical systems to improve and ease human-computer interaction. In this paper, we present a vision-based …
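For the pointing-gesture part, one common way to turn tracked arm positions into a pointed-at location is to cast a ray through two body points and intersect it with the work surface. The sketch below assumes an elbow-hand ray model and a planar workbench; these choices and the function name are illustrative assumptions, not the method from the paper.

```python
import numpy as np

def pointing_target(elbow, hand, plane_point, plane_normal):
    """Illustrative estimate of the pointed-at location on a work surface.

    Casts a ray from the elbow through the hand (both 3D points, e.g. from
    a body-tracking module) and intersects it with a plane describing the
    workbench. Returns the 3D intersection point, or None if the ray is
    parallel to the plane or points away from it.
    """
    elbow, hand = np.asarray(elbow, float), np.asarray(hand, float)
    plane_point = np.asarray(plane_point, float)
    n = np.asarray(plane_normal, float)

    direction = hand - elbow                 # pointing ray direction
    denom = direction @ n
    if abs(denom) < 1e-9:
        return None                          # ray parallel to the surface
    s = ((plane_point - elbow) @ n) / denom  # ray parameter of intersection
    if s < 0:
        return None                          # surface lies behind the arm
    return elbow + s * direction             # 3D point on the workbench

# Example: pointing toward the table plane z = 0.
print(pointing_target(elbow=[0.0, 0.0, 1.2], hand=[0.2, 0.1, 1.0],
                      plane_point=[0, 0, 0], plane_normal=[0, 0, 1]))
```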