We describe a system for visual interaction developed for humanoid robots. It enables the robot to interact with its environment using smooth whole-body motion control driven by stabilized visual targets. Targets are defined as visually extracted "proto-objects" and behavior-relevant object hypotheses and are stabilized by means of a short-term sensory …
We propose a simple but efficient control strategy for manipulating objects whose shape, weight, and friction properties are unknown, information that classical offline grasping and manipulation methods presuppose. Instead, the proposed control strategy employs estimated contact point locations, which can be obtained from modern tactile sensors with …
We introduce ALIS 2, the latest instance of our autonomous learning and interaction system. It comprises different sensing modalities for visual (depth blobs, planar surfaces, motion) and auditory (speech, localization) signals, as well as self-collision-free behavior generation on the robot ASIMO. The system design emphasizes the split into a completely autonomous …
The contribution of this paper is twofold. First, we present a new conceptual framework for modeling incremental hierarchical behavior control systems for humanoids; the biological motivation and the key elements are discussed. Second, we show our current instance of such a behavior control system, called ALIS. It is designed according to the concepts …
Man-made real-world environments are dominated by planar surfaces, many of which constitute behavior-relevant entities. Thus, the ability to perceive planar surfaces is vital for any embodied system operating in such environments, be it human or robotic. In this paper, we present an architecture for the detection and estimation of planar surfaces in the scene …
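The abstract above does not specify the detection method, but a common baseline for extracting a dominant planar surface from a 3D point cloud is RANSAC plane fitting followed by a least-squares refinement. The sketch below is illustrative only; the function name, parameters, and approach are assumptions, not the paper's architecture.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.01, seed=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud with RANSAC.

    Returns (unit normal, offset d, boolean inlier mask). Illustrative
    baseline, not the method from the paper.
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_count = None, 0
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:  # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ p0
        # Count points within `threshold` of the candidate plane.
        inliers = np.abs(points @ normal + d) < threshold
        count = int(inliers.sum())
        if count > best_count:
            best_count, best_inliers = count, inliers
    # Refine on the inlier set: the plane normal is the direction of
    # least variance, i.e. the last right-singular vector of the
    # centered inlier points.
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid, best_inliers
```

The sample-score loop makes the fit robust to clutter (points not on the plane), while the SVD step recovers an accurate normal from the consensus set.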
A major step towards intelligent vehicles lies in the acquisition of an environmental representation general enough to serve as the basis for a multitude of different assistance-relevant tasks. This acquisition process must reliably cope with the variety of environmental changes inherent to traffic environments. As a step towards this goal, we …
A stable perception of the environment is a crucial prerequisite for researching the learning of semantics from human-robot interaction, and also for the generation of behavior relying on the robot's perception. In this paper, we propose several contributions to this research field. To organize visual perception, the concept of proto-objects is used for the …