Andreas Richtsfeld

We present a framework for segmenting unknown objects in RGB-D images suitable for robotics tasks such as object search, grasping and manipulation. While handling single objects on a table is solved, handling complex scenes poses considerable problems due to clutter and occlusion. After pre-segmentation of the input image based on surface normals, surface …
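The normal-based pre-segmentation step mentioned above can be illustrated with a minimal region-growing sketch: 4-connected pixels are grouped whenever their unit surface normals agree within an angular threshold. This is a toy illustration, not the authors' implementation; all function names and the synthetic normal map are hypothetical.

```python
import numpy as np
from collections import deque

def presegment_by_normals(normals, angle_thresh_deg=10.0):
    """Group 4-connected pixels whose unit surface normals deviate by
    less than angle_thresh_deg, via greedy region growing."""
    h, w, _ = normals.shape
    labels = -np.ones((h, w), dtype=int)
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                            and np.dot(normals[y, x], normals[ny, nx]) > cos_thresh):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels

# Synthetic normal map: left half faces up, right half faces the camera.
normals = np.zeros((4, 6, 3))
normals[:, :3] = [0.0, 1.0, 0.0]
normals[:, 3:] = [0.0, 0.0, 1.0]
labels = presegment_by_normals(normals)
```

On this synthetic input the image splits into exactly two surface patches, one per half.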
This paper proposes an effective algorithm for recognizing objects and accurately estimating their 6DOF pose in scenes acquired by an RGB-D sensor. The proposed method is based on a combination of different recognition pipelines, each exploiting the data in a different manner and generating object hypotheses that are ultimately fused together in a Hypothesis …
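The core idea of fusing hypotheses from several pipelines and then verifying them can be sketched in a drastically simplified form: score each pose-aligned candidate by how much scene data it explains, and keep only the well-supported ones. This sketch assumes point-set hypotheses and a plain distance score; it is illustrative only, not the paper's verification stage.

```python
import numpy as np

def fitness(model_pts, scene_pts, inlier_dist=0.05):
    """Fraction of (already pose-aligned) model points that have a scene
    point within inlier_dist -- a crude explained-data score."""
    d = np.linalg.norm(model_pts[:, None, :] - scene_pts[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < inlier_dist))

def verify(hypotheses, scene_pts, accept_thresh=0.8):
    """Keep only object hypotheses that explain enough of the scene data."""
    return [h for h in hypotheses if fitness(h, scene_pts) >= accept_thresh]

# Toy scene: a planar grid of points; two candidate hypotheses.
scene = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)
good = scene[:10]                        # hypothesis aligned with the scene
bad = scene[:10] + [0.5, 0.5, 0.0]       # hypothesis off by half a grid cell
accepted = verify([good, bad], scene)
```

Only the aligned hypothesis survives verification; the misaligned one is rejected because none of its points lie near scene data.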
The task of searching for and grasping objects in cluttered scenes, typical of robotic applications in domestic environments, requires fast object detection and segmentation. Attentional mechanisms provide a means to detect and prioritize processing of objects of interest. In this work, we combine a saliency operator based on symmetry with a segmentation method …
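To make the idea of a symmetry-based saliency operator concrete, here is a toy 1-D version: a pixel scores high when intensities at mirrored offsets around it agree. This is a simplified illustration under that assumption, not the operator used in the work.

```python
import numpy as np

def symmetry_saliency(row, radius=3):
    """1-D symmetry operator: a pixel scores high when intensities at
    mirrored offsets around it agree."""
    sal = np.zeros(len(row))
    offs = np.arange(1, radius + 1)
    for x in range(radius, len(row) - radius):
        sal[x] = np.mean(1.0 - np.abs(row[x - offs] - row[x + offs]))
    return sal

# An intensity profile with a bump that is symmetric about index 3.
row = np.array([0.0, 0.2, 0.5, 1.0, 0.5, 0.2, 0.0, 0.0, 0.0])
sal = symmetry_saliency(row)
peak = int(np.argmax(sal))
```

The saliency peak lands on the bump's axis of symmetry, which is exactly the kind of location an attentional mechanism would prioritize.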
With the availability of cheap and powerful RGB-D sensors, interest in 3D point cloud based methods has drastically increased. One common prerequisite of these methods is to abstract away from raw point cloud data, e.g. to planar patches, to reduce the amount of data and to handle noise and clutter. We present a novel method to abstract RGB-D sensor data to …
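A standard way to abstract a raw cloud to a planar patch, and a useful baseline for the kind of method described here, is RANSAC plane fitting: repeatedly fit a plane to three random points and keep the plane with the most inliers. A minimal sketch (illustrative only; thresholds and names are assumptions):

```python
import numpy as np

def ransac_plane(points, iters=200, inlier_dist=0.02, seed=0):
    """RANSAC fit of the dominant plane n.p + d = 0 in a noisy cloud."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (near-collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < inlier_dist
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# Noisy points on z = 0 plus clutter floating well above it.
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.005, 200)]
clutter = rng.uniform(-1, 1, (40, 3)) + [0.0, 0.0, 1.5]
(n, d), inliers = ransac_plane(np.vstack([plane_pts, clutter]))
```

The recovered plane absorbs essentially all of the 200 noisy plane points while rejecting the clutter, which is the data-reduction effect the abstract refers to.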
Gestalt principles have been studied for about a century and were used in various computer vision approaches during the last decades, but became unpopular because the many heuristics employed proved inadequate for many real-world scenarios. We show a new methodology to learn relations inferred from Gestalt principles and an application to segment unknown …
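Learning such relations instead of hand-tuning heuristics can be illustrated with a tiny example: train a classifier on pairwise relation features (e.g. proximity, colour similarity) to predict whether two primitives belong together. The sketch below uses plain logistic regression on made-up toy data; it stands in for, but is not, the learning scheme of the paper.

```python
import numpy as np

def train_relation_classifier(feats, labels, lr=0.5, epochs=1000):
    """Logistic regression over pairwise relation features predicting
    'belongs to the same object'."""
    X = np.hstack([feats, np.ones((len(feats), 1))])   # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - labels) / len(X)
    return w

def predict(w, feats):
    X = np.hstack([feats, np.ones((len(feats), 1))])
    return (X @ w > 0).astype(int)

# Toy data: the relation holds when proximity and similarity are both high.
feats = np.array([[0.9, 0.8], [0.8, 0.9], [0.9, 0.9],
                  [0.1, 0.2], [0.2, 0.1], [0.1, 0.1]])
labels = np.array([1, 1, 1, 0, 0, 0])
w = train_relation_classifier(feats, labels)
preds = predict(w, feats)
```

The learned weights replace a hand-set threshold on each cue, which is the point of learning the relations rather than hard-coding them.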
We present a framework for detecting unknown 3D objects in RGB-D images and extracting representations suitable for robotics tasks such as grasping. We address cluttered scenes with stacked and jumbled objects, where simplistic plane pop-out methods are not sufficient. We start by estimating surface patches using a mixture of planes and NURBS (non-uniform rational B-splines) …
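For contrast, the "plane pop-out" baseline that the abstract argues is insufficient for stacked and jumbled objects looks roughly like this: discard points near a known support plane, then Euclidean-cluster whatever remains. A toy sketch assuming a horizontal table at z = 0 (all names hypothetical):

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius=0.05):
    """Single-linkage clustering: points within radius share a cluster."""
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            near = np.flatnonzero(
                (labels == -1)
                & (np.linalg.norm(points - points[i], axis=1) < radius))
            labels[near] = next_label
            queue.extend(near.tolist())
        next_label += 1
    return labels

def plane_pop_out(points, table_z=0.0, height=0.01, radius=0.05):
    """Drop points near the support plane, cluster whatever pops out."""
    above = points[points[:, 2] > table_z + height]
    return above, euclidean_cluster(above, radius)

# Table at z = 0 with two well-separated object blobs above it.
table = np.c_[np.random.uniform(0, 1, (100, 2)), np.zeros(100)]
obj_a = np.tile([0.2, 0.2, 0.05], (20, 1))
obj_b = np.tile([0.8, 0.8, 0.05], (20, 1))
above, labels = plane_pop_out(np.vstack([table, obj_a, obj_b]))
```

This works for isolated objects on a table, but clearly fails once objects touch, stack, or occlude each other, which is why the framework above moves to mixed plane/NURBS surface models.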
Semantic visual perception for knowledge acquisition plays an important role in human cognition, as well as in the learning process of any cognitive robot. In this paper, we present a visual information abstraction mechanism designed for continuously learning robotic systems. We generate spatial information in the scene by considering plane estimation and …
If a robot is to learn object affordances, the task is greatly simplified if visual data are abstracted from pixels into basic shapes, or Gestalts. This paper introduces a method of processing images to abstract basic features into higher-level Gestalts. Perceptual grouping is formulated as an incremental problem to avoid grouping parameters and to …
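One way to read "incremental grouping without grouping parameters" is as anytime agglomeration: always merge the currently closest pair of groups, with stopping governed by a processing budget instead of a distance threshold. The sketch below is one such interpretation, not the paper's algorithm; primitives are reduced to 2-D points for brevity.

```python
import numpy as np

def incremental_grouping(primitives, budget):
    """Anytime agglomerative grouping: repeatedly merge the closest pair
    of groups; stopping is governed by a processing budget rather than a
    distance threshold."""
    groups = [[p] for p in primitives]
    for _ in range(budget):
        if len(groups) < 2:
            break
        # Rank all pairwise merge hypotheses by centroid distance.
        _, i, j = min(
            ((np.linalg.norm(np.mean(a, axis=0) - np.mean(b, axis=0)), i, j)
             for i, a in enumerate(groups)
             for j, b in enumerate(groups) if i < j),
            key=lambda t: t[0])
        groups[i] = groups[i] + groups[j]
        del groups[j]
    return groups

# Two obvious pairs of primitives; two merge steps recover both pairs.
pts = [np.array([0.0, 0.0]), np.array([0.0, 1.0]),
       np.array([10.0, 0.0]), np.array([10.0, 1.0])]
groups = incremental_grouping(pts, budget=2)
```

Because merges are processed best-first, interrupting the loop at any budget still yields the most confident groupings found so far, which suits an anytime robotic perception pipeline.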
Structural scene understanding is an interconnected process wherein modules for object detection and supporting-structure detection need to cooperate in order to extract cross-correlated information, thereby exploiting the maximum possible information rendered by the scene data. Such an interlinked framework provides a holistic approach to scene …