This paper addresses the problem of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. Grasp affordances organize and store the knowledge an agent has about grasping an object, in order to facilitate reasoning about grasping solutions and their …
Skeletal trees are commonly used to express geometric properties of a shape; accordingly, tree edit distance can be used to compute the dissimilarity between two given shapes. We present a new tree-edit-based shape matching method that uses a recent coarse skeleton representation. The coarse skeleton representation allows us to represent both shapes …
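The tree edit distance mentioned above can be sketched with the classic recursive formulation over ordered forests (the paper's exact cost model and skeleton encoding are not given here; unit costs and tuple-encoded trees below are our illustrative assumptions):

```python
from functools import lru_cache

# A tree is a tuple (label, children), where children is a tuple of trees.
# Unit costs for insert/delete/relabel are an illustrative assumption.

def tree_size(t):
    label, children = t
    return 1 + sum(tree_size(c) for c in children)

@lru_cache(maxsize=None)
def forest_dist(f1, f2):
    """Edit distance between two ordered forests (tuples of trees)."""
    if not f1:
        return sum(tree_size(t) for t in f2)  # insert everything in f2
    if not f2:
        return sum(tree_size(t) for t in f1)  # delete everything in f1
    (l1, c1), rest1 = f1[0], f1[1:]
    (l2, c2), rest2 = f2[0], f2[1:]
    # delete the first root of f1; its children join the forest
    delete = 1 + forest_dist(c1 + rest1, f2)
    # insert the first root of f2
    insert = 1 + forest_dist(f1, c2 + rest2)
    # map the two roots onto each other (relabel if labels differ)
    match = (0 if l1 == l2 else 1) + forest_dist(c1, c2) + forest_dist(rest1, rest2)
    return min(delete, insert, match)

def tree_edit_distance(t1, t2):
    return forest_dist((t1,), (t2,))
```

This naive recursion is exponential in the worst case; for the small coarse skeleton trees the method relies on, memoization keeps it tractable, while larger trees would call for the Zhang–Shasha dynamic program.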
In this work, we describe and evaluate a grasping mechanism that does not make use of any object-specific prior knowledge. The mechanism relies on second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information …
We describe a bootstrapping cognitive robot system that, based mainly on pure exploration, acquires rich object representations and associated object-specific grasp affordances. Such bootstrapping becomes possible by combining innate competences and behaviors through which the system gradually enriches its internal representations, and thereby develops an …
We develop means of learning and representing object grasp affordances probabilistically. By grasp affordance, we refer to an entity that can assess whether a given relative object-gripper configuration will yield a stable grasp. These affordances are represented with grasp densities, continuous probability density functions defined on the space of …
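A grasp density of the kind described above can be sketched as a kernel density estimate over observed successful gripper configurations. The paper's densities live on the full 6-DOF pose space; the sketch below restricts to 3D gripper position, with an isotropic Gaussian kernel and bandwidth chosen purely for illustration:

```python
import math

def gaussian_kernel(x, mu, sigma):
    """Isotropic Gaussian kernel in len(x) dimensions."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, mu))
    norm = (2.0 * math.pi * sigma ** 2) ** (len(x) / 2.0)
    return math.exp(-d2 / (2.0 * sigma ** 2)) / norm

class GraspDensity:
    """Continuous density over gripper positions, estimated from
    successful grasp samples (position-only simplification)."""

    def __init__(self, samples, sigma=0.05):
        self.samples = samples  # positions of grasps that succeeded
        self.sigma = sigma      # kernel bandwidth (illustrative value)

    def pdf(self, x):
        return sum(gaussian_kernel(x, s, self.sigma)
                   for s in self.samples) / len(self.samples)

# Usage: the density is high near regions where grasps succeeded
density = GraspDensity([(0.0, 0.0, 0.0), (0.05, 0.0, 0.0)])
```

Querying `density.pdf` at a candidate configuration then gives a relative measure of how promising that grasp is, which is the kind of assessment the abstract attributes to an affordance.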
We describe an embodied cognitive system based on a three-level architecture comprising a sensorimotor layer, a mid-level layer that stores and reasons about object-action episodes, and a high-level symbolic planner that creates abstract action plans to be realised and possibly further specified by the lower levels. The system works in two modes, …
In this work, we refine an initial grasping behavior based on 3D edge information by learning. From a set of autonomously generated and evaluated grasps, together with the relations between the semi-global 3D edges, a prediction function is learned that computes the likelihood of a grasp succeeding, using either an offline or an online learning scheme. Both methods are …
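An online scheme of the kind mentioned above can be sketched as logistic regression updated one evaluated grasp at a time. The feature vector stands in for whatever edge-relation descriptors the method actually uses; the learning rate and the model class itself are illustrative assumptions, not the paper's:

```python
import math

class GraspSuccessPredictor:
    """Online logistic regression mapping a grasp feature vector
    (e.g. relations between 3D edges) to a success likelihood."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr  # illustrative learning rate

    def predict(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        """One SGD step; label is 1 for a successful grasp, 0 otherwise."""
        err = self.predict(x) - label
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Usage: feed each autonomously evaluated grasp as it arrives
model = GraspSuccessPredictor(n_features=1)
for _ in range(500):
    model.update([1.0], 1)   # grasps with this feature tended to succeed
    model.update([-1.0], 0)  # grasps with this feature tended to fail
```

The offline variant would simply fit the same model on the whole evaluated set in a batch instead of sample by sample.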
We describe a system for the autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation, which are then augmented by continuous characterizations of …
Keywords: Cognitive vision; Contour representation; 3D contours; Contour relations; Perceptual relations; 3D reasoning; Driver assistance; Grasping

In this work, we make use of 3D contours and the relations between them (namely, coplanarity, cocolority, distance and angle) for four different applications in the area of computer vision and vision-based …
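Two of the contour relations named above, angle and coplanarity, can be sketched for straight 3D segments with elementary vector algebra (real contours would be curves; the tolerance below is an illustrative choice):

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def angle(seg1, seg2):
    """Undirected angle in radians between two segments' directions."""
    d1, d2 = sub(seg1[1], seg1[0]), sub(seg2[1], seg2[0])
    c = abs(dot(d1, d2)) / (norm(d1) * norm(d2))
    return math.acos(min(1.0, c))

def coplanar(seg1, seg2, tol=1e-9):
    """Two segments are coplanar iff the vector connecting them lies in
    the plane of their directions (scalar triple product near zero)."""
    d1, d2 = sub(seg1[1], seg1[0]), sub(seg2[1], seg2[0])
    w = sub(seg2[0], seg1[0])
    return abs(dot(cross(d1, d2), w)) < tol
```

The distance relation is the closest approach between the two segments, and cocolority would compare the colors sampled along each contour; both follow the same pairwise-relation pattern.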
With the availability of high-resolution commercial satellite images, automated analysis and object extraction have become even more important topics in remote sensing. As shadows cover a significant portion of an image, they play an important role in automated analysis. While they degrade the performance of applications such as image registration, shadow is an …