Emre Baseski

Skeletal trees are commonly used to express geometric properties of a shape. Accordingly, tree edit distance is used to compute a dissimilarity measure between two given shapes. We present a new tree-edit-based shape matching method that uses a recent coarse skeleton representation. The coarse skeleton representation allows us to represent both shapes …
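To make the underlying distance concrete, here is a minimal sketch of the classic tree edit distance recurrence on ordered, labeled trees (unit insert/delete costs). This is an illustrative, naively memoized version of the textbook algorithm, not the coarse-skeleton method of the paper; the tree encoding and cost function are assumptions for the example.

```python
from functools import lru_cache

def size(tree):
    label, children = tree
    return 1 + sum(size(c) for c in children)

def tree_edit_distance(a, b, label_cost=lambda x, y: 0 if x == y else 1):
    """Unit-cost edit distance between ordered, labeled trees.
    A tree is a pair (label, children) with children a tuple of trees."""
    @lru_cache(maxsize=None)
    def d(f, g):
        # f and g are forests (tuples of trees).
        if not f and not g:
            return 0
        if not f:
            return sum(size(t) for t in g)   # insert every remaining node
        if not g:
            return sum(size(t) for t in f)   # delete every remaining node
        (fl, fc), (gl, gc) = f[-1], g[-1]    # rightmost roots and children
        return min(
            d(f[:-1] + fc, g) + 1,                               # delete root
            d(f, g[:-1] + gc) + 1,                               # insert root
            d(fc, gc) + d(f[:-1], g[:-1]) + label_cost(fl, gl),  # match roots
        )
    return d((a,), (b,))

# Hypothetical skeletal trees: one limb removed costs one deletion.
t1 = ("torso", (("arm", ()), ("leg", ())))
t2 = ("torso", (("arm", ()),))
print(tree_edit_distance(t1, t2))  # -> 1
```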
We describe a bootstrapping cognitive robot system that, based mainly on pure exploration, acquires rich object representations and associated object-specific grasp affordances. Such bootstrapping becomes possible by combining innate competences and behaviors through which the system gradually enriches its internal representations and thereby develops an …
We describe an embodied cognitive system based on a three-level architecture that includes a sensorimotor layer, a mid-level layer that stores and reasons about object-action episodes, and a high-level symbolic planner that creates abstract action plans to be realised and possibly further specified by the lower levels. The system works in two modes, …
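The following sketch illustrates how such a three-level architecture can be wired together: the planner proposes abstract steps, the sensorimotor layer executes them, and the mid-level memory records object-action episodes for later reasoning. All class and method names are hypothetical; this is a structural illustration, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """Mid-level record of one object-action pair and its outcome."""
    obj: str
    action: str
    success: bool

@dataclass
class MidLevelMemory:
    episodes: list = field(default_factory=list)

    def store(self, episode):
        self.episodes.append(episode)

    def success_rate(self, obj, action):
        hits = [e.success for e in self.episodes
                if e.obj == obj and e.action == action]
        return sum(hits) / len(hits) if hits else None

class SymbolicPlanner:
    """High level: abstract plans, to be refined by the lower levels."""
    def plan(self, goal):
        return [("locate", goal), ("grasp", goal), ("place", goal)]

class SensorimotorLayer:
    """Low level: executes a grounded version of each abstract step."""
    def execute(self, step):
        action, obj = step
        print(f"executing {action} on {obj}")
        return True  # placeholder outcome

memory, planner, motor = MidLevelMemory(), SymbolicPlanner(), SensorimotorLayer()
for step in planner.plan("cup"):
    memory.store(Episode("cup", step[0], motor.execute(step)))
print(memory.success_rate("cup", "grasp"))
```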
We discuss the need for an elaborated intermediate stage bridging early vision and cognitive vision, which we call ‘Early Cognitive Vision’ (ECV). This stage provides semantically rich, disambiguated and largely task-independent scene representations that can be used in many contexts. In addition, the ECV stage is important for generalization processes across …
In this work, we address the problem of 3D circle detection in a hierarchical representation that contains 2D and 3D information in the form of multi-modal primitives and their perceptual organizations in terms of contours. Semantic reasoning on higher levels leads to hypotheses that are then verified on lower levels by feedback mechanisms. The effects …
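As a minimal illustration of 3D circle fitting and hypothesis verification, the sketch below fits a plane to 3D points via SVD and then performs an algebraic (Kåsa) least-squares circle fit in that plane; a hypothesis is kept only if enough points lie near the fitted circle. This is a generic fitting routine, not the paper's primitive-based pipeline, and the tolerance values are arbitrary.

```python
import numpy as np

def fit_circle_3d(points):
    """Least-squares circle in 3D: plane via SVD, then a 2D algebraic
    (Kåsa) circle fit inside that plane. points: (N, 3) array, N >= 3."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    u, v, normal = vt[0], vt[1], vt[2]   # normal = least-variance direction
    x, y = (pts - centroid) @ u, (pts - centroid) @ v
    # Circle (x-a)^2 + (y-b)^2 = r^2 rewritten as 2ax + 2by + c = x^2 + y^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = centroid + a * u + b * v
    radius = np.sqrt(c + a**2 + b**2)
    return center, radius, normal

def verify(points, center, radius, normal, tol=0.01):
    """Verification step: fraction of points whose in-plane distance to
    the hypothesized circle is within tol."""
    rel = np.asarray(points, dtype=float) - center
    in_plane = rel - np.outer(rel @ normal, normal)
    residual = np.abs(np.linalg.norm(in_plane, axis=1) - radius)
    return float(np.mean(residual < tol))
```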
In this work, we describe and evaluate a grasping mechanism that does not make use of any object-specific prior knowledge. The mechanism relies on second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information …
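The sketch below shows one plausible form such a second-order relation could take: a coplanarity test between two oriented 3D edge features, used to trigger a grasp hypothesis. The specific relation, thresholds, and grasp rule here are assumptions for illustration only, not the two relations used in the paper.

```python
import numpy as np

def coplanar(p1, d1, p2, d2, tol=0.05):
    """Second-order relation between two oriented 3D edge features
    (positions p, unit directions d): the scalar triple product of the
    two directions and the connecting vector vanishes when coplanar."""
    conn = p2 - p1
    triple = np.dot(np.cross(d1, d2), conn)
    return abs(triple) / (np.linalg.norm(conn) + 1e-9) < tol

def grasp_hypothesis(p1, d1, p2, d2):
    """Hypothetical rule: if the edge pair is coplanar, propose a grasp
    at the midpoint, approaching along the normal of the common plane."""
    if not coplanar(p1, d1, p2, d2):
        return None
    normal = np.cross(d1, d2)
    n = np.linalg.norm(normal)
    if n < 1e-6:                       # parallel edges: fall back to the
        normal = np.cross(d1, p2 - p1)  # plane spanned with the connector
        n = np.linalg.norm(normal)
    return {"position": (p1 + p2) / 2, "approach": normal / n}
```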
We develop means of learning and representing object grasp affordances probabilistically. By grasp affordance, we refer to an entity that is able to assess whether a given relative object-gripper configuration will yield a stable grasp. These affordances are represented with grasp densities, continuous probability density functions defined on the space of …
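A kernel density estimate gives a simple concrete picture of a grasp density: a continuous function that scores candidate gripper placements from observed successful grasps. The paper's densities are defined over full object-relative 6-DOF poses; the sketch below keeps only the 3D position part and uses synthetic data, purely for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical data: object-relative gripper positions of successful grasps.
rng = np.random.default_rng(0)
successful = np.vstack([
    rng.normal([0.0, 0.0, 0.1], 0.01, size=(40, 3)),   # grasps near the top
    rng.normal([0.05, 0.0, 0.0], 0.02, size=(20, 3)),  # grasps on one side
])

# Continuous "grasp density": a KDE over the recorded grasp positions.
density = gaussian_kde(successful.T)

candidate = np.array([[0.0, 0.0, 0.1]])
print("score near the top:", density(candidate.T))
print("score far away:   ", density(np.array([[0.3, 0.3, 0.3]]).T))
```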
In this work, we use learning to refine an initial grasping behavior based on 3D edge information. From a set of autonomously generated and evaluated grasps and the relations between the semi-global 3D edges, a prediction function is learned that computes the likelihood of success for a grasp, using either an offline or an online learning scheme. Both methods are …
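The offline/online distinction can be shown with two standard learners on the same data: a batch logistic regression fit once on all evaluated grasps, and an incrementally updated stochastic-gradient classifier. The feature vectors here are synthetic stand-ins for edge-relation descriptors; the paper's actual features and learners may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, SGDClassifier

rng = np.random.default_rng(1)
# Hypothetical descriptors of relations between 3D edges around a grasp,
# with success labels from autonomously executed grasps.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(int)

# Offline scheme: fit once on the full batch of evaluated grasps.
offline = LogisticRegression().fit(X, y)

# Online scheme: update the predictor one grasp experience at a time.
online = SGDClassifier(loss="log_loss")
for xi, yi in zip(X, y):
    online.partial_fit(xi.reshape(1, -1), [yi], classes=[0, 1])

candidate = rng.normal(size=(1, 4))
print("offline success likelihood:", offline.predict_proba(candidate)[0, 1])
print("online success likelihood: ", online.predict_proba(candidate)[0, 1])
```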
We describe a system for autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation, which are then augmented by continuous characterizations of …