Learn More
We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc.); in practice, however, a robot facing a novel object will usually be able to perceive only the front …
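For reference, a standard textbook formalization of force closure (not necessarily the precise criterion this work has in mind): a grasp with primitive contact wrenches $w_1, \dots, w_k \in \mathbb{R}^6$ is force-closure when the origin of wrench space lies strictly inside their convex hull,

    \mathbf{0} \in \operatorname{int}\big(\operatorname{conv}\{w_1, \dots, w_k\}\big),

i.e. the contacts can generate a net wrench cancelling any external disturbance. Checking this requires the full set of contact points and surface normals, which is exactly what is unavailable when only the front of an object can be perceived.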
Object search is an integral part of daily life, and in the quest for competent mobile manipulation robots it is an unavoidable problem. Previous approaches focus on cases where objects are in unknown rooms but lying out in the open, which transforms object search into active visual search. However, in real life, objects may be in the back of cupboards …
We present our vision-based system for grasping novel objects in cluttered environments. Our system can be divided into four components: 1) decide where to grasp an object, 2) perceive obstacles, 3) plan an obstacle-free path, and 4) follow the path to grasp the object. While most prior work assumes availability of a detailed 3-d model of the environment, …
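A minimal sketch of the four-component structure described above; every function here is an illustrative placeholder (the actual grasp-point detector, obstacle perception, and planner in the paper are far more involved):

    import numpy as np

    def select_grasp_point(cloud):
        # 1) Decide where to grasp: placeholder picks the highest point.
        return cloud[np.argmax(cloud[:, 2])]

    def perceive_obstacles(cloud, resolution=0.05):
        # 2) Perceive obstacles: placeholder voxelizes the point cloud.
        return {tuple(np.floor(p / resolution).astype(int)) for p in cloud}

    def plan_path(start, goal, obstacles):
        # 3) Plan an obstacle-free path: placeholder straight-line path.
        return [start, goal]

    def execute(path):
        # 4) Follow the path to grasp: placeholder just reports waypoints.
        for waypoint in path:
            print("moving to", waypoint)

    cloud = np.random.rand(200, 3)
    goal = select_grasp_point(cloud)
    execute(plan_path(np.zeros(3), goal, perceive_obstacles(cloud)))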
In state estimation, we often want the maximum likelihood estimate of the current state. For the commonly used joint multivariate Gaussian distribution over the state space, this can be efficiently found using a Kalman filter. However, in complex environments the state space is often highly constrained. For example, for objects within a refrigerator, they …
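A minimal linear-Gaussian Kalman filter in Python, illustrating the point that under a joint Gaussian the maximum-likelihood state estimate is simply the filtered mean; the dynamics and measurement models are made up for the example, and no state constraints (e.g. objects not interpenetrating inside a refrigerator) are enforced:

    import numpy as np

    A = np.eye(2)                 # state transition model (illustrative)
    C = np.array([[1.0, 0.0]])    # we observe only the first state component
    Q = 0.01 * np.eye(2)          # process noise covariance
    R = np.array([[0.1]])         # measurement noise covariance

    mu, Sigma = np.zeros(2), np.eye(2)
    for z in [np.array([0.9]), np.array([1.1]), np.array([1.0])]:
        # Predict
        mu, Sigma = A @ mu, A @ Sigma @ A.T + Q
        # Update with measurement z
        K = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T + R)
        mu = mu + K @ (z - C @ mu)
        Sigma = (np.eye(2) - K @ C) @ Sigma

    print("ML estimate of current state (the filtered mean):", mu)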
Autonomous mobile-manipulation robots need to sense and interact with objects to accomplish high-level tasks such as preparing meals and searching for objects. To achieve such tasks, robots need semantic world models, defined as object-based representations of the world involving task-level attributes. In this work, we address the problem of estimating …
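As an illustration of what an object-based representation with task-level attributes could look like in code (the attribute names and fields here are hypothetical, not the paper's representation):

    from dataclasses import dataclass, field

    @dataclass
    class ObjectHypothesis:
        obj_type: str                                    # e.g. "mug", "cereal box"
        pose: tuple                                      # (x, y, z) position estimate
        attributes: dict = field(default_factory=dict)   # e.g. {"color": "red", "open": False}
        existence_prob: float = 1.0                      # confidence the object actually exists

    # A semantic world model as a set of object hypotheses.
    world_model = [
        ObjectHypothesis("mug", (0.4, 0.1, 0.8), {"color": "blue"}, 0.9),
        ObjectHypothesis("cereal box", (0.6, -0.2, 0.8), {"open": False}, 0.7),
    ]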
Spatial representations are fundamental to mobile robots operating in uncertain environments. Two frequently-used representations are occupancy grid maps, which only model metric information, and object-based world models, which only model object attributes. Many tasks represent space in just one of these two ways; however, because objects must be …
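A toy contrast between the two representations (the grid size, cell indices, and the way objects are stamped into the grid are all assumptions of the sketch):

    import numpy as np

    # Occupancy grid map: per-cell occupancy probabilities, metric information only.
    grid = np.full((100, 100), 0.5)    # 0.5 = unknown

    # Object-based world model: object attributes only, no notion of occupied space.
    objects = [
        {"type": "mug", "cell": (42, 17), "color": "blue"},
        {"type": "plate", "cell": (60, 33), "color": "white"},
    ]

    # One way the two interact: objects also occupy space, so mark their cells occupied.
    for obj in objects:
        grid[obj["cell"]] = 0.95

    print("occupied cells:", int((grid > 0.9).sum()))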
To accomplish tasks in human-centric indoor environments, agents need to represent and understand the world in terms of objects and their attributes. We consider how to acquire such a world model via noisy perception and maintain it over time, as objects are added, changed, and removed in the world. Previous work framed this as multiple-target tracking …
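One very simple instance of the multiple-target-tracking framing mentioned above is greedy nearest-neighbour data association; the gating threshold and the averaging update below are illustrative choices, not the method developed in the paper:

    import numpy as np

    def associate(tracks, detections, gate=0.2):
        # Match each noisy detection to the nearest tracked object within a gate;
        # unmatched detections spawn new object hypotheses.
        tracks = [np.asarray(t, dtype=float) for t in tracks]
        for z in detections:
            z = np.asarray(z, dtype=float)
            dists = [np.linalg.norm(t - z) for t in tracks]
            if dists and min(dists) < gate:
                i = int(np.argmin(dists))
                tracks[i] = 0.5 * (tracks[i] + z)   # update the matched track
            else:
                tracks.append(z)                    # add a new object hypothesis
        return tracks

    print(associate([[0.0, 0.0]], [[0.05, 0.02], [1.0, 1.0]]))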
Mobile-manipulation robots performing service tasks in human-centric indoor environments have long been a dream for developers of autonomous agents. Tasks such as cooking and cleaning require interaction with the environment, hence robots need to know relevant aspects of their spatial surroundings. However, unlike the structured settings that industrial …