Efficient grasping from RGBD images: Learning using a new rectangle representation

@inproceedings{Jiang2011EfficientGF,
  title={Efficient grasping from RGBD images: Learning using a new rectangle representation},
  author={Yun Jiang and Stephen Moseson and Ashutosh Saxena},
  booktitle={2011 IEEE International Conference on Robotics and Automation},
  year={2011},
  pages={3304--3311}
}
Given an image and an aligned depth map of an object, our goal is to estimate the full 7-dimensional gripper configuration—its 3D location, 3D orientation and the gripper opening width. Recently, learning algorithms have been successfully applied to grasp novel objects—ones not seen by the robot before. While these approaches use low-dimensional representations such as a ‘grasping point’ or a ‘pair of points’ that are perhaps easier to learn, they only partly represent the gripper configuration… 
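The rectangle representation the abstract alludes to can be sketched as a small data structure: an oriented rectangle in the image plane whose center, in-plane angle, and side lengths stand in for part of the gripper configuration (opening width and plate extent). This is an illustrative sketch only; the class and field names are assumptions, not the paper's actual code.

```python
# Illustrative sketch of an oriented grasp rectangle; all names are
# hypothetical and chosen for clarity, not taken from the paper.
from dataclasses import dataclass
import math


@dataclass
class GraspRectangle:
    """Oriented rectangle in the image plane encoding a candidate grasp."""
    cx: float      # center column (pixels)
    cy: float      # center row (pixels)
    width: float   # gripper opening width (pixels)
    height: float  # gripper plate extent (pixels)
    theta: float   # in-plane rotation (radians)

    def corners(self):
        """Return the four corners of the rotated rectangle, in order."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        hw, hh = self.width / 2.0, self.height / 2.0
        return [
            (self.cx + c * dx - s * dy, self.cy + s * dx + c * dy)
            for dx, dy in [(-hw, -hh), (hw, -hh), (hw, hh), (-hw, hh)]
        ]


rect = GraspRectangle(cx=160.0, cy=120.0, width=40.0, height=20.0,
                      theta=math.pi / 4)
pts = rect.corners()
```

Combined with the aligned depth map at the rectangle's center and an approach direction, such a rectangle determines the remaining degrees of freedom of the 7-dimensional gripper configuration.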

Citations

Learning grasps with topographic features
We present a system for grasping unknown objects, even from piles or cluttered scenes, given a point cloud. Our method is based on the topography of a given scene and abstracts grasp-relevant …
RGB Matters: Learning 7-DoF Grasp Poses on Monocular RGBD Images
TLDR
RGBD-Grasp is proposed, a pipeline that solves the grasp detection problem by decoupling 7-DoF grasp detection into two sub-tasks where RGB and depth information are processed separately and is robust to depth sensor noise.
Primitive Shape Recognition for Object Grasping
TLDR
The outcomes support the hypothesis that explicitly encoding shape primitives within a grasping pipeline should boost grasping performance, including task-free and task-relevant grasp prediction.
Learning from Successes and Failures to Grasp Objects with a Vacuum Gripper
TLDR
This work employs a convolutional neural network that directly infers grasping points and approach angles from RGB-D images as a regression problem, using a self-supervised, data-driven learning approach to estimate suitable grasps for known and unknown objects.
Deep learning for detecting robotic grasps
TLDR
This work presents a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second, and shows that this method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
A Real-Time Robotic Grasping Approach With Oriented Anchor Box
TLDR
A vision-based, robust, real-time robotic grasping approach is proposed: a fully convolutional neural network uses oriented anchor boxes as detection priors, and the oriented-anchor-box mechanism regresses the grasp angle against a predefined prior rather than classifying or regressing it without any prior.
A New Approach Based on Two-stream CNNs for Novel Objects Grasping in Clutter
TLDR
A deep learning approach is applied to grasping novel objects in clutter: a "grasp circle" method, parameterized by the size of the gripper, finds more potential grasps at each sampling point at lower cost.
Grasping of Unknown Objects Using Deep Convolutional Neural Networks Based on Depth Images
TLDR
The approach handles full end-effector poses, and therefore approach directions other than the camera's view direction, and is not limited by design to a particular grasping setup (e.g., a parallel-jaw gripper).
Grasp Evaluation With Graspable Feature Matching
We present an algorithm that attempts to identify object locations suitable for grasping using a parallel gripper, locations which we refer to as "graspable features". As sensor input, we use point …
Robotic grasp detection based on image processing and random forest
TLDR
This paper proposes a very quick and accurate approach to detecting robotic grasps: a morphological image processing method generates a set of candidate grasp rectangles, which avoids searching for grasp rectangles globally.

References

Showing 1–10 of 43 references
Robotic Grasping of Novel Objects using Vision
TLDR
This work considers the problem of grasping novel objects, specifically objects that are being seen for the first time through vision, and presents a learning algorithm that neither requires nor tries to build a 3-d model of the object.
Monocular depth perception and robotic grasping of novel objects
TLDR
This work presents an algorithm to convert standard digital pictures into 3D models, and applies its methods to robotics applications: (a) obstacle avoidance for autonomously driving a small electric car, and (b) robot manipulation, where it develops vision-based learning algorithms for grasping novel objects.
Learning Grasp Strategies with Partial Shape Information
TLDR
An approach to grasping is proposed that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor.
Learning to Grasp Novel Objects Using Vision
TLDR
A learning algorithm is presented which predicts, as a function of the images, the position at which to grasp the object, without building or requiring a 3-d model of the object.
Robotic Grasping of Novel Objects
TLDR
This work presents a learning algorithm that neither requires, nor tries to build, a 3-d model of the object, instead it predicts, directly as a function of the images, a point at which to grasp the object.
Grasping novel objects with depth segmentation
TLDR
It is shown that the task of grasping novel objects and cleaning fairly cluttered tables with many novel objects can be significantly simplified by using segmentation, especially with depth information.
Learning to grasp objects with multiple contact points
TLDR
A method is presented that accommodates grasps with multiple contact points and learns a ranking between candidate grasps, proving highly effective compared to a state-of-the-art competitor.
Contact-reactive grasping of objects with partial shape information
TLDR
The results show that reactive grasping can correct for a fair amount of uncertainty in the measured position or shape of the objects, and that the grasp selection approach is successful in grasping objects with a variety of shapes.
Vision-based computation of three-finger grasps on unknown planar objects
TLDR
This paper presents an implemented vision-based strategy for computing three-finger stable grasps on unknown planar objects using an image of a real unknown curved object instead of synthetic polygonal models to ensure robustness and real-time performance under real-world conditions.
An SVM learning approach to robotic grasping
TLDR
This paper attempts to find optimal grasps of objects in a grasping simulator, combining numerical methods to recover parts of the grasp-quality surface for any robotic hand with contemporary machine learning methods to interpolate that surface and find the optimal grasp.