Deep learning for detecting robotic grasps

@article{Lenz2015DeepLF,
  title={Deep learning for detecting robotic grasps},
  author={Ian Lenz and Honglak Lee and Ashutosh Saxena},
  journal={The International Journal of Robotics Research},
  year={2015},
  volume={34},
  pages={705--724}
}
We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. […] In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections.
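The cascade described above is, at its core, a simple control flow: score every candidate grasp with a cheap model, keep only the top detections, and re-evaluate those with a costlier model. The sketch below (Python/NumPy) illustrates that flow; the linear scorers, feature dimension, and top-k value are illustrative assumptions standing in for the paper's two deep networks, not the authors' implementation.

  import numpy as np

  rng = np.random.default_rng(0)
  N_FEATURES = 50
  W_SMALL = rng.standard_normal(N_FEATURES)   # stand-in for the small, fast network
  W_LARGE = rng.standard_normal(N_FEATURES)   # stand-in for the larger, slower network

  def small_net_score(feats):
      """First stage: cheap scorer used to prune unlikely candidate grasps."""
      return feats @ W_SMALL

  def large_net_score(feats):
      """Second stage: costlier scorer, run only on the surviving top-k."""
      return feats @ W_LARGE

  def cascaded_detection(feats, top_k=100):
      coarse = small_net_score(feats)        # score every candidate
      keep = np.argsort(coarse)[-top_k:]     # keep only the top-k detections
      fine = large_net_score(feats[keep])    # re-evaluate the survivors
      return keep[np.argmax(fine)]           # index of the best-scoring grasp

  # Usage: 10,000 candidate grasp rectangles, each with a 50-d feature vector.
  features = rng.standard_normal((10_000, N_FEATURES))
  print("best candidate:", cascaded_detection(features))

The design point is that the expensive model's cost is amortized: it sees only top_k candidates per image, so its runtime stays fixed regardless of how many candidates the first stage scores.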
A New Approach Based on Two-stream CNNs for Novel Objects Grasping in Clutter
TLDR
A deep learning approach is applied to solve the problem of grasping novel objects in clutter by proposing a ‘grasp circle’ method, parameterized by the size of the gripper, to find more potential grasps at each sampling point at lower cost.
Robot grasp detection using multimodal deep convolutional neural networks
TLDR
A novel robot grasp detection system that maps a pair of RGB-D images of novel objects to the best grasping pose of a robotic gripper, and presents a two-stage closed-loop grasping-candidate estimator to improve the search efficiency of grasping-candidate generation.
Dictionary Learning for Robotic Grasp Recognition and Detection
TLDR
This work proposes a dictionary learning and sparse representation (DLSR) framework for representing RGB-D images from 3D sensors in the context of determining good grasping locations, and shows a performance improvement over a current state-of-the-art convolutional neural network (CNN).
Sparse Dictionary Learning for Identifying Grasp Locations
TLDR
A dictionary learning and sparse representation (DLSR) framework for representing RGB-D images from 3D sensors in the context of identifying grasping locations, with a comparative study of several DLSR approach combinations for recognizing and detecting grasp candidates on the standard Cornell dataset.
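Since the two entries above rest on the same dictionary-learning/sparse-representation (DLSR) idea, a compact sketch may help: learn a dictionary over patch features, encode each patch as a sparse code, and classify graspability from the codes. The toy random data and scikit-learn model choices below are assumptions for illustration; the papers themselves use RGB-D patch features and evaluate on the Cornell dataset.

  import numpy as np
  from sklearn.decomposition import DictionaryLearning
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  X = rng.standard_normal((200, 64))   # 200 toy patch-feature vectors
  y = rng.integers(0, 2, 200)          # toy graspable / not-graspable labels

  # Learn a 16-atom dictionary and encode each patch as a sparse code.
  dl = DictionaryLearning(n_components=16, transform_algorithm="lasso_lars",
                          random_state=0)
  codes = dl.fit_transform(X)

  # Classify graspability from the sparse codes rather than raw features.
  clf = LogisticRegression(max_iter=1000).fit(codes, y)
  print("training accuracy:", clf.score(codes, y))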
Grasping of Unknown Objects Using Deep Convolutional Neural Networks Based on Depth Images
TLDR
The approach is able to handle full end-effector poses, and therefore approach directions other than the view direction of the camera, and is not limited to a certain grasping setup (e.g., a parallel-jaw gripper) by design.
DemoGrasp: Few-Shot Learning for Robotic Grasping with Human Demonstration
TLDR
This work proposes to teach a robot how to grasp an object with a simple and short human demonstration, transferring the a-priori knowledge of the relative pose between object and human hand, together with an estimate of the current object pose in the scene, into the grasping instructions needed by the robot.
A Cascaded Deep Learning Framework for Real-time and Robust Grasp Planning
TLDR
A cascaded deep learning framework consisting of a regression CNN and a refinement CNN, which ensures real-time performance while simultaneously acquiring adequate grasp candidates, making it especially appropriate for manipulation in unstructured environments.
Pick and Place Objects in a Cluttered Scene Using Deep Reinforcement Learning
TLDR
The key feature of the system is that it handles both the primitive actions of picking and placing objects within an explicit framework using raw RGB-D images; the contribution of this paper is to model such a complete manipulation system with reasonable computational complexity.
Fast Convolutional Neural Network for Real-Time Robotic Grasp Detection
TLDR
This work addresses the visual perception phase of the robotic grasping problem by obtaining a grasping rectangle that represents the position, orientation, and opening of the robot's parallel gripper just before it closes.

References

Showing 1–10 of 100 references.
Learning Grasp Strategies with Partial Shape Information
TLDR
An approach to grasping is proposed that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor.
Efficient grasping from RGBD images: Learning using a new rectangle representation
TLDR
This work proposes a new ‘grasping rectangle’ representation: an oriented rectangle in the image plane that captures the location and orientation of the grasp as well as the gripper opening width, and shows that the algorithm successfully picks up a variety of novel objects.
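As a concrete reading of this representation, the minimal, hypothetical encoding below captures an oriented grasp rectangle in the image plane; the field names and pixel units are illustrative assumptions, not the paper's code.

  from dataclasses import dataclass
  import math

  @dataclass
  class GraspRectangle:
      x: float       # rectangle centre, image x coordinate (pixels)
      y: float       # rectangle centre, image y coordinate (pixels)
      theta: float   # orientation relative to the image x-axis (radians)
      width: float   # gripper opening width (pixels)
      height: float  # extent of the gripper plates (pixels)

      def corners(self):
          """Four corner points of the oriented rectangle."""
          c, s = math.cos(self.theta), math.sin(self.theta)
          dx, dy = self.width / 2, self.height / 2
          return [(self.x + c * px - s * py, self.y + s * px + c * py)
                  for px, py in ((-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy))]

  # Usage: a grasp centred at (120, 80), rotated 30 degrees, 40 px opening.
  print(GraspRectangle(120.0, 80.0, math.pi / 6, 40.0, 15.0).corners())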
Robotic Grasping of Novel Objects using Vision
TLDR
This work considers the problem of grasping novel objects, specifically objects that are being seen for the first time through vision, and presents a learning algorithm that neither requires nor tries to build a 3-d model of the object.
Robotic Grasping of Novel Objects
TLDR
This work presents a learning algorithm that neither requires, nor tries to build, a 3-d model of the object, instead it predicts, directly as a function of the images, a point at which to grasp the object.
Physics-Based Grasp Planning Through Clutter
TLDR
Validation on a real robot shows that the grasp evaluation method accurately predicts the outcome of a grasp, and that the approach, in conjunction with state-of-the-art object recognition tools, is applicable in real-life scenes that are highly cluttered and constrained.
Learning to grasp objects with multiple contact points
TLDR
A method that accommodates grasps with multiple contact points and learns a ranking between grasp candidates, which proves highly effective compared to a state-of-the-art competitor.
Learning and Evaluation of the Approach Vector for Automatic Grasp Generation and Planning
  • S. Ekvall, D. Kragic · Proceedings 2007 IEEE International Conference on Robotics and Automation · 2007
In this paper, we address the problem of automatic grasp generation for robotic hands, where experience and shape primitives are used in synergy to provide a basis not only for grasp generation but …
A Framework for Push-Grasping in Clutter
TLDR
This work introduces a framework for planning in clutter that uses a library of actions inspired by human strategies; it succeeds where traditional grasp planners fail and works under high uncertainty by utilizing the funneling effect of pushing.
Visual grasp affordances from appearance-based cues
TLDR
A general framework for estimating grasp affordances from 2-D sources is developed, including local texture-like measures as well as object-category measures that capture previously learned grasp strategies.
Learning object-specific grasp affordance densities
TLDR
Results of learning grasp-hypothesis densities from both imitation and visual cues are shown, and empirical grasp densities learned from physical experience by a robot are presented.