Corpus ID: 222125281

Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds

Lirui Wang, Yu Xiang, Dieter Fox
6D robotic grasping beyond top-down bin-picking scenarios is a challenging task. Previous solutions based on 6D grasp synthesis with robot motion planning usually operate in an open-loop setting without considering the dynamics and contacts of objects, which makes them sensitive to grasp synthesis errors. In this work, we propose a novel method for learning closed-loop control policies for 6D robotic grasping using point clouds from an egocentric camera. We combine imitation learning and… 
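The abstract's combination of imitation learning with an actor-critic objective can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual implementation: the function name, the loss weights, and the use of a mean-squared behavior-cloning term and an auxiliary goal-prediction term are all hypothetical.

```python
import numpy as np

def combined_policy_loss(q_value, action, expert_action,
                         goal_pred, goal_true, w_bc=1.0, w_aux=0.1):
    """Hypothetical sketch of a combined objective: the actor maximizes
    the critic's Q-value (actor-critic term), imitates expert grasping
    actions (behavior-cloning term), and predicts the grasp goal as an
    auxiliary supervised task. Weights w_bc and w_aux are assumptions."""
    actor_loss = -float(q_value)                              # maximize Q
    bc_loss = float(np.mean((action - expert_action) ** 2))   # imitation
    aux_loss = float(np.mean((goal_pred - goal_true) ** 2))   # goal auxiliary
    return actor_loss + w_bc * bc_loss + w_aux * aux_loss
```

The auxiliary goal term gives the policy a dense supervised signal even when the sparse grasping reward provides little gradient.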


Hierarchical Policies for Cluttered-Scene Grasping with Latent Plans
A hierarchical framework learns collision-free, target-driven grasping from partial point cloud observations, using an embedding space to encode expert grasping plans during training and a variational autoencoder to sample diverse grasping trajectories at test time.
Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach
The proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel, overcoming limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times.
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
This work formulates grasp generation as sampling a set of grasps with a variational autoencoder and then assessing and refining the sampled grasps with a grasp evaluator model; both models are trained purely in simulation and work in the real world without any extra steps.
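The sample-then-evaluate pattern described above can be sketched as follows. The decoder and evaluator here are hypothetical stand-in stubs, not the paper's trained networks, and the latent dimensionality and sample count are arbitrary.

```python
import numpy as np

def decode_grasp(z):
    """Stand-in for a VAE decoder mapping a latent vector to a 6-DOF
    grasp (3D translation + 3 rotation parameters). Hypothetical stub."""
    return np.tanh(z[:6])

def evaluate_grasp(grasp):
    """Stand-in grasp evaluator returning a success score in (0, 1].
    Hypothetical stub."""
    return 1.0 / (1.0 + np.linalg.norm(grasp))

rng = np.random.default_rng(0)
latents = rng.standard_normal((32, 8))            # sample latent codes
candidates = [decode_grasp(z) for z in latents]   # decode to 6-DOF grasps
scores = [evaluate_grasp(g) for g in candidates]  # score each candidate
best_grasp = candidates[int(np.argmax(scores))]   # keep the top grasp
```

Sampling many candidates and keeping only the highest-scoring ones is what lets the evaluator compensate for errors in any single generated grasp.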
QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation
QT-Opt is introduced, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96% grasp success on unseen objects.
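QT-Opt selects actions at run time by optimizing its learned Q-function with the cross-entropy method. A minimal sketch of that inner loop follows; the population size, elite count, iteration count, and the toy quadratic Q-function are all illustrative assumptions, not QT-Opt's actual settings.

```python
import numpy as np

def cem_argmax_q(q_fn, dim, iters=5, pop=128, n_elite=8, seed=0):
    """Cross-entropy method sketch: iteratively fit a diagonal Gaussian
    to the top-scoring action samples under a Q-function. All parameter
    values here are illustrative."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([q_fn(a) for a in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]   # best actions
        mu = elite.mean(axis=0)                          # refit Gaussian
        sigma = elite.std(axis=0) + 1e-3                 # keep exploration
    return mu

# Toy Q-function peaked at action = (0.5, 0.5, 0.5).
q = lambda a: -np.sum((a - 0.5) ** 2)
action = cem_argmax_q(q, dim=3)
```

Because the optimizer only needs Q-function evaluations, the same loop works for any action parameterization, which is what makes the approach suitable for closed-loop control.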
Data-Efficient Learning for Sim-to-Real Robotic Grasping using Deep Point Cloud Prediction Networks
This paper proposes a method that learns to perform table-top instance grasping of a wide variety of objects while using no real-world grasping data, outperforming a baseline that uses 2.5D shape by 10%.
Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods
This paper proposes a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects, and finds that several simple methods are surprisingly strong competitors to popular algorithms such as double Q-learning.
Domain Randomization and Generative Models for Robotic Grasping
A novel data generation pipeline for training a deep neural network to perform grasp planning applies domain randomization to object synthesis, achieving a >90% success rate on previously unseen realistic objects at test time in simulation despite having been trained only on random objects.
Learning 6-DOF Grasping Interaction via Deep Geometry-Aware 3D Representations
A deep geometry-aware grasping network (DGGN) decomposes learning into two steps, constraining and regularizing grasp-interaction learning through 3D geometry prediction, and demonstrates that the model generalizes to novel viewpoints and object instances.
Grasp Pose Detection in Point Clouds
A series of robotic experiments is reported that averages a 93% end-to-end grasp success rate for novel objects presented in dense clutter, reflecting an improvement in grasp detection performance.
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
The approach achieves effective real-time control, successfully grasps novel objects, and corrects mistakes through continuous servoing; it also illustrates that data from different robots can be combined to learn more reliable and effective grasping.
Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8 seconds with a success rate of 93% on eight known objects with adversarial geometry.