Corpus ID: 222125281

Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds

@article{Wang2020GoalAuxiliaryAF,
  title={Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds},
  author={Lirui Wang and Yu Xiang and Dieter Fox},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.00824}
}
6D robotic grasping beyond top-down bin-picking scenarios is a challenging task. Previous solutions based on 6D grasp synthesis with robot motion planning usually operate in an open-loop setting without considering the dynamics and contacts of objects, which makes them sensitive to grasp synthesis errors. In this work, we propose a novel method for learning closed-loop control policies for 6D robotic grasping using point clouds from an egocentric camera. We combine imitation learning and…
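
The combination of imitation and reinforcement objectives can be pictured with a small PyTorch sketch. Everything here is an assumption for illustration (the network shapes, the hypothetical PointFeatureActor and Critic names, and the loss weighting are not from the paper), but it shows the general recipe: a behavior-cloning term anchors a DDPG-style actor-critic loss for a closed-loop policy driven by point-cloud features.

```python
# Minimal sketch (not the paper's code): a behavior-cloning loss combined
# with a DDPG-style actor loss for closed-loop 6D grasping.
import torch
import torch.nn as nn

class PointFeatureActor(nn.Module):
    """Maps a point-cloud feature vector to a bounded 6D end-effector motion."""
    def __init__(self, feat_dim=1024, action_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # translation + rotation deltas
        )

    def forward(self, feat):
        return self.net(feat)

class Critic(nn.Module):
    """Q(s, a): scores a state-feature / action pair."""
    def __init__(self, feat_dim=1024, action_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, feat, action):
        return self.net(torch.cat([feat, action], dim=-1))

actor, critic = PointFeatureActor(), Critic()
feat = torch.randn(32, 1024)          # stand-in for PointNet-style features
expert_action = torch.randn(32, 6)    # actions from expert demonstrations

# Imitation term: match expert actions on demonstration states.
bc_loss = nn.functional.mse_loss(actor(feat), expert_action)
# RL term: push the actor toward actions the critic scores highly.
rl_loss = -critic(feat, actor(feat)).mean()
loss = bc_loss + 0.1 * rl_loss        # weighting here is an arbitrary choice
loss.backward()
```

Training on both terms lets demonstrations keep the policy in a sensible region while the critic's gradient continues to improve it beyond what the expert showed.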

Citations

Hierarchical Policies for Cluttered-Scene Grasping with Latent Plans
TLDR
This work proposes a new method that samples and selects grasping plans in a latent space: it learns an embedding space to represent expert grasping plans and a variational autoencoder to sample diverse latent plans at inference time.
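
A minimal sketch of the sample-then-select idea, with all dimensions and module names (decoder, scorer) invented for illustration: decode many latent samples into candidate plans and keep the one a learned selector scores highest.

```python
# Illustrative sketch (assumptions, not the cited paper's code): sampling
# grasping plans from a learned latent space and selecting the best one.
import torch
import torch.nn as nn

latent_dim, plan_dim = 32, 6 * 10      # e.g. 10 waypoints of 6D actions (assumed)
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, plan_dim))
scorer = nn.Sequential(nn.Linear(plan_dim, 64), nn.ReLU(), nn.Linear(64, 1))

z = torch.randn(100, latent_dim)       # diverse latent samples at inference time
plans = decoder(z)                     # decode each latent into a candidate plan
best = plans[scorer(plans).argmax()]   # keep the plan the selector ranks highest
```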

References

Showing 1-10 of 72 references
Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach
TLDR
The proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel, and overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times.
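
A rough sketch of per-pixel grasp prediction in this spirit (the tiny backbone and output layout below are assumptions, not the released GG-CNN): a single forward pass yields dense quality, angle, and width maps, so no discrete candidate sampling is required.

```python
# Sketch of dense, per-pixel grasp prediction with placeholder layers.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
head = nn.Conv2d(8, 3, kernel_size=1)          # 3 maps: quality, angle, width

depth = torch.randn(1, 1, 300, 300)            # a single depth image
out = head(backbone(depth))                    # one pass -> (1, 3, 300, 300)
quality, angle, width = out[:, 0], out[:, 1], out[:, 2]
iy, ix = divmod(quality.argmax().item(), quality.shape[-1])
# Execute the grasp at pixel (iy, ix) with angle[0, iy, ix] and width[0, iy, ix].
```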
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
TLDR
This work formulates grasp generation as sampling a set of grasps with a variational autoencoder, then assesses and refines the sampled grasps with a grasp evaluator model; both models are trained purely in simulation and work in the real world without any extra steps.
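
The assess-and-refine step can be sketched as gradient ascent on a learned evaluator's score. This simplification drops the point-cloud conditioning and uses made-up shapes (a 7-D position-plus-quaternion grasp), so it illustrates the idea rather than the paper's exact procedure.

```python
# Sketch: nudge sampled 6-DOF grasps uphill on an evaluator's success score.
import torch
import torch.nn as nn

evaluator = nn.Sequential(nn.Linear(7, 64), nn.ReLU(), nn.Linear(64, 1))
grasps = torch.randn(64, 7, requires_grad=True)   # 3D position + quaternion

for _ in range(10):                               # a few refinement steps
    score = evaluator(grasps).sum()               # total predicted success
    (grad,) = torch.autograd.grad(score, grasps)
    grasps = (grasps + 0.01 * grad).detach().requires_grad_(True)
```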
QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation
TLDR
QT-Opt is introduced, a scalable self-supervised vision-based reinforcement learning framework that leverages over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters, achieving closed-loop, real-world grasping that generalizes to a 96% grasp success rate on unseen objects.
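
QT-Opt selects continuous actions by optimizing the Q-function with the cross-entropy method (CEM) rather than with an actor network. The sketch below uses placeholder dimensions and a dummy Q-network: iteratively refit a Gaussian to the actions the Q-function scores highest.

```python
# Sketch of CEM action selection against a Q-function (shapes assumed).
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(16 + 4, 64), nn.ReLU(), nn.Linear(64, 1))
state = torch.randn(16)
mu, std = torch.zeros(4), torch.ones(4)

for _ in range(3):                                   # CEM iterations
    actions = mu + std * torch.randn(64, 4)          # sample candidate actions
    q = q_net(torch.cat([state.expand(64, -1), actions], dim=-1)).squeeze(-1)
    elite = actions[q.topk(6).indices]               # keep the top-scoring elites
    mu, std = elite.mean(0), elite.std(0) + 1e-6     # refit the sampling Gaussian
```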
Data-Efficient Learning for Sim-to-Real Robotic Grasping using Deep Point Cloud Prediction Networks
TLDR
This paper proposes a method that learns to perform table-top instance grasping of a wide variety of objects while using no real-world grasping data, outperforming a baseline that uses 2.5D shape by 10%.
Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods
TLDR
This paper proposes a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects; the results indicate that several simple methods are surprisingly strong competitors to popular algorithms such as double Q-learning.
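
Double Q-learning, mentioned as a baseline above, decouples action selection from action evaluation to curb overestimation. A discrete-action sketch of the target computation (all tensors here are dummies):

```python
# Sketch: one network selects the next action, the other evaluates it.
import torch

q_online = torch.randn(32, 5)   # Q(s', a) from the online network
q_target = torch.randn(32, 5)   # Q(s', a) from the target network
reward, gamma = torch.randn(32), 0.99

a_star = q_online.argmax(dim=1)                              # select with online net
target = reward + gamma * q_target.gather(1, a_star[:, None]).squeeze(1)
```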
Domain Randomization and Generative Models for Robotic Grasping
TLDR
A novel data generation pipeline for training a deep neural network to perform grasp planning applies the idea of domain randomization to object synthesis, achieving a >90% success rate on previously unseen realistic objects at test time in simulation despite having been trained only on random objects.
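
A toy sketch of domain randomization for object synthesis (every range and attribute below is invented): each training scene draws random shapes, scales, and appearances so the grasp planner cannot overfit to a fixed object set.

```python
# Sketch: randomize object properties per training scene.
import random

def random_object():
    return {
        "primitive": random.choice(["box", "cylinder", "composite"]),
        "scale": [random.uniform(0.03, 0.15) for _ in range(3)],  # meters
        "mass": random.uniform(0.05, 1.0),                        # kg
        "texture_seed": random.randrange(10**6),
    }

scene = [random_object() for _ in range(random.randint(3, 10))]
```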
Learning 6-DOF Grasping Interaction via Deep Geometry-Aware 3D Representations
TLDR
A deep geometry-aware grasping network (DGGN) decomposes the learning into two steps, constraining and regularizing grasping interaction learning through 3D geometry prediction, and the model is shown to generalize to novel viewpoints and object instances.
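
The two-step decomposition can be pictured as a shared encoder whose representation is regularized by a shape-prediction head before a grasp-outcome head consumes it. Layer sizes, the occupancy-grid target, and the module names below are assumptions for illustration.

```python
# Sketch: geometry prediction as an auxiliary task for grasp-outcome learning.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU())
shape_head = nn.Linear(256, 32**3)      # step 1: predict a coarse occupancy grid
grasp_head = nn.Linear(256 + 6, 1)      # step 2: predict a 6-DOF grasp outcome

obs, grasp = torch.randn(8, 512), torch.randn(8, 6)
h = encoder(obs)
occ_target = torch.randint(0, 2, (8, 32**3)).float()   # stand-in occupancy labels
shape_loss = nn.functional.binary_cross_entropy_with_logits(shape_head(h), occ_target)
outcome = grasp_head(torch.cat([h, grasp], dim=-1))    # grasp success logit
```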
Grasp Pose Detection in Point Clouds
TLDR
A series of robotic experiments is reported that averages a 93% end-to-end grasp success rate for novel objects presented in dense clutter, reflecting improved grasp detection performance.
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
TLDR
The approach achieves effective real-time control, successfully grasps novel objects, and corrects mistakes by continuous servoing; it also illustrates that data from different robots can be combined to learn more reliable and effective grasping.
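
Continuous servoing amounts to re-selecting the motion against the freshest image at every control step instead of committing to one open-loop plan. The sketch below uses a placeholder success predictor and a hypothetical robot.apply call.

```python
# Sketch of a closed-loop servoing loop with a dummy success predictor.
import torch
import torch.nn as nn

success_net = nn.Sequential(nn.Linear(64 + 3, 32), nn.ReLU(), nn.Linear(32, 1))

def best_motion(image_feat, n=32):
    """Pick the candidate end-effector motion the predictor scores highest."""
    candidates = torch.randn(n, 3)                       # sampled 3D displacements
    scores = success_net(torch.cat([image_feat.expand(n, -1), candidates], -1))
    return candidates[scores.argmax()]

for step in range(20):                                   # servoing loop
    image_feat = torch.randn(64)                         # stand-in camera features
    motion = best_motion(image_feat)                     # re-plan from fresh feedback
    # robot.apply(motion)  # hypothetical call; mistakes get corrected next step
```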
Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
TLDR
Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained only on synthetic data from Dex-Net 2.0 can plan grasps in 0.8 s with a 93% success rate on eight known objects with adversarial geometry.
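
Planning with a grasp-quality CNN can be sketched as ranking depth-image crops centered on candidate grasps (the tiny network below is a stand-in, not the released GQ-CNN):

```python
# Sketch: score aligned depth crops and execute the top-ranked grasp.
import torch
import torch.nn as nn

gq_cnn = nn.Sequential(
    nn.Conv2d(1, 8, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid(),   # robustness score in [0, 1]
)
crops = torch.randn(50, 1, 32, 32)                 # one aligned crop per candidate
best_idx = gq_cnn(crops).argmax().item()           # plan the top-ranked grasp
```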