Corpus ID: 220302248

Towards Generalization and Data Efficient Learning of Deep Robotic Grasping

@article{Chen2020TowardsGA,
  title={Towards Generalization and Data Efficient Learning of Deep Robotic Grasping},
  author={Zhixin Chen and Mengxiang Lin and Zhixin Jia and Shibo Jian},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.00982}
}
Deep reinforcement learning (DRL) has been proven to be a powerful paradigm for learning complex control policies autonomously. Numerous recent applications of DRL in robotic grasping have successfully trained robotic agents end-to-end, mapping visual inputs directly to control instructions, but the amount of training data required may hinder these applications in practice. In this paper, we propose a DRL-based robotic visual grasping framework, in which visual perception and control policy… 
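Although the abstract is truncated, the title and opening sentences suggest a framework that decouples visual perception from the control policy to improve data efficiency. The sketch below only illustrates that general decoupled design, not the paper's actual architecture; all module names, layer sizes, and action dimensions are assumptions.

```python
# Illustrative sketch (not the paper's architecture): a visual encoder
# compresses the camera image into a low-dimensional state, and a separate
# RL policy maps that state to gripper commands.
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """CNN that maps an RGB image to a compact state vector."""
    def __init__(self, state_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, state_dim)

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))

class GraspPolicy(nn.Module):
    """MLP policy: low-dimensional state -> continuous gripper action."""
    def __init__(self, state_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

encoder, policy = VisualEncoder(), GraspPolicy()
action = policy(encoder(torch.zeros(1, 3, 64, 64)))  # e.g. (dx, dy, dz, gripper)
```

The appeal of such a split is that the RL policy operates on a low-dimensional state, so fewer environment interactions are spent learning perception from reward alone.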


Robotic Grasping from Classical to Modern: A Survey
TLDR
This paper surveys the advances of robotic grasping, starting from the classical formulations and solutions to the modern ones, and discusses the open problems and the future research directions that may be important for the human-level robustness, autonomy, and intelligence of robots.

References

SHOWING 1-10 OF 31 REFERENCES
Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates
TLDR
It is demonstrated that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots.
End-to-End Training of Deep Visuomotor Policies
TLDR
This paper develops a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors, trained using a partially observed guided policy search method, with supervision provided by a simple trajectory-centric reinforcement learning method.
Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost
TLDR
It is shown that contact-rich manipulation behavior with multi-fingered hands can be learned by directly training with model-free deep RL algorithms in the real world, with minimal additional assumptions and without the aid of simulation, indicating that direct deep RL training in the real world is a viable and practical alternative to simulation and model-based control.
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
TLDR
The approach achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing, and illustrates that data from different robots can be combined to learn more reliable and effective grasping.
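The continuous-servoing idea summarized above can be pictured as a simple loop: a learned critic scores candidate motor commands for the current camera image, and the best-scoring command is executed and re-evaluated. The sketch below is a hedged illustration using the cross-entropy method for the inner search; `predict_success` is a placeholder stub and all parameters are assumptions, not the paper's trained network.

```python
# Hedged sketch of learned visual servoing: pick the motor command that a
# learned critic g(image, command) scores highest, via the cross-entropy method.
import numpy as np

def predict_success(image, command):
    # Placeholder for a trained grasp-success CNN; returns a score in [0, 1].
    return float(np.exp(-np.linalg.norm(command)))  # dummy scoring

def cem_select(image, dim=3, iters=3, samples=64, elites=6):
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        cands = np.random.randn(samples, dim) * std + mean
        scores = np.array([predict_success(image, c) for c in cands])
        best = cands[np.argsort(scores)[-elites:]]          # keep the elites
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mean  # commanded end-effector displacement

command = cem_select(image=None)
```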
Deep Object-Centric Representations for Generalizable Robot Learning
TLDR
This paper proposes using an object-centric prior and a semantic feature space for the perception system of a learned policy; this representation can be used to determine relevant objects from a few trajectories or demonstrations and then immediately incorporate those objects into the learned policy.
Deep spatial autoencoders for visuomotor learning
TLDR
This work presents an approach that automates state-space construction by learning a state representation directly from camera images by using a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects.
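The feature points mentioned above are typically obtained with a spatial soft-argmax: each feature map is normalized into a probability distribution over pixels and reduced to its expected (x, y) coordinate, giving a compact state for control. A minimal NumPy sketch of that operation, with illustrative shapes:

```python
# Spatial soft-argmax: turn each feature map into one (x, y) "feature point".
import numpy as np

def spatial_soft_argmax(feature_maps):
    """feature_maps: (C, H, W) -> (C, 2) expected pixel coordinates."""
    C, H, W = feature_maps.shape
    flat = feature_maps.reshape(C, -1)
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))   # per-map softmax
    probs /= probs.sum(axis=1, keepdims=True)
    ys, xs = np.mgrid[0:H, 0:W]
    ex = probs @ xs.ravel()                                   # expected x
    ey = probs @ ys.ravel()                                   # expected y
    return np.stack([ex, ey], axis=1)

points = spatial_soft_argmax(np.random.randn(16, 32, 32))     # 16 feature points
```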
Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours
Lerrel Pinto, A. Gupta · 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016
TLDR
This paper takes the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts, which allows us to train a Convolutional Neural Network for the task of predicting grasp locations without severe overfitting.
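The grasp-prediction task described above can be cast as scoring a small set of discretized gripper angles for an image patch centred on a candidate grasp point. The sketch below follows that patch-plus-angle-bins formulation with an 18-bin discretization; the layer sizes and class names are illustrative, not the paper's exact architecture.

```python
# Hedged sketch: a CNN scores 18 discretized grasp angles for an image patch.
import torch
import torch.nn as nn

class GraspAngleNet(nn.Module):
    def __init__(self, n_angles=18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_angles)  # one success logit per angle bin

    def forward(self, patch):
        return self.head(self.features(patch).flatten(1))

logits = GraspAngleNet()(torch.zeros(1, 3, 227, 227))
best_angle_bin = logits.argmax(dim=1)  # grasp angle = bin * (180 / 18) degrees
```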
Generalizing Skills with Semi-Supervised Reinforcement Learning
TLDR
This paper formalizes the problem as semi-supervised reinforcement learning, where the reward function can only be evaluated in a set of "labeled" MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of "unlabeled" MDPs by using experience from both settings.
Continuous Deep Q-Learning with Model-based Acceleration
TLDR
This paper derives a continuous variant of the Q-learning algorithm, called normalized advantage functions (NAF), as an alternative to the more commonly used policy gradient and actor-critic methods, and substantially improves performance on a set of simulated robotic control tasks.
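The NAF construction summarized above represents the Q-function as a state value plus a quadratic advantage, Q(s, a) = V(s) + A(s, a) with A(s, a) = -1/2 (a - mu(s))^T P(s) (a - mu(s)) and P(s) = L(s) L(s)^T, so the greedy action is simply mu(s). A small worked example with mocked network outputs (a real NAF network predicts V, mu, and L from the state):

```python
# Worked NAF example: quadratic advantage around the predicted action mu(s).
import numpy as np

def naf_q_value(a, v, mu, L):
    P = L @ L.T                      # positive semi-definite precision matrix
    diff = a - mu
    advantage = -0.5 * diff @ P @ diff
    return v + advantage

v, mu = 1.7, np.array([0.2, -0.1])           # mocked V(s) and mu(s)
L = np.tril(np.array([[0.8, 0.0], [0.3, 1.1]]))
print(naf_q_value(np.array([0.0, 0.0]), v, mu, L))  # Q at an arbitrary action
print(naf_q_value(mu, v, mu, L))                    # maximal Q, equals V(s)
```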
Domain randomization for transferring deep neural networks from simulation to the real world
TLDR
This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
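Domain randomization as summarized above amounts to resampling rendering parameters for every simulated training image, so that the real world looks like just another random variation. A hedged sketch of such a sampler follows; the parameter names and ranges are assumptions, not the paper's settings.

```python
# Illustrative domain-randomization sampler: draw fresh rendering parameters
# before generating each simulated training image.
import random

def sample_render_params():
    return {
        "texture_id": random.randrange(1000),              # random textures
        "light_intensity": random.uniform(0.2, 2.0),
        "light_position": [random.uniform(-1, 1) for _ in range(3)],
        "camera_jitter_deg": random.uniform(-5.0, 5.0),
        "rgb_noise_std": random.uniform(0.0, 0.1),
    }

# For each training image: apply sample_render_params() to the scene,
# render, and add the image to the training batch.
print(sample_render_params())
```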