Learning to Model the Grasp Space of an Underactuated Robot Gripper Using Variational Autoencoder

  title={Learning to Model the Grasp Space of an Underactuated Robot Gripper Using Variational Autoencoder},
  author={Clément Rolinat and Mathieu Grossard and Saifeddine Aloui and Christelle Godin},
Grasp planning, and more specifically grasp space exploration, is still an open issue in robotics. This article presents a data-driven methodology to model the grasp space of a multi-fingered adaptive gripper for known objects. The method relies on a limited dataset of manually specified expert grasps and uses a variational autoencoder to learn intrinsic grasp features in a computationally compact representation. The learnt model can then be used to generate new non-learnt…
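The generative use of the learnt model described above can be sketched as follows. This is a minimal, untrained illustration, not the paper's implementation: the grasp dimensionality, latent size, and linear encoder/decoder weights are all hypothetical stand-ins for networks that would be fitted to the expert-grasp dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a grasp parameterized by 9 values
# (e.g. wrist pose plus finger configuration), compressed into
# a 3-dimensional latent space.
GRASP_DIM, LATENT_DIM = 9, 3

# Stand-in linear encoder/decoder weights; a real VAE would
# learn these from the dataset of expert grasps.
W_mu = rng.normal(size=(GRASP_DIM, LATENT_DIM))
W_logvar = rng.normal(size=(GRASP_DIM, LATENT_DIM)) * 0.1
W_dec = rng.normal(size=(LATENT_DIM, GRASP_DIM))

def encode(x):
    """Map grasps to the mean and log-variance of a latent Gaussian."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map latent codes back to grasp parameters."""
    return z @ W_dec

# Generating new, non-learnt grasp candidates: sample the latent
# prior N(0, I) and decode each sample into grasp parameters.
z_new = rng.normal(size=(5, LATENT_DIM))
candidate_grasps = decode(z_new)
print(candidate_grasps.shape)  # (5, 9)
```

Sampling the latent prior and decoding is what lets a VAE propose grasps beyond the original expert set; in practice each candidate would still be checked for feasibility before execution.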

Human Initiated Grasp Space Exploration Algorithm for an Underactuated Robot Gripper Using Variational Autoencoder
This article presents an efficient procedure for exploring the grasp space of a multifingered adaptive gripper to generate reliable grasps given a known object pose, reaching a grasp success rate of 99.91% over 7,000 trials.
Learning From Humans How to Grasp: A Data-Driven Architecture for Autonomous Grasping With Anthropomorphic Soft Hands
This letter proposes an approach that enables soft hands to autonomously grasp objects, starting from observations of human strategies; the architecture was extensively tested with 20 objects, achieving a success rate of 81.1% over 111 grasps.
An overview of 3D object grasp synthesis algorithms
This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands by focusing on analytical as well as empirical grasp synthesis approaches.
Learning Object Grasping for Soft Robot Hands
The power of a 3D CNN model is exploited to estimate suitable grasp poses from multiple grasping directions (top and side directions) and wrist orientations, which has great potential for geometry-related robotic tasks.
Grasp planning in complex scenes
This paper combines grasp analysis and manipulation planning techniques to perform fast grasp planning in complex scenes, and introduces a framework for finding valid grasps in cluttered environments that combines a grasp quality metric for the object with information about the local environment around the object and about the robot's kinematics.
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
The approach achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing, and illustrates that data from different robots can be combined to learn more reliable and effective grasping.
Jacquard: A Large Scale Dataset for Robotic Grasp Detection
The results show that Jacquard enables much better generalization skills than a human labeled dataset thanks to its diversity of objects and grasping positions.
Grasp space generation using sampling and computation of independent regions
  • M. Roa, R. Suárez, J. Rosell
  • Mathematics, Computer Science
    2008 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • 2008
The use of independent contact and non-graspable regions to generate the grasp space for 2D and 3D discrete objects is presented; it has several applications in manipulation and regrasping of objects, as it provides a large number of force-closure and non-force-closure grasps in a short time.
Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours
  • Lerrel Pinto, A. Gupta
  • Computer Science
    2016 IEEE International Conference on Robotics and Automation (ICRA)
  • 2016
This paper takes the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts, which allows us to train a Convolutional Neural Network for the task of predicting grasp locations without severe overfitting.
Grasp prediction and evaluation of multi-fingered dexterous hands using deep learning
Inspired by human skills, Grasp Prediction Networks (GPNs) based on Convolutional Neural Networks (CNNs) and Mixture Density Networks (MDNs) are proposed, and it is validated that GPNs show performance equivalent to GraspIt! in terms of high-quality grasp planning.