Learning to Model the Grasp Space of an Underactuated Robot Gripper Using Variational Autoencoder

@article{Rolinat2021LearningTM,
  title={Learning to Model the Grasp Space of an Underactuated Robot Gripper Using Variational Autoencoder},
  author={Cl{\'e}ment Rolinat and Mathieu Grossard and Saifeddine Aloui and Christelle Godin},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.08504}
}
Grasp planning, and more specifically grasp space exploration, is still an open issue in robotics. This article presents a data-driven methodology to model the grasp space of a multi-fingered adaptive gripper for known objects. The method relies on a limited dataset of manually specified expert grasps and uses a variational autoencoder to learn intrinsic grasp features in a computationally compact way. The learnt model can then be used to generate new non-learnt…
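As a rough illustration of the approach, the sketch below shows a minimal variational autoencoder that compresses a grasp parameter vector into a low-dimensional latent space and decodes latent samples back into grasp parameters. It assumes a PyTorch implementation with an illustrative 9-dimensional grasp vector and a 2-dimensional latent space; dimensions, layer sizes, and names are placeholders, not the authors' architecture.

# Minimal grasp-space VAE sketch (assumed PyTorch implementation;
# GRASP_DIM and LATENT_DIM are illustrative placeholders).
import torch
import torch.nn as nn

GRASP_DIM = 9    # assumed size of the grasp parameter vector
LATENT_DIM = 2   # assumed size of the compact latent space

class GraspVAE(nn.Module):
    def __init__(self, grasp_dim=GRASP_DIM, latent_dim=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(grasp_dim, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, grasp_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)   # reparameterisation trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    recon_loss = nn.functional.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl

# Generating new (non-learnt) grasps amounts to decoding samples drawn
# from the latent prior.
model = GraspVAE()
with torch.no_grad():
    new_grasps = model.decoder(torch.randn(16, LATENT_DIM))

In a model of this kind, sampling the latent prior and decoding, as in the last lines above, is what generating new, non-learnt grasps amounts to.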

References

Showing 1–10 of 27 references
Human Initiated Grasp Space Exploration Algorithm for an Underactuated Robot Gripper Using Variational Autoencoder
This article presents an efficient procedure for exploring the grasp space of a multi-fingered adaptive gripper to generate reliable grasps given a known object pose, reaching a grasp success rate of 99.91% over 7000 trials.
Learning From Humans How to Grasp: A Data-Driven Architecture for Autonomous Grasping With Anthropomorphic Soft Hands
This letter proposes an approach that enables soft hands to autonomously grasp objects, starting from observations of human strategies; the proposed architecture was extensively tested with 20 objects, achieving a success rate of 81.1% over 111 grasps.
An overview of 3D object grasp synthesis algorithms
This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands by focusing on analytical as well as empirical grasp synthesis approaches.
Learning Object Grasping for Soft Robot Hands
The power of a 3D CNN model is exploited to estimate suitable grasp poses from multiple grasping directions (top and side directions) and wrist orientations, which has great potential for geometry-related robotic tasks.
Grasp planning in complex scenes
This paper combines grasp analysis and manipulation planning techniques to perform fast grasp planning in complex scenes, introducing a framework for finding valid grasps in cluttered environments that combines a grasp quality metric for the object with information about the local environment around the object and about the robot's kinematics.
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
The approach achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing; it also illustrates that data from different robots can be combined to learn more reliable and effective grasping.
Jacquard: A Large Scale Dataset for Robotic Grasp Detection
The results show that Jacquard enables much better generalization than a human-labeled dataset, thanks to its diversity of objects and grasping positions.
Grasp space generation using sampling and computation of independent regions
  • M. Roa, R. Suárez, J. Rosell
  • 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008
The use of independent contact and non-graspable regions to generate the grasp space for 2D and 3D discrete objects is presented; this has several applications in manipulation and regrasping of objects, as it provides a large number of force-closure and non-force-closure grasps in a short time.
Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours
This paper takes the leap of increasing the available training data to 40 times more than prior work, leading to a dataset of 50K data points collected over 700 hours of robot grasping attempts, which allows training a Convolutional Neural Network to predict grasp locations without severe overfitting.
Grasp prediction and evaluation of multi-fingered dexterous hands using deep learning
Inspired by human skills, Grasp Prediction Networks (GPNs) based on Convolutional Neural Networks (CNNs) and Mixture Density Networks (MDNs) are proposed; validation shows that GPNs perform on par with GraspIt! in terms of high-quality grasp planning.