Multi-Fingered Active Grasp Learning

@inproceedings{Lu2020MultiFingeredAG,
  title={Multi-Fingered Active Grasp Learning},
  author={Qingkai Lu and Mark Van der Merwe and Tucker Hermans},
  booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2020},
  pages={8415-8422}
}
Learning-based approaches to grasp planning are preferred over analytical methods due to their ability to better generalize to new, partially observed objects. However, data collection remains one of the biggest bottlenecks for grasp learning methods, particularly for multi-fingered hands. The relatively high dimensional configuration space of the hands coupled with the diversity of objects common in daily life requires a significant number of samples to produce robust and confident grasp… 
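The core idea of active grasp learning is to let the current grasp-success model choose which grasps to execute next so that fewer labeled attempts are needed. The minimal sketch below illustrates one common acquisition strategy, maximum predictive entropy, on a toy logistic model; the feature dimensions, the simulate_grasp stand-in, and the acquisition rule are illustrative assumptions, not the paper's actual network or criterion.

import numpy as np

def fit_logistic(X, y, iters=1000, lr=0.1):
    # Plain gradient descent on the logistic loss (toy success classifier).
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)

def simulate_grasp(config):
    # Hypothetical stand-in for executing a grasp and observing success.
    w_true = np.array([1.5, -2.0, 0.5, 1.0])
    p = 1.0 / (1.0 + np.exp(-config @ w_true))
    return float(rng.random() < p)

pool = rng.normal(size=(500, 4))        # candidate grasp configurations (toy 4-D)
X = pool[:10]                           # seed set of randomly executed grasps
y = np.array([simulate_grasp(x) for x in X])
pool = pool[10:]

for _ in range(40):
    w = fit_logistic(X, y)
    p = 1.0 / (1.0 + np.exp(-pool @ w))
    entropy = -(p * np.log(p + 1e-9) + (1 - p) * np.log(1 - p + 1e-9))
    i = int(np.argmax(entropy))         # query the most uncertain candidate
    X = np.vstack([X, pool[i]])
    y = np.append(y, simulate_grasp(pool[i]))
    pool = np.delete(pool, i, axis=0)

print("labeled grasps:", len(y), "observed success rate:", y.mean())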

Citations

Exploratory Grasping: Performance Bounds and Asymptotically Optimal Algorithms for Learning to Robustly Grasp an Unknown Polyhedral Object

This work formalizes the problem of efficiently exploring grasps on an unknown polyhedral object through sequential interaction as a Markov Decision Process, in a setting where a camera can distinguish stable poses and determine grasp success or failure, and presents a bandit-style algorithm, Exploratory Grasping, which leverages the structure of the grasp exploration problem to rapidly find high-performing grasps on new objects through online interaction.
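As a rough illustration of the bandit-style exploration idea, the sketch below runs a standard UCB1 rule over a fixed set of candidate grasps with hidden success probabilities; the arm set and reward model are toy assumptions, and the actual Exploratory Grasping algorithm operates per stable pose with its own guarantees.

import numpy as np

rng = np.random.default_rng(1)
true_success = rng.uniform(0.1, 0.9, size=20)   # hidden quality of 20 candidate grasps

counts = np.zeros(20)
successes = np.zeros(20)

for t in range(1, 501):
    # UCB score: empirical mean + exploration bonus (untried arms first).
    with np.errstate(divide="ignore", invalid="ignore"):
        ucb = np.where(counts > 0,
                       successes / counts + np.sqrt(2 * np.log(t) / counts),
                       np.inf)
    g = int(np.argmax(ucb))
    reward = rng.random() < true_success[g]     # attempt the chosen grasp
    counts[g] += 1
    successes[g] += reward

best = int(np.argmax(successes / np.maximum(counts, 1)))
print("best grasp index:", best,
      "estimated success:", successes[best] / counts[best])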

Multi-FinGAN: Generative Coarse-To-Fine Sampling of Multi-Finger Grasps

This work presents Multi-FinGAN, a fast generative multi-finger grasp sampling method that synthesizes high-quality grasps directly from RGB-D images in about a second, a significant improvement that opens the door to feedback-based grasp re-planning and task-informative grasping.

Simultaneous Tactile Exploration and Grasp Refinement for Unknown Objects

This letter proposes a grasp exploration approach using a probabilistic representation of shape, based on Gaussian Process Implicit Surfaces, which enables initial partial vision data to be augmented with additional data from successive tactile glances to refine grasp configurations.
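A Gaussian Process Implicit Surface treats signed distance to the object surface as the output of GP regression, so the posterior standard deviation directly indicates where the shape estimate is least certain. The 2-D toy sketch below, using scikit-learn's GP regressor with an RBF kernel (the kernel choice and the circular object are assumptions), fits a few contact observations and picks the next tactile glance at the most uncertain grid point.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

# Observed contact points on a unit circle (signed distance = 0) plus one
# interior point (negative) to anchor the sign convention.
angles = rng.uniform(0, np.pi, size=8)              # partial view only
X = np.vstack([np.c_[np.cos(angles), np.sin(angles)], [[0.0, 0.0]]])
y = np.r_[np.zeros(8), -1.0]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4).fit(X, y)

# Evaluate the implicit surface on a grid; high std marks unexplored regions.
grid = np.array([[x1, x2] for x1 in np.linspace(-1.5, 1.5, 40)
                          for x2 in np.linspace(-1.5, 1.5, 40)])
mean, std = gp.predict(grid, return_std=True)
next_touch = grid[np.argmax(std)]
print("next tactile glance near:", next_touch)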

Online Body Schema Adaptation through Cost-Sensitive Active Learning

This work proposes a movement-efficient approach for online estimation of the body schema of a humanoid robot arm, in the form of Denavit-Hartenberg parameters, and shows that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while reducing the executed movement by about half.
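The cost-sensitive idea can be summarized as scoring each candidate exploratory movement by its expected information gain per unit of movement cost, rather than by information gain alone. The short sketch below shows only that scoring step; the gain values and joint-space travel cost are stand-ins for the paper's actual estimates.

import numpy as np

rng = np.random.default_rng(6)
current_q = np.zeros(6)                          # current joint configuration
candidates = rng.uniform(-1.0, 1.0, size=(50, 6))

info_gain = rng.uniform(0.0, 1.0, size=50)       # stand-in uncertainty reduction
movement_cost = np.linalg.norm(candidates - current_q, axis=1)

score = info_gain / (movement_cost + 1e-6)       # cost-sensitive acquisition
best = int(np.argmax(score))
print("chosen movement:", best,
      "gain:", round(info_gain[best], 3),
      "cost:", round(movement_cost[best], 3))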

Review of Deep Reinforcement Learning-Based Object Grasping: Techniques, Open Challenges, and Recommendations

This comprehensive review of deep reinforcement learning in the manipulation field may be valuable for researchers and practitioners, as it can expedite the establishment of important guidelines.

Comparing Piezoresistive Substrates for Tactile Sensing in Dexterous Hands

This work uses a high density foam substrate to develop a scalable tactile skin that can be attached to the palm of a robotic hand and demonstrates its ability to reliably detect and localize contact, as well as analyze contact patterns during grasping and transport tasks.

Deep Learning Approaches to Grasp Synthesis: A Review

A systematic review of publications from the last decade of robotic object grasping found four common methodologies for robotic grasping: sampling-based approaches, direct regression, reinforcement learning, and exemplar approaches, as well as two ‘supporting methods’ that use deep learning to support the grasping process: shape approximation and affordances.

Planning Visual-Tactile Precision Grasps via Complementary Use of Vision and Touch

This work proposes an approach to grasp planning that explicitly reasons about where the fingertips should contact the estimated object surface while maximizing the probability of grasp success, and successfully synthesizes and executes precision grasps for previously unseen objects using surface estimates from a single camera view.
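One way to read the "complementary use" idea is as a joint objective: maximize a learned grasp-success score while penalizing fingertip positions that stray from the estimated object surface. The sketch below optimizes such a toy objective with SciPy; the spherical signed-distance function and the antipodal success heuristic are placeholders for the paper's learned models.

import numpy as np
from scipy.optimize import minimize

def surface_distance(p):
    # Signed distance to a toy spherical object of radius 0.05 m.
    return np.linalg.norm(p) - 0.05

def success_logit(fingertips):
    # Hypothetical learned grasp-success score: prefer antipodal fingertips.
    f1, f2 = fingertips[:3], fingertips[3:]
    f1 = f1 / (np.linalg.norm(f1) + 1e-9)
    f2 = f2 / (np.linalg.norm(f2) + 1e-9)
    return -np.dot(f1, f2)

def objective(x):
    # Maximize success while penalizing fingertips that leave the surface.
    contact_penalty = sum(surface_distance(p) ** 2 for p in (x[:3], x[3:]))
    return -success_logit(x) + 100.0 * contact_penalty

x0 = np.array([0.06, 0.01, 0.0, -0.04, 0.02, 0.01])   # initial fingertip guess
res = minimize(objective, x0, method="L-BFGS-B")
print("optimized fingertips:", res.x.round(3))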

A Two-stage Learning Architecture that Generates High-Quality Grasps for a Multi-Fingered Hand

This work investigates the problem of planning stable grasps for object manipulation using an 18-DOF robotic hand with four fingers, proposing a novel two-stage learning process that devises a Bayesian Optimization scheme for the palm pose and a physics-based grasp pose metric to rate stable grasps.
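Bayesian Optimization treats the grasp-quality metric as an expensive black box: fit a GP surrogate to the palm poses evaluated so far, then pick the next pose by an acquisition function such as expected improvement. The 1-D sketch below (the toy quality function, the expected-improvement acquisition, and the single yaw parameter are all assumptions) shows that loop with scikit-learn and SciPy.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def grasp_quality(palm_yaw):
    # Hypothetical black-box grasp metric as a function of palm yaw (radians).
    return np.exp(-(palm_yaw - 0.8) ** 2) + 0.1 * np.sin(5 * palm_yaw)

candidates = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
X = np.array([[-2.0], [0.0], [2.0]])                # initial evaluations
y = np.array([grasp_quality(x[0]) for x in X])

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, std = gp.predict(candidates, return_std=True)
    improve = mu - y.max()
    z = improve / (std + 1e-9)
    ei = improve * norm.cdf(z) + std * norm.pdf(z)  # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, grasp_quality(x_next[0]))

print("best palm yaw:", X[np.argmax(y)][0], "quality:", y.max())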

Multi-Finger Grasping Like Humans

This study proposes a novel optimization-based approach for transferring human grasp demonstrations to any multi-fingered gripper, producing robotic grasps that mimic the human hand orientation and the contact area with the object while alleviating interpenetration.

References

Showing 1-10 of 47 references

Modeling Grasp Type Improves Learning-Based Grasp Planning

This paper proposes a probabilistic grasp planner that explicitly models grasp type for planning high-quality precision and power grasps in real time and shows the benefit of learning a prior over grasp configurations to improve grasp inference with a learned classifier.

Multi-Fingered Grasp Planning via Inference in Deep Neural Networks

This work is the first to directly plan high quality multi-fingered grasps in configuration space using a deep neural network without the need of an external planner and outperforms existing grasp planning methods for neural networks.
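Planning as inference in a learned network means holding the trained success predictor fixed and running gradient ascent on the grasp configuration itself, since the prediction is differentiable with respect to its inputs. The PyTorch sketch below uses an untrained stand-in network and toy object features to show only the mechanics; the real systems train the network on grasp data and decode the optimized configuration back to the hand.

import torch

torch.manual_seed(0)

# Stand-in network: object feature + grasp configuration -> success logit.
net = torch.nn.Sequential(
    torch.nn.Linear(16 + 7, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
for p in net.parameters():
    p.requires_grad_(False)                      # network weights stay fixed

obj_feat = torch.randn(16)                       # e.g., from a shape encoder (toy)
grasp = torch.zeros(7, requires_grad=True)       # palm pose + preshape (toy)
opt = torch.optim.Adam([grasp], lr=0.05)

for step in range(200):
    logit = net(torch.cat([obj_feat, grasp]))
    loss = -torch.sigmoid(logit).mean()          # maximize predicted success
    opt.zero_grad()
    loss.backward()
    opt.step()

print("refined grasp config:", grasp.detach().numpy().round(3))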

Planning Multi-Fingered Grasps as Probabilistic Inference in a Learned Deep Network

This work is the first to directly plan high quality multifingered grasps in configuration space using a deep neural network without the need of an external planner and shows that the planning method outperforms existing planning methods for neural networks.

Experiments with Hierarchical Reinforcement Learning of Multiple Grasping Policies

A framework for hierarchical reinforcement learning of grasping policies is presented, and the experimental results show that the approach learns multiple grasping policies and generalizes the learned grasps by using local point cloud information.

Visual detection of opportunities to exploit contact in grasping using contextual multi-armed bandits

  Clemens Eppner, O. Brock · Computer Science · 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) · 2017
This planner exploits the advantages of a soft robot hand and learns a hand-specific classifier for edge-, surface-, and wall-grasps, each exploiting a different environmental constraint (EC).
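A contextual bandit for this setting maps a visual context vector to a choice among grasp types and updates only the chosen arm from the observed outcome. The sketch below runs a standard LinUCB rule with synthetic contexts and rewards; the three arms, feature dimension, and linear reward model are illustrative assumptions rather than the paper's learned classifiers.

import numpy as np

rng = np.random.default_rng(3)
d, n_arms, alpha = 5, 3, 1.0                 # context dim; edge/surface/wall grasps
A = [np.eye(d) for _ in range(n_arms)]       # per-arm design matrices
b = [np.zeros(d) for _ in range(n_arms)]
counts = np.zeros(n_arms, dtype=int)
true_w = rng.normal(size=(n_arms, d))        # hidden per-arm reward model

for t in range(300):
    ctx = rng.normal(size=d)                 # visual context of the scene
    scores = []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        theta = A_inv @ b[a]
        scores.append(theta @ ctx + alpha * np.sqrt(ctx @ A_inv @ ctx))
    a = int(np.argmax(scores))               # pick the most promising grasp type
    reward = float(rng.random() < 1.0 / (1.0 + np.exp(-true_w[a] @ ctx)))
    A[a] += np.outer(ctx, ctx)
    b[a] += reward * ctx
    counts[a] += 1

print("pulls per grasp type:", counts.tolist())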

Multifingered Grasp Planning via Inference in Deep Neural Networks: Outperforming Sampling by Learning Differentiable Models

This work is the first to directly plan high-quality multifingered grasps in configuration space using a DNN without the need of an external planner and outperforms existing grasp-planning methods for neural networks (NNs).

Active Learning-Based Grasp for Accurate Industrial Manipulation

An active learning-based grasp method for accurate industrial manipulation is proposed that combines the high accuracy of geometrically driven grasp methods with the generalization ability of data-driven grasp methods and simplifies the deployment process.

Learning Grasp Strategies with Partial Shape Information

An approach to grasping is proposed that estimates the stability of different grasps given only noisy estimates of the shape of the visible portions of an object, such as those obtained from a depth sensor.

High precision grasp pose detection in dense clutter

This paper proposes two new representations of grasp candidates, and quantifies the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models.

A Billion Ways to Grasp: An Evaluation of Grasp Sampling Schemes on a Dense, Physics-based Grasp Data Set

This paper reviews, classifies, and compares different grasp sampling strategies based on a fine-grained discretization of SE(3), and uses physics-based simulation to evaluate the quality and robustness of the corresponding parallel-jaw grasps.
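A basic instance of such a sampling scheme is to draw rotations uniformly from SO(3), pair them with translations near the object, and filter by a simple approach-direction test before any physics-based evaluation. The sketch below does exactly that with SciPy's rotation utilities; the object position, offset range, and the 0.7 alignment threshold are arbitrary choices for illustration.

import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(4)
object_center = np.array([0.4, 0.0, 0.1])        # toy object position (m)

rotations = Rotation.random(1000, random_state=5)          # uniform over SO(3)
offsets = rng.uniform(-0.08, 0.08, size=(1000, 3))
positions = object_center + offsets

# Keep only grasps whose local z (approach) axis points roughly at the object.
approach = rotations.apply(np.array([0.0, 0.0, 1.0]))
to_object = object_center - positions
to_object /= np.linalg.norm(to_object, axis=1, keepdims=True)
keep = np.einsum("ij,ij->i", approach, to_object) > 0.7

print("kept", int(keep.sum()), "of 1000 sampled SE(3) grasp poses")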