Efficient Representations of Object Geometry for Reinforcement Learning of Interactive Grasping Policies

  • Malte Mosbach, Sven Behnke
  • Published 20 November 2022
  • Computer Science
  • 2022 Sixth IEEE International Conference on Robotic Computing (IRC)
Grasping objects of different shapes and sizes, a foundational and effortless skill for humans, remains a challenging task in robotics. Although model-based approaches can predict stable grasp configurations for known object models, they struggle to generalize to novel objects and often operate in a non-interactive, open-loop manner. In this work, we present a reinforcement learning framework that learns the interactive grasping of various geometrically distinct real-world objects by continuously… 
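The "interactive" grasping that the abstract contrasts with open-loop execution can be illustrated with a toy closed-loop control sketch. Everything below (`GraspEnv`, the `policy` controller, the reward values) is a hypothetical placeholder for illustration, not the paper's actual environment, policy, or reward design.

```python
# Toy illustration of closed-loop (interactive) grasping: the policy
# re-observes the scene at every control step instead of committing to a
# single precomputed grasp pose. GraspEnv and policy are illustrative
# placeholders, not the paper's implementation.

class GraspEnv:
    """Minimal stand-in for a grasping simulator."""

    def __init__(self):
        self.distance = 1.0  # gripper-to-object distance (arbitrary units)

    def observe(self):
        # In the paper's setting this would be object geometry + proprioception.
        return self.distance

    def step(self, action):
        # The action moves the gripper; re-observing afterwards closes the loop.
        self.distance = max(0.0, self.distance - action)
        grasped = self.distance < 1e-6
        reward = 1.0 if grasped else -0.01  # small time penalty until success
        return self.observe(), reward, grasped


def policy(obs):
    # Trivial proportional controller standing in for a learned policy.
    return min(0.2, obs)


env = GraspEnv()
obs, done, total_reward = env.observe(), False, 0.0
while not done:
    obs, reward, done = env.step(policy(obs))
    total_reward += reward
```

The point of the sketch is the loop structure: an open-loop grasp planner would call `policy` once and execute blindly, whereas here each new observation can correct the next action.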

Figures and Tables from this paper

Contextual Reinforcement Learning of Visuo-tactile Multi-fingered Grasping Policies

A Grasping Objects Approach for Tactile (GOAT) robotic hands is proposed, and a learned policy trained in simulation successfully runs on a real robot without any fine-tuning, thus bridging the reality gap.

Learning Generalizable Dexterous Manipulation from Human Grasp Affordance

This paper proposes to learn dexterous manipulation from large-scale demonstrations with diverse 3D objects within a category, generated from a human grasp affordance model, and ablates the importance of 3D object representation learning for manipulation.

Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods

This paper proposes a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects, and indicates that several simple methods are surprisingly strong competitors to popular algorithms such as double Q-learning.

Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning

It is shown that a single generalist policy can perform in-hand manipulation of over 100 geometrically diverse real-world objects and generalize to new objects with unseen shape or size. Multi-task learning with object point-cloud representations not only generalizes better but even outperforms single-object specialist policies on both training and held-out test objects.

Robotic Grasping using Deep Reinforcement Learning

This work uses an off-policy reinforcement learning framework along with a novel GraspQ-Network to output grasp probabilities, which are used to learn grasps that maximize pick success.

A Survey on Learning-Based Robotic Grasping

This review provides a comprehensive overview of machine learning approaches for vision-based robotic grasping and manipulation, and surveys techniques and achievements in transfer from simulation to the real world.

Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter

The proposed Volumetric Grasping Network (VGN) accepts a Truncated Signed Distance Function (TSDF) representation of the scene and directly outputs the predicted grasp quality and the associated gripper orientation and opening width for each voxel in the queried 3D volume.
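As a structural illustration of the input/output contract described above (not VGN's actual architecture), each queried voxel of the TSDF grid maps to a grasp quality, a gripper orientation, and an opening width. The heuristic "network" and all names below are hypothetical placeholders.

```python
# Sketch of VGN's interface, not its implementation: a TSDF voxel grid
# goes in, and every queried voxel comes back with a grasp quality, a
# gripper orientation (quaternion), and an opening width. The heuristic
# below stands in for the learned network.

def dummy_grasp_network(tsdf):
    """Map voxel index -> TSDF value to voxel index -> (quality, quat, width)."""
    predictions = {}
    for idx, dist in tsdf.items():
        quality = max(0.0, 1.0 - abs(dist))  # voxels near the surface score higher
        quat = (1.0, 0.0, 0.0, 0.0)          # identity orientation placeholder
        width = 0.05                         # fixed opening-width placeholder (m)
        predictions[idx] = (quality, quat, width)
    return predictions


# Toy TSDF values (signed distance to the nearest surface) for a few voxels.
tsdf = {(0, 0, 0): 0.9, (1, 2, 3): 0.1, (2, 2, 2): -0.05}
predictions = dummy_grasp_network(tsdf)
best_voxel = max(predictions, key=lambda idx: predictions[idx][0])
```

Selecting the highest-quality voxel, as in the last line, mirrors how a dense per-voxel prediction can be turned into a single grasp to execute.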

Robotic Grasping from Classical to Modern: A Survey

This paper surveys advances in robotic grasping, from classical formulations and solutions to modern ones, and discusses the open problems and future research directions that may be important for human-level robustness, autonomy, and intelligence in robots.

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations

This work shows that model-free DRL with natural policy gradients can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments.

A System for General In-Hand Object Re-Orientation

This work presents a simple model-free framework that can learn to reorient objects with both the hand facing upwards and downwards and demonstrates the capability of reorienting over 2000 geometrically different objects in both cases.