Center-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors

@article{Feng2020CenterofMassbasedRG,
  title={Center-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors},
  author={Qian Feng and Zhaopeng Chen and Jun Deng and Chunhui Gao and Jianwei Zhang and Alois Knoll},
  journal={2020 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2020},
  pages={610-617}
}
An unstable grasp pose can lead to slip, so an unstable grasp pose can be predicted through slip detection. A regrasp is then required to correct the grasp pose and complete the task. In this work, we propose a novel regrasp planner with multi-sensor modules that plans grasp adjustments using feedback from a slip detector. The regrasp planner is trained to estimate the location of the center of mass, which helps the robot find an optimal grasp pose. The dataset in this work consists of 1…
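
As a rough illustration of the idea in the abstract, the sketch below assumes the trained planner outputs a center-of-mass estimate in the plane of a top-down grasp; the function name and the simple proportional correction rule are illustrative assumptions, not the authors' implementation.

import numpy as np

def regrasp_adjustment(grasp_xy, com_xy, step=1.0):
    """Shift a top-down grasp point toward the estimated center of mass.

    grasp_xy : current grasp position in the object plane (metres)
    com_xy   : center-of-mass estimate produced by the regrasp planner
    step     : fraction of the offset corrected per regrasp (illustrative)

    Grasping closer to the center of mass reduces the gravity-induced
    moment about the grasp axis, which is what makes unbalanced objects slip.
    """
    grasp_xy = np.asarray(grasp_xy, dtype=float)
    com_xy = np.asarray(com_xy, dtype=float)
    return grasp_xy + step * (com_xy - grasp_xy)

# Slip was detected and the planner places the CoM 4 cm to the right:
print(regrasp_adjustment([0.50, 0.10], [0.54, 0.10]))  # -> [0.54 0.1 ]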

Center-of-Mass-based Robust Grasp Pose Adaptation Using RGBD Camera and Force/Torque Sensing

Object dropping may occur when the robotic arm grasps objects with uneven mass distribution, due to the additional moments generated by the object's gravity. To solve this problem, we present a novel approach…
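
For the force/torque part of this follow-up work, a standard way to localize the center of mass from a single static wrist wrench is the identity torque = r × force; the sketch below implements that identity as an assumption about how such sensing could be used, not as the paper's method.

import numpy as np

def com_offset_from_wrench(force, torque):
    """Estimate the CoM lever arm from one static force/torque reading.

    For a rigidly held object, torque = r x force, where r points from the
    sensor frame to the center of mass. A single reading only determines
    the component of r perpendicular to the force:
        r_perp = (force x torque) / |force|^2
    Readings at several wrist orientations pin down the full vector.
    """
    force = np.asarray(force, dtype=float)
    torque = np.asarray(torque, dtype=float)
    return np.cross(force, torque) / np.dot(force, force)

# 1 kg object held 5 cm off-axis along +x, gravity along -z:
print(com_offset_from_wrench([0.0, 0.0, -9.81], [0.0, 0.4905, 0.0]))
# -> [0.05 0.   0.  ]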

Self-Correction for Eye-In-Hand Robotic Grasping using Action Learning

It is demonstrated that the action-learning-based object manipulation system with stereo-like vision and eye-in-hand calibration can correct itself from previous errors while keeping errors at an acceptable level, and might be applicable to other object manipulation systems without having to define the environment first.

Combining Sensors Information to Enhance Pneumatic Grippers Performance

It is demonstrated that, through the use of force, torque, center-of-pressure, and proximity information, the developed pneumatic gripper prototype outperforms traditional pneumatic gripping devices.
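
Of the signals listed above, the center of pressure has a simple closed-form definition, so a small sketch may help; it assumes a flat sensing surface at z = 0 with normal +z and is a textbook formula, not anything specific to the cited gripper.

import numpy as np

def center_of_pressure(force, torque):
    """Center of pressure on a flat sensing surface (z = 0, normal = +z).

    The net contact wrench is equivalent to the normal force acting at
        x_cop = -tau_y / f_z,   y_cop = tau_x / f_z.
    """
    _, _, fz = force
    tx, ty, _ = torque
    if abs(fz) < 1e-9:
        raise ValueError("no normal force, center of pressure undefined")
    return np.array([-ty / fz, tx / fz])

# 10 N pressed straight down, 2 cm off-centre along +x:
print(center_of_pressure([0.0, 0.0, 10.0], [0.0, -0.2, 0.0]))  # -> [0.02 0.  ]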

Visuo-Tactile Feedback-Based Robot Manipulation for Object Packing

A new visuo-tactile feedback-based manipulation planning framework for object packing is proposed, which makes use of on-the-fly multisensory feedback and an attention-guided deep affordance model as perceptual states, together with a deep reinforcement learning (DRL) pipeline.

Friction Variability and Sensing Capabilities for Data-Driven Slip Detection

Reliable slip detection enables stable grasping in unstructured environments and controlled motion in manipulation. The coefficient of friction is highly variable, and often non-linear, depending…
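
The baseline such data-driven detectors improve on is the Coulomb friction-cone check with a fixed coefficient mu; a minimal sketch of that check is below (the value of mu and the 0.9 safety margin are assumptions), which also makes clear why a variable, non-linear mu undermines a purely model-based test.

import numpy as np

def tangential_normal_ratio(force, contact_normal):
    """|F_t| / F_n of a contact force, the quantity a Coulomb friction
    model compares against the friction coefficient mu."""
    force = np.asarray(force, dtype=float)
    n = np.asarray(contact_normal, dtype=float)
    n = n / np.linalg.norm(n)
    f_n = np.dot(force, n)          # normal component
    f_t = force - f_n * n           # tangential component
    return np.linalg.norm(f_t) / max(f_n, 1e-9)

# Flag incipient slip when the ratio approaches an assumed mu of 0.5.
ratio = tangential_normal_ratio([2.0, 0.0, 5.0], [0.0, 0.0, 1.0])
print(ratio, ratio > 0.9 * 0.5)     # -> 0.4 False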

References

Showing 1-10 of 56 references

Grasp stability assessment through unsupervised feature learning of tactile images

A novel way to improve robotic grasping is presented: by using tactile sensors and an unsupervised feature-learning approach, a robot can find the common denominators behind successful and failed grasps, and use this knowledge to predict whether a grasp attempt will succeed or fail.
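
A minimal sketch of this pipeline, with PCA standing in for the paper's unsupervised feature learning and random arrays standing in for real tactile images; only the structure (unsupervised features, then a stability classifier) reflects the cited approach.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in data: 200 flattened 16x16 tactile images with stable/failed labels.
X = rng.normal(size=(200, 16 * 16))
y = rng.integers(0, 2, size=200)

# Unsupervised features (PCA as a simple stand-in), then a stability classifier.
model = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict(X[:5]))          # predicted stable/failed for five grasps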

Learning to Grasp Without Seeing

This is the first attempt at learning to grasp with only tactile sensing and without any prior object knowledge; the method also significantly boosts overall accuracy when applied on top of a vision-based policy.

Grip Stabilization of Novel Objects Using Slip Prediction

This article formulates a supervised-learning problem to predict the future occurrence of slip from high-dimensional tactile information provided by a BioTac sensor, and demonstrates how different input features, slip-prediction time horizons, and available tactile information channels impact prediction accuracy.
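
The "prediction time horizon" supervision can be made concrete with a small labelling helper: each tactile frame is labelled positive if a slip event starts within the chosen horizon. The helper below illustrates that labelling scheme; it is not code from the article.

import numpy as np

def label_slip_horizon(timestamps, slip_times, horizon):
    """Label a frame 1 if a slip event starts within `horizon` seconds
    after it; a classifier trained on these labels predicts future slip."""
    timestamps = np.asarray(timestamps, dtype=float)
    labels = np.zeros(len(timestamps), dtype=int)
    for t_slip in slip_times:
        labels |= (timestamps <= t_slip) & (t_slip - timestamps <= horizon)
    return labels

ts = np.arange(0.0, 1.0, 0.1)                     # ten frames at 10 Hz
print(label_slip_horizon(ts, slip_times=[0.65], horizon=0.2))
# -> [0 0 0 0 0 1 1 0 0 0]  (the frames at 0.5 s and 0.6 s precede the slip)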

Deep learning a grasp function for grasping under gripper pose uncertainty

A new method is presented for parallel-jaw grasping of isolated objects from depth images under large gripper pose uncertainty; it trains a convolutional neural network that takes a single depth image of an object as input and outputs a score for each grasp pose across the image.
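
A toy fully convolutional version of that idea is sketched below in PyTorch: a depth image goes in, a dense score map comes out, and the best grasp is taken at the arg-max. The layer sizes are made up, and the grasp orientation dimension handled in the paper is omitted for brevity.

import torch
import torch.nn as nn

# Tiny fully convolutional sketch: depth image in, per-pixel grasp score out.
grasp_score_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=1),            # one score per grasp location
)

depth = torch.rand(1, 1, 128, 128)              # a single depth image
scores = grasp_score_net(depth)                 # shape (1, 1, 128, 128)
best = torch.argmax(scores.flatten()).item()
print(scores.shape, divmod(best, 128))          # (row, col) of the best grasp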

PointNetGPD: Detecting Grasp Configurations from Point Sets

Experiments on object grasping and clutter removal show that the proposed PointNetGPD model generalizes well to novel objects and outperforms state-of-the-art methods.
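
A compact sketch of a PointNet-style grasp scorer: a shared per-point MLP followed by order-invariant max pooling over the points inside the gripper's closing region. The layer sizes and single-score head are assumptions for illustration, not the published architecture.

import torch
import torch.nn as nn

class PointGraspScorer(nn.Module):
    """Score a grasp candidate from the points inside its closing region."""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(         # shared MLP applied per point
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Linear(128, 1)

    def forward(self, pts):                     # pts: (batch, 3, num_points)
        feat = self.point_mlp(pts).max(dim=2).values   # order-invariant pooling
        return self.head(feat)                  # one quality score per grasp

pts = torch.rand(4, 3, 256)                     # 4 candidate grasps, 256 pts each
print(PointGraspScorer()(pts).shape)            # -> torch.Size([4, 1])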

Deep learning for detecting robotic grasps

This work presents a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second, and shows that this method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
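
The cascade itself is easy to sketch: a cheap first model ranks every candidate, and only the top detections are re-evaluated by a more expensive second model. Both scorers below are stand-in lambdas over made-up candidate vectors; in the paper both stages are deep networks operating on image patches.

import numpy as np

def cascaded_detection(candidates, cheap_score, expensive_score, top_k=10):
    """Two-step cascade: rank all candidates with the cheap model, then
    re-evaluate only the top detections with the expensive one."""
    first = np.array([cheap_score(c) for c in candidates])
    shortlist = np.argsort(first)[::-1][:top_k]
    second = np.array([expensive_score(candidates[i]) for i in shortlist])
    return candidates[shortlist[np.argmax(second)]]

rng = np.random.default_rng(1)
cands = rng.uniform(-1, 1, size=(500, 4))          # made-up grasp parameters
best = cascaded_detection(
    cands,
    cheap_score=lambda c: -abs(c[0]),              # coarse, fast criterion
    expensive_score=lambda c: -np.linalg.norm(c),  # finer, slower criterion
)
print(best)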

Review of Deep Learning Methods in Robotic Grasp Detection

The current state of the art in applying deep learning methods to generalised robotic grasping is reviewed, and how each element of the deep learning approach has improved the overall performance of robotic grasp detection is discussed.

Tactile Regrasp: Grasp Adjustments via Simulated Tactile Transformations

A novel regrasp control policy is presented that uses tactile sensing to plan local grasp adjustments by virtually searching for local transformations of the tactile measurements that improve the quality of the grasp.
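
A minimal sketch of that search, assuming a grid of virtual pixel shifts of the tactile image and a user-supplied grasp-quality estimator; the toy "centred pressure" quality below is an assumption, not the paper's metric.

import numpy as np
from scipy.ndimage import shift

def plan_local_adjustment(tactile_img, quality_fn, max_shift=3):
    """Virtually translate the tactile image over a small grid of pixel
    shifts and return the shift whose simulated measurement scores best."""
    best_dxdy, best_q = (0, 0), -np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            simulated = shift(tactile_img, (dy, dx), order=1, mode="nearest")
            q = quality_fn(simulated)
            if q > best_q:
                best_q, best_dxdy = q, (dx, dy)
    return best_dxdy, best_q

img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0                              # pressure blob, off-centre

yy, xx = np.mgrid[0:16, 0:16]
dist2 = (yy - 7.5) ** 2 + (xx - 7.5) ** 2
dxdy, _ = plan_local_adjustment(img, lambda im: -np.sum(im * dist2))
print(dxdy)                                      # -> (2, 2): centre the pressure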

Stabilizing novel objects by learning to predict tactile slip

This work explores the generalization capabilities of well-known supervised learning methods, using random forest classifiers to create generalizable slip predictors in the feedback loop of an object stabilization controller, and shows that the controller can successfully stabilize previously unknown objects by predicting and counteracting slip events.
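
A minimal sketch of that loop: a random forest trained on (stand-in) tactile windows predicts slip probability, and the controller raises the commanded grip force whenever slip is predicted. The gain, threshold, and force limit are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in training data: windows of tactile features with slip labels.
X_train = rng.normal(size=(500, 12))
y_train = rng.integers(0, 2, size=500)
slip_predictor = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

def stabilize_step(tactile_window, grip_force, gain=5.0, max_force=40.0):
    """One feedback step: raise grip force in proportion to the predicted
    slip probability, up to a safety limit; otherwise hold the force."""
    p_slip = slip_predictor.predict_proba(tactile_window.reshape(1, -1))[0, 1]
    if p_slip > 0.5:
        grip_force = min(grip_force + gain * p_slip, max_force)
    return grip_force, p_slip

force = 10.0
for _ in range(5):                               # stand-in control loop
    force, p = stabilize_step(rng.normal(size=12), force)
    print(f"grip force {force:.1f} N, predicted slip probability {p:.2f}")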

Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching

A robotic pick-and-place system is presented that is capable of grasping and recognizing both known and novel objects in cluttered environments, and that handles a wide range of object categories without needing any task-specific training data for novel objects.
...