Corpus ID: 232307796

MVGrasp: Real-Time Multi-View 3D Object Grasping in Highly Cluttered Environments

@article{Kasaei2021MVGraspRM,
  title={MVGrasp: Real-Time Multi-View 3D Object Grasping in Highly Cluttered Environments},
  author={Seyed Hamidreza Mohades Kasaei and Mohammadreza Mohades Kasaei},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.10997}
}
Nowadays, service robots are increasingly entering our daily lives. In such dynamic environments, a robot frequently faces piled, packed, or isolated objects. The robot therefore needs to know how to grasp and manipulate various objects in different situations in order to help humans in everyday tasks. Most state-of-the-art grasping approaches address four degrees-of-freedom (DoF) object grasping, where the robot is forced to grasp objects from above based on grasp synthesis of a…
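The abstract contrasts 4-DoF top-down grasping with full 6-DoF grasping. The sketch below is a hypothetical illustration of that distinction (the class and function names are ours, not the paper's): a 4-DoF grasp fixes the approach direction to straight down and only varies position plus yaw, so it embeds into the 6-DoF space with zero roll and pitch, while the converse embedding does not exist in general.

```python
from dataclasses import dataclass
import math

@dataclass
class Grasp4DoF:
    # Top-down grasp: 3D position plus a single yaw rotation about
    # the vertical axis; the approach direction is fixed (downward).
    x: float
    y: float
    z: float
    yaw: float  # radians

@dataclass
class Grasp6DoF:
    # Full grasp pose: 3D position plus roll/pitch/yaw, so the
    # gripper can approach the object from any direction.
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def lift_to_6dof(g: Grasp4DoF) -> Grasp6DoF:
    """Embed a top-down grasp in the 6-DoF pose space: roll and
    pitch are zero, i.e. the gripper axis stays aligned with gravity."""
    return Grasp6DoF(g.x, g.y, g.z, roll=0.0, pitch=0.0, yaw=g.yaw)

top_down = Grasp4DoF(x=0.4, y=0.1, z=0.25, yaw=math.pi / 2)
full = lift_to_6dof(top_down)
# Every 4-DoF grasp maps to a 6-DoF grasp with a fixed approach
# direction; the converse is not true (e.g. a sideways grasp needs
# nonzero pitch and cannot be expressed with 4 DoF).
```

This is only a representation sketch; how MVGrasp actually synthesizes and ranks multi-view grasp candidates is described in the paper itself.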


6-DOF Grasp Detection for Unknown Objects Using Surface Reconstruction
Many state-of-the-art grasping approaches are constrained to top-down grasps. Reliable robotic grasping in a human-centric environment requires considering all six degrees of freedom. We use an…
Lifelong 3D Object Recognition and Grasp Synthesis Using Dual Memory Recurrent Self-Organization Networks
A hybrid model architecture consisting of a dynamically growing dual-memory recurrent neural network (GDM) and an autoencoder tackles object recognition and grasping simultaneously, and addresses the problem of catastrophic forgetting using intrinsic memory replay.

References

Showing 1-10 of 29 references
High precision grasp pose detection in dense clutter
This paper proposes two new representations of grasp candidates and quantifies the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models.
Learning robust, real-time, reactive robotic grasping
A novel approach performs object-independent grasp synthesis from depth images via deep neural networks, overcoming shortcomings in existing techniques, namely discrete sampling of grasp candidates and long computation times, and achieves better performance, particularly in clutter.
QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation
QT-Opt is introduced: a scalable self-supervised vision-based reinforcement learning framework that leverages over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters, performing closed-loop, real-world grasping that generalizes to a 96% grasp success rate on unseen objects.
Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8 s with a success rate of 93% on eight known objects with adversarial geometry.
Beyond Top-Grasps Through Scene Completion
This work presents a method that allows end-to-end top-grasp planning methods to generate full six-degree-of-freedom grasps using a single RGB-D view as input, and shows statistically significant improvements in grasp success rate when using simulated images over real camera images, especially when the real camera viewpoint is angled.
Learning to Grasp 3D Objects using Deep Residual U-Nets
This paper proposes an end-to-end 3D convolutional neural network, named Res-U-Net, to predict objects' graspable areas; the architecture is based on the U-Net structure with residual network-style blocks.
Local-LDA: Open-Ended Learning of Latent Topics for 3D Object Recognition
An open-ended 3D object recognition system is presented which concurrently learns both the object categories and the statistical features for encoding objects, together with an extension of Latent Dirichlet Allocation that learns structural semantic features for each category independently.
Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter
The proposed Volumetric Grasping Network (VGN) accepts a Truncated Signed Distance Function (TSDF) representation of the scene and directly outputs the predicted grasp quality and the associated gripper orientation and opening width for each voxel in the queried 3D volume.
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
This work formulates grasp generation as sampling a set of grasps with a variational autoencoder, then assessing and refining the sampled grasps with a grasp evaluator model; both are trained purely in simulation and work in the real world without any extra steps.
Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter
The Multi-View Picking (MVP) controller uses an active perception approach to choose informative viewpoints based directly on a distribution of grasp pose estimates in real time, reducing uncertainty in the grasp poses caused by clutter and occlusions.