Corpus ID: 237571540

CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation

@article{Wen2021CaTGraspLC,
  title={CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation},
  author={Bowen Wen and Wenzhao Lian and Kostas E. Bekris and Stefan Schaal},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.09163}
}
Task-relevant grasping is critical for industrial assembly, where downstream manipulation tasks constrain the set of valid grasps. Learning how to perform this task, however, is challenging, since task-relevant grasp labels are hard to define and annotate. There is also no consensus yet on proper representations for modeling task-relevant grasps, nor off-the-shelf tools for performing them. This work proposes a framework to learn task-relevant grasping for industrial objects without the need of time…
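At a high level, task-relevant grasping restricts a generic grasp sampler to the grasps that remain usable for the downstream task. Below is a minimal Python sketch of that selection step, assuming a hypothetical per-contact task-relevance score; the names and data structures are illustrative, not the paper's actual implementation:

import numpy as np

def select_task_relevant_grasp(candidates, stability_scores, task_heatmap,
                               contact_points, alpha=0.5):
    """Rank candidate grasps by blending grasp stability with task relevance.

    candidates       : list of candidate grasp poses (any representation)
    stability_scores : (N,) predicted grasp-success scores
    task_heatmap     : mapping from a discretized contact location to a
                       task-relevance score in [0, 1] (hypothetical structure)
    contact_points   : (N, 3) contact location of each grasp on the object
    """
    relevance = np.array([task_heatmap.get(tuple(np.round(p, 2).tolist()), 0.0)
                          for p in contact_points])
    # A grasp is only useful if it is both likely to succeed and
    # compatible with the downstream task.
    combined = alpha * np.asarray(stability_scores) + (1.0 - alpha) * relevance
    return candidates[int(np.argmax(combined))]

In the paper the task-relevance signal is learned in simulation; the heatmap lookup above merely stands in for whatever scoring function that training produces.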


References

Showing 1-10 of 53 references
Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision
TLDR: The Task-Oriented Grasping Network (TOG-Net) is proposed to jointly optimize both task-oriented grasping of a tool and the manipulation policy for that tool.
Learning Multi-Object Dense Descriptor for Autonomous Goal-Conditioned Grasping
TLDR: An autonomous method is proposed to enable grasping of a target object in a challenging yet general scene containing multiple objects of different classes; it effectively learns a dense descriptor and integrates it with a newly designed grasp affordance model.
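The retrieval step behind a dense-descriptor approach like the one summarized above can be sketched in a few lines: every pixel is mapped to a descriptor vector, and the desired grasp point is located by nearest-neighbor search against a reference descriptor. A hedged sketch, with illustrative names rather than the paper's code:

import numpy as np

def match_descriptor(descriptor_image, reference_descriptor):
    """Find the pixel whose descriptor is closest to a reference descriptor.

    descriptor_image     : (H, W, D) per-pixel descriptors from a trained network
    reference_descriptor : (D,) descriptor of the desired grasp point
    Returns (row, col) of the best-matching pixel.
    """
    H, W, D = descriptor_image.shape
    flat = descriptor_image.reshape(-1, D)
    dists = np.linalg.norm(flat - reference_descriptor, axis=1)
    idx = int(np.argmin(dists))
    return divmod(idx, W)

A goal-conditioned grasp can then be aimed at the 3D point behind the returned pixel.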
Affordance detection for task-specific grasping using deep learning
TLDR: The notion of affordances is utilized to model relations between task, object, and grasp in order to address task-specific robotic grasping, and the feasibility of the approach is demonstrated by employing an optimization-based grasp planner to compute task-specific grasps.
Task-oriented grasping with semantic and geometric scene understanding
TLDR: A key element of this work is to use a deep network to integrate contextual task cues, and defer the structured-output problem of gripper pose computation to an explicit (learned) geometric model.
Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes
TLDR: An end-to-end network is proposed that efficiently generates a distribution of 6-DoF parallel-jaw grasps directly from a depth recording of a scene; by treating 3D points of the recorded point cloud as potential grasp contacts, the grasp representation is reduced to 4-DoF, which greatly facilitates learning.
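The 4-DoF reduction works because a recorded 3D point, treated as a grasp contact, already fixes part of the pose, leaving only the orientation and width to predict. The sketch below shows how a full 6-DoF gripper pose can be assembled from a contact point, an approach direction, a finger-to-finger baseline direction, and a grasp width; the frame convention and the depth parameter are assumptions, not necessarily the paper's:

import numpy as np

def grasp_pose_from_contact(contact, approach, baseline, width, depth=0.10):
    """Assemble a 6-DoF grasp pose (4x4 matrix) from contact-based parameters.

    contact  : (3,) observed 3D point used as one fingertip's contact
    approach : (3,) direction the gripper approaches along
    baseline : (3,) direction from one fingertip toward the other
    width    : opening width of the parallel jaw
    depth    : distance from the gripper base frame to the contact line (assumed)
    """
    a = approach / np.linalg.norm(approach)
    # Project out the approach component so the frame is orthonormal.
    b = baseline - np.dot(baseline, a) * a
    b /= np.linalg.norm(b)
    n = np.cross(a, b)
    T = np.eye(4)
    T[:3, :3] = np.column_stack((b, n, a))  # column order is an assumed convention
    T[:3, 3] = contact + 0.5 * width * b - depth * a
    return T

Because the contact point comes directly from the observed point cloud, only the approach, baseline, and width need to be predicted, which is the dimensionality reduction the summary refers to.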
GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping
TLDR: This work contributes a large-scale grasp pose detection dataset with a unified evaluation system and proposes an end-to-end grasp pose prediction network for point cloud inputs, where the network learns the approach direction and operation parameters in a decoupled manner.
Towards Robotic Assembly by Predicting Robust, Precise and Task-oriented Grasps
TLDR: A method is proposed that decomposes the problem and optimizes for grasp robustness, precision, and task performance by learning three cascaded networks; the method is evaluated in simulation on three common assembly tasks.
S4G: Amodal Single-view Single-Shot SE(3) Grasp Detection in Cluttered Scenes
TLDR: This paper studies 6-DoF grasping by a parallel gripper in a cluttered scene captured with a commodity depth sensor from a single viewpoint, and proposes a single-shot grasp proposal network trained with synthetic data and tested in real-world scenarios.
Semantic and geometric reasoning for robotic grasping: a probabilistic logic approach
TLDR: A probabilistic logic approach for robot grasping is proposed that improves grasping capabilities by leveraging semantic object parts; it provides the robot with semantic reasoning about the most likely object part to be grasped, given the task constraints and object properties, while also dealing with the uncertainty of visual perception and grasp planning.
Task-oriented Grasping in Object Stacking Scenes with CRF-based Semantic Model
TLDR: A Conditional Random Field (CRF) is constructed to model the semantic content of object regions, capturing the incompatibility of task labels and the continuity of task regions, which greatly reduces the interference of overlaps and occlusions in object-stacking scenes.
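The two modeling ideas named in the summary, incompatibility of task labels and continuity of task regions, map naturally onto the unary and pairwise terms of a standard CRF energy. A toy sketch under that assumption (illustrative only, not the paper's model):

import numpy as np

def crf_energy(labels, unary_cost, adjacency, incompat, smooth_weight=1.0):
    """Toy CRF energy over per-region task labels.

    labels     : (N,) integer task label assigned to each object region
    unary_cost : (N, L) cost of assigning each of L labels to each region
    adjacency  : list of (i, j) index pairs of neighboring regions
    incompat   : (L, L) penalty for incompatible task labels on neighbors
    """
    energy = sum(unary_cost[i, l] for i, l in enumerate(labels))
    for i, j in adjacency:
        # Pairwise term: penalize incompatible labels on adjacent regions,
        # which encourages continuous task regions.
        energy += smooth_weight * incompat[labels[i], labels[j]]
    return energy

Minimizing this energy over label assignments yields task regions that are internally continuous and mutually compatible across region boundaries.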