Corpus ID: 226306649

Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping

@inproceedings{Murali2020SameOD,
  title={Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping},
  author={Adithyavairavan Murali and Weiyu Liu and Kenneth Marino and Sonia Chernova and Abhinav Gupta},
  booktitle={CoRL},
  year={2020}
}
Despite the enormous progress and generalization in robotic grasping in recent years, existing methods have yet to scale and generalize task-oriented grasping to the same extent. This is largely due to the limited scale of existing datasets, both in the number of objects and in the tasks studied. We address these concerns with the TaskGrasp dataset, which is more diverse in both objects and tasks, and an order of magnitude larger than previous datasets. The dataset contains 250K task-oriented… 
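
The abstract's central artifact is a collection of (object, task, grasp) annotations. As a rough illustration of what one such record could hold, here is a minimal Python sketch; the field names and shapes are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TaskGraspLabel:
    """One task-oriented grasp annotation (illustrative fields only,
    not the TaskGrasp dataset's real schema)."""
    object_id: str           # e.g. "pan_017" (hypothetical identifier)
    point_cloud: np.ndarray  # (N, 3) object points
    grasp_pose: np.ndarray   # (4, 4) 6-DoF gripper pose in the object frame
    task: str                # e.g. "pour", "scoop"
    suitable: bool           # whether this grasp is appropriate for this task
```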

GATER: Learning Grasp-Action-Target Embeddings and Relations for Task-Specific Grasping

TLDR
The proposed algorithm GATER (Grasp-Action-Target Embeddings and Relations) models the relationships among grasping tools, actions, and target objects in embedding space, and shows potential for human behavior prediction and human-robot interaction.
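
The TLDR's embedding-space formulation suggests a translational relation among the three entities. Below is a minimal TransE-style sketch in PyTorch; the class, dimensions, and loss are assumptions for illustration, not GATER's actual architecture.

```python
import torch
import torch.nn as nn

class TripletEmbedding(nn.Module):
    """Illustrative TransE-style model: grasp + action ≈ target in embedding space."""
    def __init__(self, n_grasps, n_actions, n_targets, dim=64):
        super().__init__()
        self.grasp = nn.Embedding(n_grasps, dim)
        self.action = nn.Embedding(n_actions, dim)
        self.target = nn.Embedding(n_targets, dim)

    def score(self, g, a, t):
        # Lower distance = more plausible (grasp, action, target) relation.
        return (self.grasp(g) + self.action(a) - self.target(t)).norm(dim=-1)

    def forward(self, g, a, t_pos, t_neg, margin=1.0):
        # Margin ranking loss: observed triples should score lower than corrupted ones.
        return torch.relu(margin + self.score(g, a, t_pos) - self.score(g, a, t_neg)).mean()
```

Training would minimize this margin loss over observed triples against randomly corrupted targets, so that grasp plus action lands near the correct target embedding.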

Learning Object Relations with Graph Neural Networks for Target-Driven Grasping in Dense Clutter

TLDR
A target-driven grasping system that simultaneously considers object relations and predicts 6-DoF grasp poses is proposed, along with a shape-completion-assisted grasp pose sampling method that improves sample quality and, consequently, grasping efficiency.
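
To make the "object relations" idea concrete, here is one round of message passing over an object-relation graph in plain PyTorch; the layer design and the adjacency-matrix encoding of the cluttered scene are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class RelationGNNLayer(nn.Module):
    """One message-passing step over an object-relation graph (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message computed from a node pair
        self.upd = nn.GRUCell(dim, dim)      # node update from aggregated messages

    def forward(self, x, adj):
        # x: (N, dim) per-object features; adj: (N, N) 0/1 relation matrix.
        N, dim = x.shape
        pair = torch.cat([x.unsqueeze(1).expand(N, N, dim),
                          x.unsqueeze(0).expand(N, N, dim)], dim=-1)
        messages = self.msg(pair) * adj.unsqueeze(-1)   # zero out non-edges
        agg = messages.sum(dim=1)                       # sum incoming messages
        return self.upd(agg, x)                         # updated node embeddings
```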

Data-Driven Robotic Grasping in the Wild

TLDR
This work hypothesizes that visual perception alone is insufficient for robustness and presents a self-supervised, tactile-based re-grasping framework that closes the loop on grasp execution, striving to go beyond robotic pick-and-place and generalize to diverse semantic manipulation tasks.

Deep Learning Approaches to Grasp Synthesis: A Review

TLDR
A systematic review of publications from the last decade of robotic object grasping identifies four common methodologies: sampling-based approaches, direct regression, reinforcement learning, and exemplar approaches, plus two 'supporting methods' that use deep learning to aid the grasping process: shape approximation and affordances.

Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity

TLDR
This work trains a policy to co-optimize pre-grasp and grasping motions, which results in the emergent behavior of pushing the object against a wall in order to rotate and then grasp it, and demonstrates the generality of the learned policy across environment variations in simulation.

Hierarchical Representations and Explicit Memory: Learning Effective Navigation Policies on 3D Scene Graphs using Graph Neural Networks

TLDR
This work proposes a graph neural network architecture and shows how to embed a 3D scene graph into an agent-centric feature space, which enables the robot to learn policies that map 3D scene graphs to a platform-agnostic control space (e.g., go straight, turn left).

Human-to-Robot Imitation in the Wild

TLDR
A simple sampling-based policy optimization approach, a novel objective function for aligning human and robot videos, an exploration method to boost sample efficiency, and an efficient real-world policy learning scheme that improves through interaction are introduced.

Machine Learning for Robotic Manipulation

TLDR
This document surveys recent robotics conferences and identifies the major trends with which machine learning techniques have been applied to the challenges of robotic manipulation.

Affordance embeddings for situated language understanding

Much progress in AI over the last decade has been driven by advances in natural language processing technology, in turn facilitated by large datasets and increased computation power used to train…

Learning Dexterous Manipulation from Exemplar Object Trajectories and Pre-Grasps

TLDR
The experiments validate that PGDM's exploration strategy, induced by a surprisingly simple ingredient (a single pre-grasp pose), matches the performance of prior methods, which require expensive per-task feature/reward engineering, expert supervision, and hyper-parameter tuning.

References

SHOWING 1-10 OF 44 REFERENCES

CAGE: Context-Aware Grasping Engine

TLDR
The Context-Aware Grasping Engine is introduced, which combines a novel semantic representation of grasp contexts with a neural network structure based on the Wide & Deep model, capable of capturing complex reasoning patterns.
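
Since the TLDR names the Wide & Deep model, a generic Wide & Deep scorer is sketched below for orientation; the input features and sizes are placeholders, not CAGE's actual semantic grasp-context representation.

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    """Generic Wide & Deep scorer: a linear 'wide' path over sparse cross
    features plus a 'deep' MLP over dense features, summed into one logit."""
    def __init__(self, n_wide, n_deep, hidden=64):
        super().__init__()
        self.wide = nn.Linear(n_wide, 1)               # memorization path
        self.deep = nn.Sequential(                     # generalization path
            nn.Linear(n_deep, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, wide_x, deep_x):
        # Probability that a grasp fits the given context (illustrative output).
        return torch.sigmoid(self.wide(wide_x) + self.deep(deep_x))
```

The wide linear path memorizes sparse feature crosses while the deep MLP generalizes over dense features; per the TLDR, CAGE pairs such a combiner with its own semantic representation of grasp contexts.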

Task-oriented grasping with semantic and geometric scene understanding

TLDR
A key element of this work is to use a deep network to integrate contextual task cues, and defer the structured-output problem of gripper pose computation to an explicit (learned) geometric model.

Semantic and geometric reasoning for robotic grasping: a probabilistic logic approach

TLDR
A probabilistic logic approach for robot grasping is proposed, which improves grasping capabilities by leveraging semantic object parts and provides the robot with semantic reasoning skills about the most likely object part to be grasped, given the task constraints and object properties, while also dealing with the uncertainty of visual perception and grasp planning.

Affordance detection for task-specific grasping using deep learning

TLDR
The notion of affordances is used to model relations between task, object, and grasp in order to address task-specific robotic grasping, and the feasibility of this approach is demonstrated by employing an optimization-based grasp planner to compute task-specific grasps.

End-to-End Learning of Semantic Grasping

TLDR
A semantic grasping framework that learns object detection, classification, and grasp planning in an end-to-end fashion is presented and it is shown that jointly training the model with auxiliary data consisting of non-semantic grasping data, as well as semantically labeled images without grasp actions, has the potential to substantially improve semantic grasping performance.
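
The joint training with auxiliary data described here amounts to a multi-task objective in which some heads only receive gradients from samples that carry the corresponding labels. A minimal sketch of such a masked joint loss follows; the names and structure are assumed for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # grasp-success head, supervised on all samples
ce = nn.CrossEntropyLoss()     # semantic class head, supervised on a subset

def joint_loss(grasp_logit, grasp_ok, class_logits, class_label, has_class_label):
    # grasp_logit, grasp_ok: (B,) floats; class_logits: (B, C);
    # class_label: (B,) long; has_class_label: (B,) bool mask.
    loss = bce(grasp_logit, grasp_ok)                 # uses every grasp attempt
    if has_class_label.any():                         # auxiliary semantic labels
        loss = loss + ce(class_logits[has_class_label],
                         class_label[has_class_label])
    return loss
```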

Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching

TLDR
A robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments and that handles a wide range of object categories without needing any task-specific training data for novel objects is presented.

Learning Grasp Affordance Reasoning Through Semantic Relations

TLDR
This work uses Markov Logic Networks to build a knowledge base graph representation to obtain a probability distribution of grasp affordances for an object and defines semantics as a combination of multiple attributes, which yields benefits in terms of generalisation for grasp affordance prediction.

Learning task constraints for robot grasping using graphical models

TLDR
This paper shows how an object representation and a grasp generated on it can be integrated with the task requirements and presents a system designed to structure data generation and constraint learning processes that is applicable to new tasks, embodiments and sensory data.

Learning Human Priors for Task-Constrained Grasping

TLDR
This paper formulates task-based robotic grasping as a feature learning problem, using a human demonstrator to provide examples of grasps associated with a specific task, and learns a representation such that similarity in task is reflected by similarity in features.

Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours

  • Lerrel Pinto, Abhinav Gupta
  • 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016
TLDR
This paper takes the leap of increasing the available training data to 40 times more than prior work, yielding a dataset of 50K data points collected over 700 hours of robot grasping attempts, which makes it possible to train a Convolutional Neural Network to predict grasp locations without severe overfitting.
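
The TLDR describes a CNN that predicts grasp outcomes from image patches; the paper discretizes the grasp angle into 18 bins and scores each. The small network below is a simplified stand-in for that setup, not the paper's AlexNet-scale architecture.

```python
import torch
import torch.nn as nn

N_ANGLE_BINS = 18  # the paper's reported angle discretization (10° over 180°)

class PatchGraspNet(nn.Module):
    """Predict, for an image patch, a success score per discretized grasp angle
    (simplified illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, N_ANGLE_BINS)

    def forward(self, patch):
        # patch: (B, 3, H, W) crop centered on a candidate grasp point.
        z = self.features(patch).flatten(1)
        return torch.sigmoid(self.head(z))  # per-angle grasp success probability
```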