RoboCSE: Robot Common Sense Embedding
@article{Daruna2019RoboCSERC,
  title   = {RoboCSE: Robot Common Sense Embedding},
  author  = {Angel Andres Daruna and Weiyu Liu and Zsolt Kira and S. Chernova},
  journal = {2019 International Conference on Robotics and Automation (ICRA)},
  year    = {2019},
  pages   = {9777-9783}
}
Autonomous service robots require computational frameworks that allow them to generalize knowledge to new situations in a manner that models uncertainty while scaling to real-world problem sizes. The Robot Common Sense Embedding (RoboCSE) showcases a class of computational frameworks, multi-relational embeddings, that have not been leveraged in robotics to model semantic knowledge. We validate RoboCSE on a realistic home environment simulator (AI2Thor) to measure how well it generalizes learned…
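The abstract frames RoboCSE as a multi-relational embedding: semantic knowledge is stored as (entity, relation, entity) triples and queried by scoring candidate triples in a learned vector space. The sketch below illustrates that idea only; the TransE-style translation scorer, the entity and relation names, and the dimensions are illustrative assumptions, not the specific model used in the paper.

```python
# Minimal sketch of multi-relational embedding scoring (TransE-style).
# Names, dimensions, and the translation-based scorer are illustrative
# assumptions, not RoboCSE's actual model.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

entities = ["mug", "kitchen", "bedroom", "spoon"]
relations = ["atLocation", "usedFor"]

# Randomly initialized here; in practice these vectors are trained so that
# head + relation lands close to tail for true triples (h, r, t).
E = {e: rng.normal(size=DIM) for e in entities}
R = {r: rng.normal(size=DIM) for r in relations}

def score(head, rel, tail):
    """Lower is better: distance between translated head and tail."""
    return np.linalg.norm(E[head] + R[rel] - E[tail])

# Rank candidate locations for "mug" under the atLocation relation.
candidates = ["kitchen", "bedroom"]
ranked = sorted(candidates, key=lambda t: score("mug", "atLocation", t))
print(ranked)
```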
24 Citations
Learning Instance-Level N-Ary Semantic Knowledge At Scale For Robots Operating in Everyday Environments
- Computer Science · Robotics: Science and Systems
- 2021
This work proposes a transformer neural network that generalizes knowledge directly from observations of object instances, obtaining a 10% improvement in predicting unknown properties of novel object instances while reducing training and inference time by a factor of 150.
Towards Robust One-shot Task Execution using Knowledge Graph Embeddings
- Computer Science · 2021 IEEE International Conference on Robotics and Automation (ICRA)
- 2021
This work addresses the problem of one-shot task execution, in which a robot must generalize a single demonstration or prototypical example of a task plan to a new execution environment, and integrates task plans with domain knowledge to infer task plan constituents for new execution environments.
Learning Embeddings that Capture Spatial Semantics for Indoor Navigation
- Computer Science · ArXiv
- 2021
This work studies how object embeddings that capture spatial semantic priors can guide search and navigation tasks in a structured environment, and proposes a method to incorporate such spatial semantic awareness in robots by leveraging pre-trained language models and multi-relational knowledge bases as object embeddings.
Explainable Knowledge Graph Embedding: Inference Reconciliation for Knowledge Inferences Supporting Robot Actions
- Computer Science · ArXiv
- 2022
Results from the simulated robot evaluation indicate that the pedagogical approach used to explain the inferences of a learned, black-box knowledge graph representation (a knowledge graph embedding) enables non-experts to correct erratic robot behaviors caused by nonsensical beliefs within the black box.
Continual Learning of Knowledge Graph Embeddings
- Computer Science · IEEE Robotics and Automation Letters
- 2021
Through an experimental evaluation with several knowledge graphs and embedding representations, this work provides insights about trade-offs that help practitioners match a semantics-driven robotics application to a suitable continual knowledge graph embedding method.
Fit to Measure: Reasoning about Sizes for Robust Object Recognition
- Computer Science · AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering
- 2021
This paper hypothesises that knowledge of the typical size of objects could significantly improve the accuracy of an object recognition system, and presents an approach to integrating knowledge about object sizes in an ML-based architecture.
GATER: Learning Grasp-Action-Target Embeddings and Relations for Task-Specific Grasping
- Computer Science · IEEE Robotics and Automation Letters
- 2022
The proposed algorithm GATER (Grasp-Action-Target Embeddings and Relations) models the relationships among grasping tools, actions, and target objects in embedding space, and shows potential for human behavior prediction and human-robot interaction.
Towards Explainable Embodied AI
- Computer Science
- 2021
The proposed explainability methods for embodied AI facilitate the analysis of policy failure cases in different out-of-distribution scenarios and show that embodied AI policies can be understood with feature attributions that explain how input state features influence the predicted actions.
Leveraging Semantics for Incremental Learning in Multi-Relational Embeddings
- Computer Science · ArXiv
- 2019
This work presents Incremental Semantic Initialization (ISI), an incremental learning approach that enables novel semantic concepts to be initialized in the embedding in relation to previously learned embeddings of semantically similar concepts.
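The ISI summary above describes initializing the embedding of a novel concept from previously learned, semantically similar concepts. A hedged sketch of that idea, assuming a simple centroid over neighbor embeddings (the paper's exact initialization rule may differ), could look like this:

```python
# Illustrative sketch only: initialize a new entity vector as the mean of
# embeddings of semantically similar, already-learned entities. The actual
# ISI initialization procedure may differ from this simple average.
import numpy as np

def initialize_new_concept(new_concept, similar_concepts, embeddings):
    """embeddings: dict mapping known concept names to np.ndarray vectors."""
    neighbors = [embeddings[c] for c in similar_concepts if c in embeddings]
    if not neighbors:                      # no known semantic neighbors
        dim = len(next(iter(embeddings.values())))
        return np.random.default_rng().normal(size=dim)
    return np.mean(neighbors, axis=0)      # centroid of similar concepts

# Usage: seed "teapot" near "kettle" and "mug" before further training.
emb = {"kettle": np.ones(8), "mug": np.zeros(8)}
emb["teapot"] = initialize_new_concept("teapot", ["kettle", "mug"], emb)
```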
Robots With Commonsense: Improving Object Recognition Through Size and Spatial Awareness
- Computer Science · AAAI Spring Symposium: MAKE
- 2022
A novel method is presented to equip DL-based object recognition with the ability to reason about the typical size and spatial relations of objects; results show that the proposed hybrid architecture significantly outperforms DL-only solutions.
References
Showing 1-10 of 27 references
RoboBrain: Large-Scale Knowledge Engine for Robots
- Computer Science · ArXiv
- 2014
A knowledge engine that learns and shares knowledge representations for robots to carry out a variety of tasks is introduced, and its use is demonstrated in three important research areas: grounding natural language, perception, and planning, which are key building blocks for many robotic tasks.
Situated Bayesian Reasoning Framework for Robots Operating in Diverse Everyday Environments
- Computer Science · ISRR
- 2017
This paper presents an approach for automatically generating a compact semantic knowledge base relevant to a robot's particular operating environment, given only a small number of object labels obtained from object recognition or the robot's task description, represented as a statistical relational model in the form of a Bayesian Logic Network.
Visual Semantic Planning Using Deep Successor Representations
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This work addresses the problem of visual semantic planning: the task of predicting a sequence of actions from visual observations that transform a dynamic environment from an initial state to a goal state, and develops a deep predictive model based on successor representations.
KnowRob: A knowledge processing infrastructure for cognition-enabled robots
- Computer Science · Int. J. Robotics Res.
- 2013
This article introduces the KnowRob knowledge processing system, a system specifically designed to provide autonomous robots with the knowledge needed for performing everyday manipulation tasks, evaluates the system's scalability, and presents different integrated experiments that show its versatility and comprehensiveness.
KnowRob 2.0 — A 2nd Generation Knowledge Processing Framework for Cognition-Enabled Robotic Agents
- Computer Science · 2018 IEEE International Conference on Robotics and Automation (ICRA)
- 2018
Novel features and extensions of KnowRob 2.0 substantially increase the capabilities of robotic agents to acquire open-ended manipulation skills and competence, to reason about how to perform manipulation actions more realistically, and to acquire commonsense knowledge.
ORO, a knowledge management platform for cognitive architectures in robotics
- Computer Science · 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems
- 2010
An embeddable knowledge processing framework, along with a common-sense ontology designed for robotics, that in turn enables reasoning and the implementation of other advanced cognitive functions such as events, categorization, memory management, and reasoning on parallel cognitive models.
Robobarista: Object Part Based Transfer of Manipulation Trajectories from Crowd-Sourcing in 3D Pointclouds
- Computer Science · ISRR
- 2015
A novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts is presented and a deep learning model is designed that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point clouds, language and trajectory.
KNOWROB-MAP - knowledge-linked semantic object maps
- Computer Science · 2010 10th IEEE-RAS International Conference on Humanoid Robots
- 2010
This paper presents KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects and with common-sense knowledge describing what the objects can be used for.
Reasoning about Object Affordances in a Knowledge Base Representation
- Computer Science · ECCV
- 2014
This work learns a knowledge base (KB) using a Markov Logic Network (MLN) and shows that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zero-shot affordance prediction and object recognition given human poses.
Analogical Inference for Multi-relational Embeddings
- Computer Science · ICML
- 2017
This paper proposes a novel framework for optimizing the latent representations with respect to the analogical properties of the embedded entities and relations, and offers an elegant unification of several well-known methods in multi-relational embedding.
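As a rough sketch of the approach summarized above (notation simplified; see the cited paper for the exact formulation), analogy-based multi-relational embeddings score a triple bilinearly while constraining the relation matrices to be normal and mutually commutative:

```latex
% Sketch of analogy-based bilinear scoring (simplified, illustrative notation).
% \mathbf{v}_h, \mathbf{v}_t : entity embeddings; W_r : relation matrix.
\phi(h, r, t) = \mathbf{v}_h^{\top} W_r \, \mathbf{v}_t,
\qquad
W_r W_r^{\top} = W_r^{\top} W_r \quad \text{(normality)},
\qquad
W_r W_{r'} = W_{r'} W_r \quad \text{(commutativity)}.
```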