RoboSherlock: Unstructured information processing for robot perception

@inproceedings{Beetz2015RoboSherlockUI,
  title={RoboSherlock: Unstructured information processing for robot perception},
  author={Michael Beetz and Ferenc B{\'a}lint-Bencz{\'e}di and Nico Blodow and Daniel Nyga and Thiemo Wiedemeyer and Zolt{\'a}n-Csaba M{\'a}rton},
  booktitle={2015 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2015},
  pages={1549-1556}
}
We present RoboSherlock, an open source software framework for implementing perception systems for robots performing human-scale everyday manipulation tasks. In RoboSherlock, perception and interpretation of realistic scenes are formulated as an unstructured information management (UIM) problem. The application of the UIM principle supports the implementation of perception systems that can answer task-relevant queries about objects in a scene, boost object recognition performance by combining… 

RoboSherlock: Cognition-enabled Robot Perception for Everyday Manipulation Tasks

This work presents RoboSherlock, a knowledge-enabled cognitive perception system for mobile robots performing human-scale everyday manipulation tasks, and demonstrates the potential of the proposed framework through feasibility studies of systems for real-world scene perception built on top of the framework.

Amortized Object and Scene Perception for Long-term Robot Manipulation

This paper introduces an amortized component that spreads perception tasks throughout the execution cycle and asynchronously integrates results from logged images into a symbolic and numeric representation that forms the perceptual belief state of the robot.

A Knowledge-Based Approach to Robotic Perception using Unstructured Information Management

This work proposes to demonstrate, by combining OpenEase, an online framework for knowledge representation and reasoning, how knowledge processing can boost the perception capabilities of a robotic agent performing household chores.

A Knowledge-Based Approach to Robotic Perception using Unstructured Information Management (Demonstration)

This work proposes to demonstrate, by combining these two frameworks, how knowledge processing can boost the perception capabilities of a robotic agent performing household chores.

Managing Belief States for Service Robots: Dynamic Scene Perception and Spatio-temporal Memory

ROBOSHERLOCK, an open-source software framework for unstructured information processing in robot perception, is presented, and a feasibility study of a perception system built on top of the framework is sketched that indicates the potential of the paradigm for real-world scene perception.

Continuous Visual World Modeling for Autonomous Robot Manipulation

This work presents a visual world modeling system for service robots to generate and maintain accurate models of their environments for continuous scenarios and shows that the system produces consistent perception outcomes suitable for different manipulation tasks.

Scaling perception towards autonomous object manipulation — in knowledge lies the power

This paper presents a self-adaptive robotic perception system that acts as a planner for task-aware robot manipulation and enables querying over a broad domain, achieved by extending the existing perception framework, ROBOSHERLOCK, with the capability to adapt its perception pipelines to the query using knowledge-based reasoning.

Robotic Understanding of Object Semantics by Referring to a Dictionary

An approach is proposed to enable robots not only to detect objects in a scene but also to understand and reason about their working environments, along with the applicability of the proposed method on robots.

Imagination-enabled Robot Perception

This work investigates a variation of robot perception tasks suitable for robots accomplishing everyday manipulation tasks, such as household robots or a robot in a retail store, and proposes a perception system that maintains its beliefs about its environment as a scene graph with physics simulation and visual rendering.

Towards a Framework for Visual Intelligence in Service Robotics: Epistemic Requirements and Gap Analysis

The epistemic requirements for Visual Intelligence are analyzed both in a top-down fashion, using existing frameworks for human-like Visual Intelligence in the literature, and from the bottom up, based on the errors emerging from object recognition trials in a real-world robotic scenario.
...

References

Showing 1-10 of 25 references

A Knowledge Processing Service for Robots and Robotics/AI Researchers

Using OPEN-EASE, users can retrieve the memorized experiences of manipulation episodes and ask queries regarding what the robot saw, reasoned, and did, as well as how the robot did it, why, and what effects it caused.

Managing Belief States for Service Robots: Dynamic Scene Perception and Spatio-temporal Memory

ROBOSHERLOCK, an open-source software framework for unstructured information processing in robot perception, is presented, and a feasibility study of a perception system built on top of the framework is sketched that indicates the potential of the paradigm for real-world scene perception.

KNOWROB — knowledge processing for autonomous personal robots

  • M. Tenorth, M. Beetz
  • Computer Science
    2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • 2009
KNOWROB is a first-order knowledge representation based on description logics that provides specific mechanisms and tools for action-centered representation, for the automated acquisition of grounded concepts through observation and experience, for reasoning about and managing uncertainty, and for fast inference — knowledge processing features that are particularly necessary for autonomous robot control.

Semantic Object Maps for robotic housework - representation, acquisition and use

The semantic object maps presented in this article, called SOM+, extend the first generation of SOMs presented by Rusu et al. in that the representation of SOM+ is designed more thoroughly and SOM+ also includes knowledge about the appearance and articulation of furniture objects.

Multi-cue 3D object recognition in knowledge-based vision-guided humanoid robot system

A design and implementation of a knowledge-based visual 3D object recognition system with multi-cue integration using a particle filter technique is presented, and the system is able to generate vision-guided humanoid behaviors without considering visual processing functions.

Multi-modal Semantic Place Classification

A multi-modal place classification system that allows a mobile robot to identify places and recognize semantic categories in an indoor environment using a high-level cue integration scheme based on a Support Vector Machine that learns how to optimally combine and weight each cue.

PR2 looking at things — Ensemble learning for unstructured information processing with Markov logic networks

A novel combination method is proposed that structures perception as a two-step process, and impressive categorization performance can be achieved by combining the employed expert perception methods in a synergistic manner for object categorization.

Towards autonomous robotic butlers: Lessons learned with the PR2

A new task-level executive system, SMACH, based on hierarchical concurrent state machines, controls the overall behavior of the system and integrates several new components that are built on top of the PR2's current capabilities.

BLORT: The Blocks World Robotic Vision Toolbox

The toolbox integrates state-of-the-art methods for detection and learning of novel objects, and for recognition and tracking of learned models, and allows handling of diverse scenarios, though of course these methods have their own particular limitations.

Furniture Models Learned from the WWW

In this article, we investigate how autonomous robots can exploit the high-quality information already available from the WWW concerning 3-D models of office furniture. Apart from the hobbyist effort…