Simitate: A Hybrid Imitation Learning Benchmark

@inproceedings{Memmesheimer2019SimitateAH,
  title={Simitate: A Hybrid Imitation Learning Benchmark},
  author={Raphael Memmesheimer and Ivanna Mykhalchyshyna and Viktor Seib and Dietrich Paulus},
  booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2019},
  pages={5243-5249}
}
We present Simitate, a hybrid benchmarking suite for evaluating imitation learning approaches. It comprises a dataset of 1938 sequences in which humans perform daily activities in a realistic environment, tightly coupled with an integration into a simulator. We provide RGB and depth streams at a resolution of $960 \times 540$ at 30 Hz, together with accurate 6-DOF ground-truth poses for the demonstrator's hand and the object at 120 Hz. Along with our…
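Since the camera streams (30 Hz) and the ground-truth poses (120 Hz) run at different rates, using them together requires aligning each frame with the nearest pose sample. The sketch below illustrates that alignment on synthetic timestamps; Simitate's actual file layout, field names, and timestamp format are not assumed here.

```python
# Hedged sketch: aligning a 120 Hz ground-truth pose stream with
# 30 Hz RGB-D frames by nearest timestamp. Timestamps are synthetic;
# no Simitate-specific loader or schema is assumed.
from bisect import bisect_left

def nearest_pose_index(pose_ts, frame_t):
    """Return the index in the sorted pose timestamps closest to frame_t."""
    i = bisect_left(pose_ts, frame_t)
    if i == 0:
        return 0
    if i == len(pose_ts):
        return len(pose_ts) - 1
    # pick the closer of the two neighbouring pose samples
    return i if pose_ts[i] - frame_t < frame_t - pose_ts[i - 1] else i - 1

# synthetic streams: camera frames at 30 Hz, pose samples at 120 Hz
frame_ts = [k / 30.0 for k in range(10)]
pose_ts = [k / 120.0 for k in range(40)]

aligned = [nearest_pose_index(pose_ts, t) for t in frame_ts]
print(aligned)  # every 4th pose sample lines up with a frame
```

With a 4:1 rate ratio, each frame maps onto every fourth pose sample; in practice the same nearest-neighbour lookup also handles jittered or dropped timestamps.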

Citations

What Matters in Learning from Offline Human Demonstrations for Robot Manipulation

This study analyzes the most critical challenges when learning from offline human data for manipulation and highlights opportunities for learning from human datasets, such as the ability to learn proficient policies on challenging, multi-stage tasks beyond the scope of current reinforcement learning methods.

The MAGICAL Benchmark for Robust Imitation

Using the MAGICAL suite, it is confirmed that existing IL algorithms overfit significantly to the context in which demonstrations are provided, and it is suggested that new approaches will be needed in order to robustly generalise demonstrator intent.

DERAIL: Diagnostic Environments for Reward And Imitation Learning

This work develops a suite of simple diagnostic tasks that test individual facets of algorithm performance in isolation, and confirms that algorithm performance is highly sensitive to implementation details.

RLDS: an Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning

We introduce RLDS (Reinforcement Learning Datasets), an ecosystem for recording, replaying, manipulating, annotating and sharing data in the context of Sequential Decision Making (SDM) including

DeepClaw: A Robotic Hardware Benchmarking Platform for Learning Object Manipulation*

This work proposes a hierarchical pipeline of software integration, including localization, recognition, grasp planning, and motion planning, to streamline learning-based robot control, data collection, and experiment validation towards shareability and reproducibility.

Adaptive Learning Methods for Autonomous Mobile Manipulation in RoboCup@Home

Team homer@UniKoblenz describes its approaches, with a special focus on the demonstration at this year's RoboCup@Home finals, which includes semantic exploration, adaptive programming by demonstration, and touch-enforcing manipulation.

SL-DML: Signal Level Deep Metric Learning for Multimodal One-Shot Action Recognition

This work proposes a metric learning approach to reduce the action recognition problem to a nearest neighbor search in embedding space, which generalizes well in experiments on the UTD-MHAD dataset for inertial, skeleton and fused data and the Simitate dataset for motion capturing data.

Gimme Signals: Discriminative signal encoding for multimodal activity recognition

The focus was to find an approach that generalizes well across different sensor modalities without specific adaptations while still achieving good results; it defines the current best CNN-based approach on the NTU RGB+D 120 dataset.

A unified approach for action recognition on various data modalities

It is shown that the approach generalizes well across 5 different data modalities and achieves comparable results on 4 publicly available datasets and the MMAct challenge dataset.

Knowledge Acquisition and Reasoning Systems for Service Robots: A Short Review of the State of the Art

  • P. K. Prasad, W. Ertel
  • Computer Science
    2020 5th International Conference on Robotics and Automation Engineering (ICRAE)
  • 2020
This paper reviews state-of-the-art artificial intelligence approaches that enable robots to acquire and process knowledge of their environment and situation, compiling the modern techniques used in this field on the way towards robotic awareness.

References

Showing 1-10 of 51 references

One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL

The MetaMimic algorithm is introduced, training what are, to the authors' knowledge, the largest existing neural networks for deep RL, and showing that larger networks with normalization are needed to achieve one-shot high-fidelity imitation on a challenging manipulation task.

One-Shot Imitation Learning

A meta-learning framework for one-shot imitation learning is presented: ideally, robots should be able to learn from very few demonstrations of any given task and instantly generalize to new situations of the same task, without requiring task-specific engineering.

One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning

This work presents an approach for one-shot learning from a video of a human: human and robot demonstration data from a variety of previous tasks are used to build up prior knowledge through meta-learning, and by combining this prior knowledge with a single video demonstration from a human, the robot can perform the task that the human demonstrated.

An Algorithmic Perspective on Imitation Learning

This work provides an introduction to imitation learning, dividing imitation learning into directly replicating desired behavior and learning the hidden objectives of the desired behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]).

Deep Imitation Learning for Complex Manipulation Tasks from Virtual Reality Teleoperation

It is described how consumer-grade Virtual Reality headsets and hand tracking hardware can be used to naturally teleoperate robots to perform complex tasks and how imitation learning can learn deep neural network policies that can acquire the demonstrated skills.

End-to-End Driving Via Conditional Imitation Learning

This work evaluates different architectures for conditional imitation learning in vision-based driving and conducts experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area.

Evaluation of robot imitation attempts: comparison of the system's and the human's perspectives

A series of experimental runs and a small pilot user study were conducted to evaluate the performance of a system designed for robot imitation, suggesting that there is a good alignment between this quantitative, system-centered assessment and the more qualitative, human-centered assessment of imitative performance.

Are we ready for autonomous driving? The KITTI vision benchmark suite

The autonomous driving platform is used to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world.

Towards Weakly-Supervised Action Localization

An effective method for extracting human tubes, combining a state-of-the-art human detector with a tracking-by-detection approach, is introduced, along with a new realistic dataset for action localization, named DALY, which is an order of magnitude larger than standard action localization datasets.

Query-Efficient Imitation Learning for End-to-End Autonomous Driving

An extension of DAgger, called SafeDAgger, is proposed that is query-efficient and more suitable for end-to-end autonomous driving; a significant speed-up in convergence is observed, conjectured to be due to the effect of automated curriculum learning.
...