Learning to Augment Synthetic Images for Sim2Real Policy Transfer

@article{Pashevich2019LearningTA,
  title={Learning to Augment Synthetic Images for Sim2Real Policy Transfer},
  author={Alexander Pashevich and Robin Strudel and Igor Kalevatykh and Ivan Laptev and Cordelia Schmid},
  journal={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2019},
  pages={2651-2657}
}
Vision and learning have made significant progress that could improve robotics policies for complex tasks and environments. Learning deep neural networks for image understanding, however, requires large amounts of domain-specific visual data. While collecting such data from real robots is possible, such an approach limits scalability, as learning policies typically requires thousands of trials. In this work we attempt to learn manipulation policies in simulated environments. Simulators enable…
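The paper's central idea, as the abstract suggests, is to learn a sequence of image augmentations that makes synthetic renderings usable for training policies that transfer to real images. Below is a minimal sketch of applying such a sequence of augmentations to a rendered frame; the specific augmentation functions, parameters, and policy depth are illustrative assumptions, not the ones learned in the paper.

import numpy as np

# Hypothetical augmentation primitives; the paper searches over sequences of
# such transformations, but these particular functions are illustrative only.
def add_gaussian_noise(img, sigma=0.05):
    noisy = img + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def adjust_contrast(img, factor=1.2):
    mean = img.mean(axis=(0, 1), keepdims=True)
    return np.clip((img - mean) * factor + mean, 0.0, 1.0)

def color_jitter(img, strength=0.1):
    shift = np.random.uniform(-strength, strength, size=(1, 1, img.shape[2]))
    return np.clip(img + shift, 0.0, 1.0)

# A "policy" here is just an ordered list of (augmentation, parameters) pairs,
# standing in for the learned augmentation sequence.
augmentation_policy = [
    (add_gaussian_noise, {"sigma": 0.03}),
    (adjust_contrast, {"factor": 1.3}),
    (color_jitter, {"strength": 0.08}),
]

def augment(img, policy):
    for fn, kwargs in policy:
        img = fn(img, **kwargs)
    return img

if __name__ == "__main__":
    synthetic_frame = np.random.rand(224, 224, 3)  # stand-in for a rendered image
    augmented = augment(synthetic_frame, augmentation_policy)
    print(augmented.shape, augmented.min(), augmented.max())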


Learning to combine primitive skills: A step towards versatile robotic manipulation

TLDR
This work aims to overcome previous limitations and proposes a reinforcement learning (RL) approach to task planning that learns to combine primitive skills, together with efficient training of basic skills from few synthetic demonstrations by exploring recent CNN architectures and data augmentation.

Synthetic Data for Deep Learning

  • S. Nikolenko
  • Computer Science
    Springer Optimization and Its Applications
  • 2021
TLDR
The synthetic-to-real domain adaptation problem that inevitably arises in applications of synthetic data is discussed, including synthetic-to-real refinement with GAN-based models and domain adaptation at the feature/model level without explicit data transformations.

Object Detection Using Sim2Real Domain Randomization for Robotic Applications

TLDR
A sim2real transfer learning method based on domain randomization for object detection, with which labeled synthetic datasets of arbitrary size and object types can be automatically generated; the method matches industrial needs, as it can reliably differentiate similar classes of objects using only one real image for training.

Domain Generalization via Optical Flow: Training a CNN in a Low-Quality Simulation to Detect Obstacles in the Real World

TLDR
This work trains a neural network to detect collisions from simulated optical flow data and achieves higher detection accuracy than a network trained on a similar dataset of real-world collisions.

Combining learned skills and reinforcement learning for robotic manipulations

TLDR
This work proposes RL policies operating on pre-trained skills that can learn composite manipulations using no intermediate rewards and no demonstrations of full tasks, and shows successful learning of policies for composite manipulation tasks such as making a simple breakfast.

Predicting Sim-to-Real Transfer with Probabilistic Dynamics Models

TLDR
Experiments show that the transfer metric is highly correlated with policy performance in both simulated and real-world robotic environments for complex manipulation tasks and can predict the effect of training setups on policy transfer performance.

MANGA: Method Agnostic Neural-policy Generalization and Adaptation

TLDR
This work introduces MANGA: Method Agnostic Neural-policy Generalization and Adaptation, which trains dynamics-conditioned policies and efficiently learns to estimate the dynamics parameters of the environment from off-policy state-transition rollouts.

Learning visual policies for building 3D shape categories

TLDR
This work proposes a disassembly procedure and learns a state policy that discovers new object instances and their assembly plans in state space; it demonstrates the reactive ability of the method to re-assemble objects using additional primitives and the robust performance of the policy on unseen primitives resembling the building blocks used during training.

UnrealROX: an extremely photorealistic virtual reality environment for robotics simulations and synthetic data generation

TLDR
UnrealROX is an environment built on Unreal Engine 4 that aims to reduce the reality gap by leveraging hyperrealistic indoor scenes explored by robot agents that also interact with objects in a visually realistic manner in the simulated world.

References

Showing 1-10 of 34 references

Asymmetric Actor Critic for Image-Based Robot Learning

TLDR
This work exploits full state observability in the simulator to train better policies that take only partial observations (RGB-D images) as input, combines this method with domain randomization, and shows real-robot experiments for several tasks such as picking, pushing, and moving a block.
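The asymmetric idea summarized above can be sketched as follows: the critic consumes the full simulator state, which is available only in simulation, while the actor sees only image observations and is the only part deployed on the robot. This is a minimal illustrative sketch; the network sizes, observation shapes, and action dimension are assumptions rather than the architecture from the paper.

import torch
import torch.nn as nn

class ImageActor(nn.Module):
    """Actor that sees only partial observations (RGB-D images)."""
    def __init__(self, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),  # 4 channels = RGB-D
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(action_dim)

    def forward(self, rgbd):
        return torch.tanh(self.head(self.encoder(rgbd)))

class StateCritic(nn.Module):
    """Critic that sees the privileged, full simulator state."""
    def __init__(self, state_dim=32, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, full_state, action):
        return self.net(torch.cat([full_state, action], dim=-1))

actor = ImageActor()
critic = StateCritic()
rgbd = torch.rand(2, 4, 64, 64)   # partial observation (images), used at deployment
full_state = torch.rand(2, 32)    # privileged simulator state, used only in training
q_value = critic(full_state, actor(rgbd))
print(q_value.shape)  # torch.Size([2, 1])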

Domain randomization for transferring deep neural networks from simulation to the real world

TLDR
This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
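Domain randomization, as summarized above, amounts to resampling rendering parameters for every training episode so that the real world looks like just another visual variation. A toy sketch follows; the parameter names and ranges are illustrative assumptions, not those used in the cited paper.

import random

def sample_render_config():
    """Sample a fresh set of rendering parameters for one training episode."""
    return {
        "light_position": [random.uniform(-1.0, 1.0) for _ in range(3)],
        "light_intensity": random.uniform(0.3, 2.0),
        "table_texture_id": random.randrange(1000),
        "object_rgb": [random.random() for _ in range(3)],
        "camera_jitter_deg": random.uniform(-5.0, 5.0),
    }

for episode in range(3):
    cfg = sample_render_config()
    # Rendering with cfg and rolling out the policy would be simulator-specific calls.
    print(f"episode {episode}: {cfg}")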

Domain Randomization and Generative Models for Robotic Grasping

TLDR
A novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis and can achieve a >90% success rate on previously unseen realistic objects at test time in simulation despite having only been trained on random objects.

Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping

TLDR
This work studies how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images, including a novel extension of pixel-level domain adaptation termed GraspGAN.

(CAD)$^2$RL: Real Single-Image Flight without a Single Real Image

TLDR
This paper proposes a learning method called CAD$^2$RL, which can be used to perform collision-free indoor flight in the real world while being trained entirely on 3D CAD models, and shows that it can train a policy that generalizes to the real world without requiring the simulator to be particularly realistic or high-fidelity.

Sim-To-Real via Sim-To-Sim: Data-Efficient Robotic Grasping via Randomized-To-Canonical Adaptation Networks

TLDR
This paper presents Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data and learns to translate randomized rendered images into their equivalent non-randomized, canonical versions.

AutoAugment: Learning Augmentation Policies from Data

TLDR
This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
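AutoAugment's search space is a set of sub-policies, each a short sequence of (operation, probability, magnitude) triples, and the paper searches this space with an RL controller. The sketch below substitutes plain random search and a dummy evaluation to stay self-contained; the operation names and scoring are illustrative assumptions.

import random

# Schematic view of an AutoAugment-style search space.
OPERATIONS = ["rotate", "shear_x", "translate_y", "color", "contrast"]

def sample_subpolicy(num_ops=2):
    # Each operation comes with an application probability and an integer magnitude.
    return [
        (random.choice(OPERATIONS), round(random.random(), 2), random.randint(0, 9))
        for _ in range(num_ops)
    ]

def sample_policy(num_subpolicies=5):
    return [sample_subpolicy() for _ in range(num_subpolicies)]

def evaluate(policy):
    # Placeholder for "train a child model with this policy and return
    # validation accuracy"; here we just score policies at random.
    return random.random()

best_policy, best_score = None, -1.0
for _ in range(20):  # random search standing in for the RL controller
    candidate = sample_policy()
    score = evaluate(candidate)
    if score > best_score:
        best_policy, best_score = candidate, score

print(best_score, best_policy[0])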

Deep Reinforcement Learning for Robotic Manipulation

TLDR
It is demonstrated that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots.

Driving Policy Transfer via Modularity and Abstraction

TLDR
This work presents an approach to transferring driving policies from simulation to reality via modularity and abstraction; inspired by classic driving systems, it aims to combine the benefits of modular architectures and end-to-end deep learning approaches.

SPIGAN: Privileged Adversarial Learning from Simulation

TLDR
This work proposes a new unsupervised domain adaptation algorithm, called SPIGAN, relying on Simulator Privileged Information (PI) and Generative Adversarial Networks (GANs), and uses internal data from the simulator as PI during the training of a target task network.