Corpus ID: 6040272

Sim-to-real Transfer of Visuo-motor Policies for Reaching in Clutter: Domain Randomization and Adaptation with Modular Networks

@article{Zhang2017SimtorealTO,
  title={Sim-to-real Transfer of Visuo-motor Policies for Reaching in Clutter: Domain Randomization and Adaptation with Modular Networks},
  author={Fangyi Zhang and J. Leitner and Michael Milford and Peter Corke},
  journal={ArXiv},
  year={2017},
  volume={abs/1709.05746}
}
A modular method is proposed to learn and transfer visuo-motor policies from simulation to the real world in an efficient manner by combining domain randomization and adaptation. Key result: the learned visuo-motor policies are robust to novel objects (not seen in training) in clutter and even to a moving target, achieving a 93.3% success rate and 2.2 cm control accuracy.
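As a concrete illustration of the modular split described above, the minimal sketch below composes a perception network (trained on domain-randomized simulated images) with a control network through a low-dimensional bottleneck, the design also described in the "Modular Deep Q Networks" entries below. The layer sizes, bottleneck dimension, and joint count are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    """CNN mapping an RGB observation to a low-dimensional scene
    representation (here an assumed 3-D target position bottleneck),
    trained on domain-randomized simulated images."""
    def __init__(self, bottleneck_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, bottleneck_dim),
        )

    def forward(self, image):
        return self.net(image)

class ControlModule(nn.Module):
    """MLP mapping the bottleneck plus joint angles to joint velocities;
    it can be trained entirely in simulation on ground-truth bottleneck
    values, independently of the perception module."""
    def __init__(self, bottleneck_dim=3, num_joints=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(bottleneck_dim + num_joints, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_joints),
        )

    def forward(self, scene, joints):
        return self.net(torch.cat([scene, joints], dim=-1))

# The independently trained modules compose into one visuo-motor policy,
# which can then be fine-tuned end-to-end.
perception, control = PerceptionModule(), ControlModule()
image = torch.randn(1, 3, 84, 84)   # one simulated camera frame
joints = torch.zeros(1, 7)          # current joint configuration
velocities = control(perception(image), joints)
```

Because the bottleneck is a semantically meaningful quantity, each module can be supervised with cheap simulator ground truth before the composed policy is adapted to real images.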

Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies

A modular deep reinforcement learning method is presented that transfers models trained in simulation to a real-world robotic task; it introduces a bottleneck between perception and control, enabling the networks to be trained independently and then merged and fine-tuned end-to-end to further improve hand-eye coordination.

Transferring visuomotor learning from simulation to the real world for manipulation tasks in a humanoid robot

This work solves the hand-eye coordination task using a visuomotor deep neural network predictor that estimates the arm's joint configuration given a stereo image pair of the arm and the underlying head configuration and demonstrates that this enables accurate reaching of objects while circumventing manual fine-calibration of the robot.

Transferring Visuomotor Learning from Simulation to the Real World for Robotics Manipulation Tasks

This work solves the hand-eye coordination task using a visuomotor deep neural network predictor that estimates the arm's joint configuration given a stereo image pair of the arm and the underlying head configuration and demonstrates that this enables accurate reaching of objects while circumventing manual fine-calibration of the robot.

Adversarial Feature Training for Generalizable Robotic Visuomotor Control

It is demonstrated that, by using adversarial training for domain transfer, visuomotor policies can be trained within RL frameworks and then transferred to novel task domains; the method is evaluated on two real robotic tasks, picking and pouring, demonstrating its superiority.
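The adversarial-transfer idea can be sketched with a shared encoder, a task head, and a domain discriminator trained through a gradient-reversal layer, a standard construction in domain-adversarial learning; the sizes, placeholder losses, and reversal coefficient below are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the encoder learns features that confuse the
    domain discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())            # shared feature extractor
policy_head = nn.Linear(64, 7)                                    # task head, e.g. joint velocities
domain_head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

x = torch.randn(8, 128)                                           # a batch of encoded observations
domain = torch.randint(0, 2, (8, 1)).float()                      # 0 = simulation, 1 = real

z = encoder(x)
task_loss = policy_head(z).pow(2).mean()                          # placeholder task objective
domain_logits = domain_head(GradReverse.apply(z, 1.0))
domain_loss = nn.functional.binary_cross_entropy_with_logits(domain_logits, domain)
(task_loss + domain_loss).backward()                              # one adversarial training step
```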

Mind the Gap! Bridging the Reality Gap in Visual Perception and Robotic Grasping with Domain Randomisation

This work introduces a novel DR framework for generating synthetic data in a widely popular open-source robotics simulator (Gazebo), and concludes that DR can lead to as much as 26% improvement in mAP over a fine-tuning baseline.

Real-World Robotic Perception and Control Using Synthetic Data

This thesis demonstrates that, using domain randomization, synthetic data alone can be used to train a deep neural network to localize objects accurately enough for a robot to grasp them in the real world, and proposes several promising directions for research in sim-to-real transfer for robotics.

Situational Fusion of Visual Representation for Visual Navigation

This work proposes to train an agent to fuse a large set of visual representations corresponding to diverse visual perception abilities, and develops an action-level representation fusion scheme that predicts an action candidate from each representation and adaptively consolidates these candidates into the final action.
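A minimal sketch of such action-level fusion, assuming each representation feeds its own action head and a learned gate adaptively weights the resulting candidates (representation dimensions and action count are illustrative):

```python
import torch
import torch.nn as nn

class ActionFusion(nn.Module):
    """Predicts an action candidate from each visual representation and
    consolidates them with learned, input-dependent weights."""
    def __init__(self, rep_dims, num_actions=4):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(d, num_actions) for d in rep_dims)
        self.gate = nn.Linear(sum(rep_dims), len(rep_dims))

    def forward(self, reps):
        candidates = torch.stack([h(r) for h, r in zip(self.heads, reps)], dim=1)
        weights = torch.softmax(self.gate(torch.cat(reps, dim=-1)), dim=-1)
        return (weights.unsqueeze(-1) * candidates).sum(dim=1)  # fused action scores

fusion = ActionFusion(rep_dims=[32, 64, 16])
reps = [torch.randn(2, d) for d in (32, 64, 16)]     # e.g. depth, segmentation, flow features
action_scores = fusion(reps)                         # shape: (2, 4)
```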

Learning visual servo policies via planner cloning

PQC, a new behavior cloning algorithm, is proposed; it outperforms several baselines and ablations on challenging problems involving visual servoing in novel environments while avoiding obstacles, and it transfers effectively onto a real robotic platform.

LyRN (Lyapunov Reaching Network): A Real-Time Closed Loop approach from Monocular Vision

A key contribution of the paper is the inclusion of a first-order differential constraint associated with the control Lyapunov function (cLf) as a regularisation term during learning, and evidence that this leads to more robust and reliable reaching/grasping performance than vanilla regression on general control inputs.
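One plausible form of such a regularised objective, sketched here rather than taken from the paper (the dynamics $f$, decrease rate $\alpha$, and weight $\lambda$ are assumptions), penalises violations of the Lyapunov decrease condition $\dot V \le -\alpha V$ alongside the control regression term:

```latex
\mathcal{L}(\theta)
  = \bigl\lVert u_\theta(I) - u^{\ast} \bigr\rVert^{2}
  + \lambda \, \max\!\Bigl(0,\; \nabla V(x)^{\top} f\bigl(x, u_\theta(I)\bigr) + \alpha V(x)\Bigr)
```

where $u_\theta(I)$ is the control predicted from image $I$ and $u^{\ast}$ the supervising control input.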

Sim-to-Real: Autonomous Robotic Control Technical Report

A modular architecture is proposed for tackling the virtual-to-real problem; it separates the learning model into a perception module and a control policy module, and uses semantic image segmentation as the meta representation relating the two modules.

References

Showing 1-10 of 43 references

Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies

A modular deep reinforcement learning method is presented that transfers models trained in simulation to a real-world robotic task; it introduces a bottleneck between perception and control, enabling the networks to be trained independently and then merged and fine-tuned end-to-end to further improve hand-eye coordination.

Sim-to-Real Robot Learning from Pixels with Progressive Nets

This work proposes using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world, and presents an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap.
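Progressive networks freeze the simulation-trained column and give each layer of a new real-world column lateral connections from the frozen column's activations. The sketch below shows the wiring with a small MLP and illustrative sizes in place of the paper's convolutional columns.

```python
import torch
import torch.nn as nn

class ProgressiveColumn(nn.Module):
    """Second column of a progressive network: the simulation-trained
    column is frozen, and later layers here also receive lateral adapter
    inputs computed from the frozen column's activations."""
    def __init__(self, sim_column):
        super().__init__()
        self.sim = sim_column
        for p in self.sim.parameters():
            p.requires_grad = False                     # keep simulation knowledge intact
        self.layers = nn.ModuleList([nn.Linear(128, 64),
                                     nn.Linear(64, 64),
                                     nn.Linear(64, 7)])
        self.laterals = nn.ModuleList([nn.Linear(64, 64),   # from sim layer 1
                                       nn.Linear(64, 7)])   # from sim layer 2

    def forward(self, x):
        s1 = torch.relu(self.sim[0](x))                 # frozen activations
        s2 = torch.relu(self.sim[1](s1))
        h = torch.relu(self.layers[0](x))
        h = torch.relu(self.layers[1](h) + self.laterals[0](s1))
        return self.layers[2](h) + self.laterals[1](s2)

sim_column = nn.Sequential(nn.Linear(128, 64), nn.Linear(64, 64), nn.Linear(64, 7))
real_column = ProgressiveColumn(sim_column)             # only this column is trained on real data
actions = real_column(torch.randn(1, 128))
```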

End-to-End Training of Deep Visuomotor Policies

This paper develops a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors, trained using a partially observed guided policy search method, with supervision provided by a simple trajectory-centric reinforcement learning method.

Leveraging Deep Reinforcement Learning for Reaching Robotic Tasks

  • K. Katyal, I.-J. Wang, P. Burlina
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017
A manipulation policy is learned which takes the first steps toward generalizing to changes in the environment and can scale and adapt to new manipulators.

Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task

This paper shows how two simple techniques can lead to end-to-end (image to velocity) execution of a multi-stage task, which is analogous to a simple tidying routine, without having seen a single real image.

Adapting Deep Visuomotor Representations with Weak Pairwise Constraints

This work proposes a novel domain adaptation approach for robot perception that adapts visual representations learned on a large easy-to-obtain source dataset to a target real-world domain, without requiring expensive manual data annotation of real world data before policy search.
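A minimal sketch of adaptation under weak pairwise constraints, assuming access to weakly paired simulated/real observations and using a simple feature-alignment penalty; the encoder, pairing, and loss weight are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Encoder pretrained on the easy-to-obtain simulated source data.
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

sim_obs = torch.randn(16, 128)        # simulated observations (with labels)
real_obs = torch.randn(16, 128)       # weakly paired real observations (no labels)
sim_targets = torch.randn(16, 32)     # supervision available only in simulation

task_loss = (encoder(sim_obs) - sim_targets).pow(2).mean()
pairwise_loss = (encoder(sim_obs) - encoder(real_obs)).pow(2).mean()  # pull paired features together
loss = task_loss + 0.1 * pairwise_loss    # the 0.1 weight is illustrative
loss.backward()
```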

Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination

This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuomotor policies (modular networks) where each module is trained independently.

Towards Vision-Based Deep Reinforcement Learning for Robotic Motion Control

This paper introduces a machine learning based system for controlling a robotic manipulator with visual perception only, demonstrating the capability to autonomously learn robot controllers solely from raw-pixel images.

Domain randomization for transferring deep neural networks from simulation to the real world

This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
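The mechanism is simple to sketch: every training image is rendered under freshly sampled scene parameters, so that at test time the real world looks like just one more variation. The parameter names and ranges below are illustrative assumptions, not the paper's exact settings.

```python
import random

def sample_randomized_scene():
    """Sample one randomized rendering configuration for the simulator."""
    return {
        "texture_id": random.randrange(1000),                            # random surface textures
        "light_position": [random.uniform(-1.0, 1.0) for _ in range(3)],
        "light_intensity": random.uniform(0.3, 1.5),
        "camera_offset_m": [random.gauss(0.0, 0.02) for _ in range(3)],  # small extrinsic jitter
        "distractor_count": random.randint(0, 5),
        "rgb_noise_std": random.uniform(0.0, 0.1),
    }

# Render each training image under its own sampled configuration.
for _ in range(3):
    print(sample_randomized_scene())
```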

Learning a visuomotor controller for real world robotic grasping using simulated depth images

This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object and finds that this approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.