Corpus ID: 237635148

SIM2REALVIZ: Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation

Theo Jaunet, Guillaume Bono, Romain Vuillemot, Christian Wolf
The robotics community has started to rely heavily on increasingly realistic 3D simulators for large-scale training of robots on massive amounts of data. But once robots are deployed in the real world, the simulation gap, as well as changes in the real world (e.g. lighting, object displacements), leads to errors. In this paper, we introduce Sim2RealViz, a visual analytics tool to assist experts in understanding and reducing this gap for robot ego-pose estimation tasks, i.e. the estimation…

Sim-Real Joint Reinforcement Transfer for 3D Indoor Navigation
The method jointly adapts visual representation and policy behavior, employing an adversarial feature adaptation model for visual representation transfer and a policy-mimic strategy for behavior imitation, to leverage the mutual impacts of environment and policy.
Sim-to-Real Robot Learning from Pixels with Progressive Nets
This work proposes using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world, and presents an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap.
Sim2Real Predictivity: Does Evaluation in Simulation Predict Real-World Performance?
The experiments show that it is possible to tune simulation parameters to improve sim2real predictivity (e.g. improving SRCC from 0.18 to 0.844) – increasing confidence that in-simulation comparisons will translate to deployed systems in reality.
Adapting Deep Visuomotor Representations with Weak Pairwise Constraints
This work proposes a novel domain adaptation approach for robot perception that adapts visual representations learned on a large easy-to-obtain source dataset to a target real-world domain, without requiring expensive manual data annotation of real world data before policy search.
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping
This work studies how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images, including a novel extension of pixel-level domain adaptation that is termed GraspGAN.
Sim-To-Real via Sim-To-Sim: Data-Efficient Robotic Grasping via Randomized-To-Canonical Adaptation Networks
This paper presents Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data and learns to translate randomized rendered images into their equivalent non-randomized, canonical versions.
Domain randomization for transferring deep neural networks from simulation to the real world
This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
Multi-Task Domain Adaptation for Deep Learning of Instance Grasping from Simulation
A multi-task domain adaptation framework for instance grasping in cluttered scenes by utilizing simulated robot experiments and uses a domain-adversarial loss to transfer the trained model to real robots using indiscriminate grasping data, which is available both in simulation and the real world.
On the Limits of Pseudo Ground Truth in Visual Camera Re-localisation
This paper analyzes two widely used re-localisation datasets, shows that evaluation outcomes indeed vary with the choice of the reference algorithm, and questions common beliefs in the re-localisation literature, namely that learning-based scene coordinate regression outperforms classical feature-based methods, and that RGB-D-based methods outperform RGB-based methods.
PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization
This work trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need for additional engineering or graph optimisation, demonstrating that convnets can be used to solve complicated out-of-image-plane regression problems.