Corpus ID: 237635148

SIM2REALVIZ: Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation

@article{Jaunet2021SIM2REALVIZVT,
  title={SIM2REALVIZ: Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation},
  author={Theo Jaunet and Guillaume Bono and Romain Vuillemot and Christian Wolf},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.11801}
}
The robotics community has started to rely heavily on increasingly realistic 3D simulators for large-scale training of robots on massive amounts of data. But once robots are deployed in the real world, the simulation gap, as well as changes in the real world (e.g., lighting, object displacements), leads to errors. In this paper, we introduce SIM2REALVIZ, a visual analytics tool to assist experts in understanding and reducing this gap for robot ego-pose estimation tasks, i.e., the estimation of a…
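The abstract frames the sim2real gap in terms of errors in ego-pose estimation once a model trained in simulation is deployed. As a rough, hedged illustration (not taken from the paper), that gap can be summarized as the difference between pose errors measured on the same episodes in simulation and in the real world; the sketch below assumes planar (x, y, yaw) poses and uses illustrative function names.

import numpy as np

def pose_error(pred, gt):
    # Translation error (meters) and absolute heading error (radians)
    # for planar (x, y, yaw) poses.
    pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
    trans_err = np.linalg.norm(pred[:2] - gt[:2])
    yaw_err = abs((pred[2] - gt[2] + np.pi) % (2 * np.pi) - np.pi)  # wrap to [-pi, pi]
    return trans_err, yaw_err

def sim2real_gap(sim_preds, real_preds, gts):
    # Mean real-world translation error minus mean in-simulation translation error.
    sim_err = np.mean([pose_error(p, g)[0] for p, g in zip(sim_preds, gts)])
    real_err = np.mean([pose_error(p, g)[0] for p, g in zip(real_preds, gts)])
    return real_err - sim_err  # > 0: the estimator degrades at deployment

A positive gap indicates the estimator degrades when moving from simulation to deployment, which is the kind of discrepancy the tool is meant to help experts inspect.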


References

SHOWING 1-10 OF 50 REFERENCES
Sim-Real Joint Reinforcement Transfer for 3D Indoor Navigation
The method employs an adversarial feature adaptation model for visual representation transfer and a policy mimic strategy for policy behavior imitation, jointly adapting visual representation and policy behavior to leverage the mutual impacts of environment and policy.
Sim-to-Real Robot Learning from Pixels with Progressive Nets
This work proposes using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world, and presents an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap.
Sim2Real Predictivity: Does Evaluation in Simulation Predict Real-World Performance?
The experiments show that it is possible to tune simulation parameters to improve sim2real predictivity (e.g. improving SRCC from 0.18 to 0.844), increasing confidence that in-simulation comparisons will translate to deployed systems in reality.
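SRCC in the summary above is a sim-vs-real correlation between performance measured in simulation and on the deployed robot. The exact definition used by the referenced paper is not reproduced here; the snippet below is only a hedged sketch that correlates per-configuration scores with a Pearson coefficient (the function name and example numbers are illustrative).

import numpy as np

def sim_vs_real_correlation(sim_scores, real_scores):
    # One performance score per model/configuration, measured in sim and in reality;
    # returns a Pearson correlation in [-1, 1].
    sim = np.asarray(sim_scores, dtype=float)
    real = np.asarray(real_scores, dtype=float)
    return float(np.corrcoef(sim, real)[0, 1])

# Example: a high value suggests that in-simulation rankings transfer to reality.
print(sim_vs_real_correlation([0.6, 0.7, 0.8, 0.9], [0.4, 0.5, 0.7, 0.85]))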
Adapting Deep Visuomotor Representations with Weak Pairwise Constraints
This work proposes a novel domain adaptation approach for robot perception that adapts visual representations learned on a large, easy-to-obtain source dataset to a target real-world domain, without requiring expensive manual annotation of real-world data before policy search.
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping
This work studies how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images, including a novel extension of pixel-level domain adaptation termed the GraspGAN.
Sim-To-Real via Sim-To-Sim: Data-Efficient Robotic Grasping via Randomized-To-Canonical Adaptation Networks
This paper presents Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data and learns to translate randomized rendered images into their equivalent non-randomized, canonical versions.
Domain randomization for transferring deep neural networks from simulation to the real world
This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
Multi-Task Domain Adaptation for Deep Learning of Instance Grasping from Simulation
A multi-task domain adaptation framework is proposed for instance grasping in cluttered scenes, utilizing simulated robot experiments and a domain-adversarial loss to transfer the trained model to real robots using indiscriminate grasping data, which is available in both simulation and the real world.
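The domain-adversarial loss mentioned above is commonly implemented with a gradient reversal layer (as in DANN-style adaptation); the referenced paper's exact architecture is not reproduced here. Below is a minimal PyTorch sketch with illustrative layer sizes.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass, flipped (and scaled) gradient in the backward
    # pass, so the shared feature extractor learns domain-invariant features.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

domain_classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

def domain_adversarial_loss(features, domain_labels, lam=1.0):
    # features: (N, 128) shared features; domain_labels: 0 = simulation, 1 = real.
    reversed_feats = GradReverse.apply(features, lam)
    logits = domain_classifier(reversed_feats)
    return nn.functional.cross_entropy(logits, domain_labels)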
On the Limits of Pseudo Ground Truth in Visual Camera Re-localisation
This paper analyzes two widely used re-localisation datasets and shows that evaluation outcomes indeed vary with the choice of the reference algorithm, and questions common beliefs in the re-localisation literature, namely that learning-based scene coordinate regression outperforms classical feature-based methods and that RGB-D-based methods outperform RGB-based methods.
PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization
This work trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner, with no need for additional engineering or graph optimisation, demonstrating that convnets can be used to solve complicated out-of-image-plane regression problems.
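PoseNet regresses a 3-D position and a 4-D orientation quaternion directly from an image. The sketch below is a simplified PyTorch illustration: the backbone is a small stand-in (the original uses a GoogLeNet-style network), and beta is a tunable weight balancing the position and orientation terms.

import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Small stand-in backbone; the original PoseNet uses a GoogLeNet-style CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 7)  # [tx, ty, tz, qw, qx, qy, qz]

    def forward(self, img):
        out = self.head(self.backbone(img))
        return out[:, :3], out[:, 3:]  # position, quaternion

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=500.0):
    # Position error plus beta-weighted orientation error; the predicted
    # quaternion is normalized to unit length before comparison.
    q_pred = q_pred / q_pred.norm(dim=1, keepdim=True)
    return (t_pred - t_gt).norm(dim=1).mean() + beta * (q_pred - q_gt).norm(dim=1).mean()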