On Assessing the Usefulness of Proxy Domains for Developing and Evaluating Embodied Agents

@article{Courchesne2021OnAT,
  title={On Assessing the Usefulness of Proxy Domains for Developing and Evaluating Embodied Agents},
  author={Anthony Courchesne and Andrea Censi and Liam Paull},
  journal={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2021},
  pages={4298-4305}
}
Abstract

In many situations it is either impossible or impractical to develop and evaluate agents entirely on the target domain on which they will be deployed. This is particularly true in robotics, where doing experiments on hardware is much more arduous than in simulation. This has become arguably more so in the case of learning-based agents. To this end, considerable recent effort has been devoted to developing increasingly realistic and higher fidelity simulators. However, we lack any principled way… 

References (showing 1-10 of 44)

A Data-Efficient Framework for Training and Sim-to-Real Transfer of Navigation Policies
TLDR
This work introduces a robust framework, consisting of encoder and planner modules, that plans in simulation and transfers well to the real environment, and shows successful planning performance in different navigation tasks.
Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model
TLDR
This paper investigates settings where the sequence of states traversed in simulation remains reasonable for the real world even if the details of the controls are not, as could be the case when the key differences lie in detailed friction, contact, mass, and geometry properties.
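The transfer scheme summarized above can be pictured as follows: the simulation policy proposes the state transition it expects, and a learned inverse dynamics model of the real system chooses the action that best reproduces that transition on hardware. Below is a minimal sketch under that reading, not the paper's implementation; the network sizes and the sim_policy/sim_step callables are illustrative placeholders.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Maps (current real state, desired next state) -> action.
    Dimensions and architecture are illustrative placeholders."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, desired_next_state):
        return self.net(torch.cat([state, desired_next_state], dim=-1))

def act_on_real_robot(sim_policy, sim_step, inverse_model, real_state):
    """Use the simulation policy to propose a target next state, then let the
    inverse dynamics model pick the real-world action expected to reach it."""
    sim_action = sim_policy(real_state)             # action that works in sim
    target_next = sim_step(real_state, sim_action)  # state the simulator would reach
    return inverse_model(real_state, target_next)   # action to apply on hardware
```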
Are We Making Real Progress in Simulated Environments? Measuring the Sim2Real Gap in Embodied Visual Navigation
TLDR
The Habitat-PyRobot Bridge, a library for seamless execution of identical code on a simulated agent and a physical robot, is developed, and a new metric called the Sim-vs-Real Correlation Coefficient (SRCC) is presented to quantify sim2real predictivity; the poor predictivity observed is found to be largely due to AI agents learning to 'cheat' by exploiting simulator imperfections.
Sim2Real Predictivity: Does Evaluation in Simulation Predict Real-World Performance?
TLDR
The experiments show that it is possible to tune simulation parameters to improve sim2real predictivity (e.g. improving SRCC from 0.18 to 0.844) – increasing confidence that in-simulation comparisons will translate to deployed systems in reality.
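Both entries above rest on the Sim-vs-Real Correlation Coefficient (SRCC), a correlation between the performance of the same set of agents evaluated in simulation and on the real robot. A minimal sketch, treating SRCC as a Pearson correlation over per-agent scores; the numbers below are made up for illustration.

```python
# Sketch: SRCC as a Pearson correlation between the performance of the same
# agents evaluated in simulation and on the real robot. Values near 1 mean
# that in-simulation rankings are predictive of real-world rankings.
from statistics import correlation  # Python 3.10+

# Hypothetical per-agent success metrics in simulation vs. on hardware.
sim_scores  = [0.91, 0.84, 0.77, 0.65, 0.58]
real_scores = [0.72, 0.70, 0.61, 0.49, 0.47]

srcc = correlation(sim_scores, real_scores)  # Pearson's r
print(f"SRCC = {srcc:.3f}")
```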
Reinforcement learning with multi-fidelity simulators
TLDR
This framework is designed to limit the number of samples used in each successively higher-fidelity/cost simulator by allowing the agent to choose to run trajectories at the lowest-fidelity level that still provides it with useful information.
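As a rough illustration of the multi-fidelity idea (a simplified sketch, not the algorithm from the cited paper): trajectories are run in the cheapest simulator until learning there stalls, and only then does training move up the fidelity chain. The agent interface (act/observe), the plateau_test callable, and the gymnasium-style environment API are assumptions made for this sketch.

```python
# Simplified sketch of multi-fidelity training: spend samples at the cheapest
# simulator first and climb the fidelity chain only when progress stalls.

def train_across_fidelities(envs, make_agent, plateau_test, episodes_per_level=200):
    """envs: environments ordered from lowest to highest fidelity/cost.
    make_agent: factory for a learning agent that keeps training across levels.
    plateau_test: callable(returns) -> True when progress has stalled."""
    agent = make_agent()
    for level, env in enumerate(envs):
        returns = []
        for _ in range(episodes_per_level):
            returns.append(run_episode(agent, env))  # collect one trajectory
            if plateau_test(returns):
                break                                # stop paying for this level
        print(f"fidelity level {level}: {len(returns)} episodes used")
    return agent

def run_episode(agent, env):
    """Standard rollout; assumes the gymnasium step/reset API."""
    obs, _ = env.reset()
    done, total = False, 0.0
    while not done:
        action = agent.act(obs)
        obs, reward, terminated, truncated, _ = env.step(action)
        agent.observe(obs, reward)
        done = terminated or truncated
        total += reward
    return total
```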
One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning
TLDR
This work presents an approach for one-shot learning from a video of a human: human and robot demonstration data from a variety of previous tasks is used to build up prior knowledge through meta-learning, and by combining this prior knowledge with only a single video demonstration from a human, the robot can perform the task that the human demonstrated.
Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents
TLDR
This paper describes a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained "by design" from the beginning of the research and development process, and builds a concrete instance of the approach: the DUCKIENet.
Stochastic Grounded Action Transformation for Robot Learning in Simulation
Josiah P. Hanna, P. Stone. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
TLDR
The Stochastic Grounded Action Transformation (SGAT) algorithm is introduced, which models the stochasticity of the target domain when grounding the simulator, and it is found experimentally, for both simulated and physical target domains, that SGAT can find policies that are robust to stochasticity in the target domain.
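A loose sketch of the grounded action transformation idea behind SGAT, with the model interfaces assumed for illustration: a learned forward model of the real robot predicts where the policy's action would actually lead, and a learned inverse model of the simulator converts that predicted outcome back into an action the simulator can execute. Sampling from the forward model, rather than taking its mean, is what makes the variant stochastic.

```python
# Sketch of (stochastic) grounded action transformation. real_forward_model
# returns a distribution over next states; sim_inverse_model maps a desired
# state transition back to a simulator action. Both are placeholder callables.
def ground_action(state, policy_action, real_forward_model, sim_inverse_model,
                  stochastic=True):
    """Replace the policy's action with one that makes the simulator's next
    state look like the real robot's next state."""
    next_state_dist = real_forward_model(state, policy_action)
    if stochastic:
        predicted_next = next_state_dist.sample()    # SGAT-style: sample an outcome
    else:
        predicted_next = next_state_dist.mean        # deterministic GAT-style
    return sim_inverse_model(state, predicted_next)  # action to apply in sim
```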
Sim-to-Real Transfer with Neural-Augmented Robot Simulation
TLDR
This work introduces a method for training a recurrent neural network on the differences between simulated and real robot trajectories and then using this model to augment the simulator; the augmented simulator can be used to learn control policies that transfer significantly better to real environments than policies learned on existing simulators.
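The neural augmentation described above can be read as a learned residual on top of the simulator: a recurrent network is fit to the gap between simulated and real trajectories, and its prediction is added to the simulator's next state during policy training. A sketch under that reading, with illustrative sizes and a placeholder sim_step function:

```python
import torch
import torch.nn as nn

class SimResidualRNN(nn.Module):
    """Predicts real-minus-sim state differences from the recent history of
    (state, action) pairs. Sizes are illustrative placeholders."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(state_dim + action_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, states, actions):
        # states, actions: (batch, time, dim)
        h, _ = self.rnn(torch.cat([states, actions], dim=-1))
        return self.head(h)  # per-step correction to the simulated next state

def augmented_step(sim_step, residual_model, history_states, history_actions):
    """One step of the neural-augmented simulator: simulator prediction plus
    the learned correction for the latest timestep."""
    sim_next = sim_step(history_states[:, -1], history_actions[:, -1])
    correction = residual_model(history_states, history_actions)[:, -1]
    return sim_next + correction
```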
Policy Transfer with Strategy Optimization
TLDR
This paper presents a different approach that leverages domain randomization for transferring control policies to unknown environments and demonstrates that this method can overcome larger modeling errors compared to training a robust policy or an adaptive policy.
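The strategy-optimization entry can be read as a two-stage recipe: train one policy conditioned on a latent strategy vector across randomized dynamics, then search over that vector directly on the target system using rollout returns. In the rough sketch below, a simple random search stands in for the optimizer used in the paper, and the conditioned-policy interface policy(obs, mu) is an assumption.

```python
import numpy as np

def optimize_strategy(policy, target_env, latent_dim, n_candidates=50,
                      episodes_per_candidate=3, rng=None):
    """Search over the latent strategy vector mu directly on the target system,
    scoring each candidate by its average rollout return."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_mu, best_return = None, -np.inf
    for _ in range(n_candidates):
        mu = rng.uniform(-1.0, 1.0, size=latent_dim)  # candidate strategy vector
        avg = np.mean([rollout(policy, target_env, mu)
                       for _ in range(episodes_per_candidate)])
        if avg > best_return:
            best_mu, best_return = mu, avg
    return best_mu, best_return

def rollout(policy, env, mu):
    """Gymnasium-style rollout of the strategy-conditioned policy."""
    obs, _ = env.reset()
    done, total = False, 0.0
    while not done:
        obs, reward, terminated, truncated, _ = env.step(policy(obs, mu))
        done = terminated or truncated
        total += reward
    return total
```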