Corpus ID: 224725337

Model-Based Inverse Reinforcement Learning from Visual Demonstrations

@article{Das2020ModelBasedIR,
  title={Model-Based Inverse Reinforcement Learning from Visual Demonstrations},
  author={Neha Das and Sarah Bechtle and Todor Davchev and Dinesh Jayaraman and Akshara Rai and Franziska Meier},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.09034}
}
Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state spaces, and learning from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when…
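The gradient-based, model-based IRL idea in the abstract can be illustrated with a deliberately tiny sketch: an inner loop plans actions under a fixed dynamics model and a parameterized cost, and an outer loop updates the cost parameters so the planned trajectory matches the demonstration. This is an assumption-laden toy, not the paper's method: the dynamics "model" here is a known integrator standing in for the pre-trained visual dynamics model, the cost is a simple quadratic with a goal parameter `theta`, and the outer gradient is taken by finite differences rather than by differentiating through the planner. The names `dynamics`, `plan`, and `irl` are hypothetical.

```python
import numpy as np

def dynamics(x, a):
    # Stand-in for a pre-trained dynamics model: a simple integrator.
    return x + a

def plan(theta, x0, T=10, alpha=0.5):
    # Inner loop: greedy gradient planner on the quadratic cost ||x - theta||^2.
    xs = [x0]
    x = x0
    for _ in range(T):
        a = alpha * (theta - x)  # gradient step toward the cost minimum
        x = dynamics(x, a)
        xs.append(x)
    return np.stack(xs)

def irl(demo, x0, steps=100, lr=0.05, eps=1e-4):
    # Outer loop: fit the cost parameter theta so the planned
    # trajectory reproduces the demonstrated one.
    theta = np.zeros_like(x0)

    def loss(th):
        return np.sum((plan(th, x0) - demo) ** 2)

    for _ in range(steps):
        # Finite-difference gradient keeps the sketch dependency-free;
        # the real framework would differentiate through the planner.
        g = np.zeros_like(theta)
        for i in range(len(theta)):
            e = np.zeros_like(theta)
            e[i] = eps
            g[i] = (loss(theta + e) - loss(theta - e)) / (2 * eps)
        theta = theta - lr * g
    return theta

goal = np.array([1.0, -2.0])
demo = plan(goal, np.zeros(2))   # "expert" demo generated with the true goal
theta = irl(demo, np.zeros(2))
print(np.round(theta, 2))        # recovers the goal, approx. [ 1. -2.]
```

Because the planner is linear in `theta` here, the outer loss is convex and gradient descent recovers the demonstrator's goal; in the paper's setting the same bilevel structure is applied with a learned visual dynamics model and high-dimensional observations.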
