Unsupervised Perceptual Rewards for Imitation Learning

  title={Unsupervised Perceptual Rewards for Imitation Learning},
  author={Pierre Sermanet and Kelvin Xu and Sergey Levine},
  journal={CoRR},
  volume={abs/1612.06699},
  year={2016}
Reward function design and exploration time are arguably the biggest obstacles to the deployment of reinforcement learning (RL) agents in the real world. In many real-world tasks, designing a reward function takes considerable hand engineering and often requires additional and potentially visible sensors to be installed just to measure whether the task has been executed successfully. Furthermore, many interesting tasks consist of multiple implicit intermediate steps that must be executed in…
Highly Cited
This paper has 52 citations.
Recent Discussions
This paper has been referenced on Twitter 54 times over the past 90 days.

From This Paper

Figures, tables, results, connections, and topics extracted from this paper.
20 Extracted Citations
36 Extracted References
Similar Papers

Citing Papers

Publications influenced by this paper.


Citations per Year
Semantic Scholar estimates that this publication has 52 citations based on the available data.


Referenced Papers

Publications referenced by this paper.

Pierre Sermanet, Kelvin Xu, and Sergey Levine. Unsupervised perceptual rewards for imitation learning. CoRR, abs/1612.06699, 2016. 10 Excerpts.
