Apprenticeship learning via inverse reinforcement learning
@inproceedings{Abbeel2004ApprenticeshipLV,
  title     = {Apprenticeship learning via inverse reinforcement learning},
  author    = {P. Abbeel and A. Ng},
  booktitle = {ICML '04},
  year      = {2004}
}
We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a…
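The abstract is truncated here, but the setting it describes — a reward assumed linear in known state features, with the learner matching the expert's discounted feature expectations — can be sketched in a few lines. The function name `feature_expectations` and the feature map `phi` below are illustrative assumptions, not code from the paper:

```python
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.99):
    """Empirical estimate of mu = E[sum_t gamma^t * phi(s_t)],
    averaged over the expert's demonstrated state trajectories."""
    total = None
    for traj in trajectories:
        # Discounted sum of features along one trajectory.
        disc = sum((gamma ** t) * phi(s) for t, s in enumerate(traj))
        total = disc if total is None else total + disc
    return total / len(trajectories)
```

With a reward of the form R(s) = w · phi(s), any policy whose feature expectations match the expert's achieves the same expected return under that reward, which is the quantity the apprenticeship-learning loop drives toward.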
2,021 Citations
- Apprenticeship learning via soft local homomorphisms. 2010 IEEE International Conference on Robotics and Automation, 2010.
- Inverse Reinforcement Learning with Multiple Ranked Experts. ArXiv, 2019.
- Inverse Reinforcement Learning from a Gradient-based Learner. NeurIPS, 2020.
References
- P. Abbeel and A. Ng. Apprenticeship learning via inverse reinforcement learning (full paper), 2004. http://www.cs.stanford.edu/~pabbeel/irl