Successor Features for Transfer in Reinforcement Learning

@inproceedings{Barreto2017SuccessorFF,
  title={Successor Features for Transfer in Reinforcement Learning},
  author={Andr{\'e} Barreto and R{\'e}mi Munos and Tom Schaul and David Silver},
  booktitle={NIPS},
  year={2017}
}
Transfer in reinforcement learning refers to the notion that generalization should occur not only within a task but also across tasks. We propose a transfer framework for the scenario where the reward function changes between tasks but the environment's dynamics remain the same. Our approach rests on two key ideas: successor features, a value function representation that decouples the dynamics of the environment from the rewards, and generalized policy improvement, a generalization of dynamic programming's policy improvement operation that considers a set of policies rather than a single one.
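
The abstract's two ideas can be summarized compactly: if rewards decompose as r(s, a) = phi(s, a) . w, then the successor features psi_i of a policy pi_i give its action values on a new task as Q_i(s, a) = psi_i(s, a) . w, and generalized policy improvement acts greedily with respect to the maximum of these values over all stored policies. The snippet below is a minimal illustrative sketch of that decision rule, not the paper's implementation; the tabular successor-feature arrays and the reward-weight vector w are hypothetical placeholders.

import numpy as np

def gpi_action(successor_features, w, state):
    """Generalized policy improvement over stored successor features.

    successor_features: list of arrays of shape (num_states, num_actions, d),
        where psi_i[s, a] approximates the expected discounted sum of feature
        vectors phi under policy pi_i when starting from (s, a).
        (Hypothetical tabular representation for illustration.)
    w: array of shape (d,), reward weights of the new task (r = phi . w).
    state: integer state index.
    """
    # Q_i(s, a) = psi_i(s, a) . w  -- value of each known policy on the new task
    q_values = np.stack([psi[state] @ w for psi in successor_features])  # (n_policies, n_actions)
    # GPI: pick the action whose best value across all known policies is highest
    return int(np.argmax(q_values.max(axis=0)))

Because the environment dynamics are shared across tasks, the same psi_i arrays can be reused for any new reward weights w, which is what makes the transfer essentially free at decision time.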