• Mathematics, Computer Science
  • Published in ArXiv 2018

Dynamic Planning Networks

@article{Tasfi2018DynamicPN,
  title={Dynamic Planning Networks},
  author={Norman L. Tasfi and Miriam A. M. Capretz},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.11240}
}

We introduce Dynamic Planning Networks (DPN), a novel architecture for deep reinforcement learning that combines model-based and model-free aspects for online planning. Our architecture learns to dynamically construct plans using a learned state-transition model by selecting and traversing between simulated states and actions to maximize information before acting. In contrast to model-free methods, model-based planning lets the agent efficiently test action hypotheses without performing costly…
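The abstract describes an agent that imagines with a learned state-transition model before committing to a real action. Below is a minimal, illustrative sketch of such a planning-then-acting loop, not the paper's actual algorithm: every name here (TransitionModel, plan_then_act, encoder, planner, actor) is hypothetical, and the additive aggregation of imagined states is a placeholder for whatever summary the architecture actually uses.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionModel(nn.Module):
    # Hypothetical learned transition model: predicts the next latent state
    # from the current latent state and a one-hot encoded action.
    def __init__(self, state_dim, num_actions, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + num_actions, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state, action_onehot):
        return self.net(torch.cat([state, action_onehot], dim=-1))

def plan_then_act(encoder, transition, planner, actor, obs, num_actions,
                  planning_steps=5):
    # Imagine a short trajectory with the learned model, then act for real.
    state = encoder(obs)                  # latent encoding of the observation
    summary = torch.zeros_like(state)     # running summary of imagined states
    sim_state = state
    for _ in range(planning_steps):
        sim_action = planner(sim_state).argmax(dim=-1)        # simulated action
        onehot = F.one_hot(sim_action, num_actions).float()
        sim_state = transition(sim_state, onehot)             # imagined next state
        summary = summary + sim_state                         # accumulate information
    # The real action is chosen with the benefit of the imagined information.
    logits = actor(torch.cat([state, summary], dim=-1))
    return torch.distributions.Categorical(logits=logits).sample()

In the paper the plan is constructed dynamically: the planner selects and traverses between simulated states and actions so as to maximize information before acting, whereas this sketch simply rolls forward a fixed number of steps.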

Citations

Publications citing this paper.

The Differentiable Cross-Entropy Method

References

Publications referenced by this paper.
Showing a selection of this paper's 33 references.

Imagination-Augmented Agents for Deep Reinforcement Learning

https://github.com/oxwhirl/treeqn/blob/master/treeqn/envs/push.py, 2017b

  • G. Farquhar, T. Rocktäschel, M. Igl, S. Whiteson
  • 2017

Asynchronous Methods for Deep Reinforcement Learning

Human-level control through deep reinforcement learning

Learning to Search with MCTSnets

Learning model-based planning from scratch