Corpus ID: 231979321

Model-Invariant State Abstractions for Model-Based Reinforcement Learning

@article{Tomar2021ModelInvariantSA,
  title={Model-Invariant State Abstractions for Model-Based Reinforcement Learning},
  author={Manan Tomar and Amy Zhang and Roberto Calandra and Matthew E. Taylor and Joelle Pineau},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.09850}
}
Accuracy and generalization of dynamics models are key to the success of model-based reinforcement learning (MBRL). As the complexity of tasks increases, learning dynamics models becomes increasingly sample inefficient for MBRL methods. However, many tasks also exhibit sparsity in the dynamics, i.e., actions have only a local effect on the system dynamics. In this paper, we exploit this property with a causal invariance perspective in the single-task setting, introducing a new type of state abstraction called model-invariance.
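
The sparsity property described in the abstract (actions affect only a few state variables, and each variable's next value depends on only a few others) can be illustrated with a small, self-contained sketch. The Python example below is a toy under stated assumptions, not the paper's algorithm: the dynamics in step, the parent sets in PARENTS, and the training distribution are all hypothetical. It fits one least-squares model per state factor using only that factor's parents, then predicts a combination of state-variable values that never co-occurred in training.

import numpy as np

# Toy sketch of sparse, factored dynamics (hypothetical system, not the
# paper's method): each next-state variable depends only on a small set of
# "parent" variables, so a per-factor model can generalize to novel
# combinations of individually-seen state values.

rng = np.random.default_rng(0)

# Parent structure: s0' <- (s0, a), s1' <- (s1,), s2' <- (s2, a)
PARENTS = {0: [0], 1: [1], 2: [2]}
USES_ACTION = {0: True, 1: False, 2: True}

def step(s, a):
    # Ground-truth sparse dynamics (linear for simplicity).
    return np.array([0.9 * s[0] + 0.5 * a,
                     0.8 * s[1],
                     0.7 * s[2] - 0.3 * a])

# Training states are strongly correlated across factors, so each variable's
# marginal range is covered but most joint combinations are never observed.
base = rng.uniform(-1.0, 1.0, size=500)
S = np.stack([base, base, base], axis=1) + 0.05 * rng.normal(size=(500, 3))
A = rng.uniform(-1.0, 1.0, size=500)
S_next = np.array([step(s, a) for s, a in zip(S, A)])

def fit_factor(i):
    # Least-squares fit of factor i from its parents (and the action) only.
    cols = [S[:, j] for j in PARENTS[i]]
    if USES_ACTION[i]:
        cols.append(A)
    X = np.stack(cols, axis=1)
    w, *_ = np.linalg.lstsq(X, S_next[:, i], rcond=None)
    return w

W = [fit_factor(i) for i in range(3)]

def predict(s, a):
    # Predict each factor from its parents alone, mirroring the sparsity.
    out = []
    for i in range(3):
        x = [s[j] for j in PARENTS[i]] + ([a] if USES_ACTION[i] else [])
        out.append(float(np.asarray(x) @ W[i]))
    return np.array(out)

# A combination that never co-occurred in training (anti-correlated values).
s_test, a_test = np.array([0.8, -0.7, 0.3]), 0.2
print("true next state:", step(s_test, a_test))
print("model prediction:", predict(s_test, a_test))

A non-factored model regressing each next-state variable on the full (s, a) vector would struggle here: s0, s1, and s2 are nearly collinear in the training data, so its weights are ill-determined and extrapolate poorly to anti-correlated combinations. The factored model avoids this by construction, which is the intuition the abstract's causal-invariance perspective builds on.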

