Research on task decomposition and state abstraction in reinforcement learning

@article{Yu2011ResearchOT,
  title={Research on task decomposition and state abstraction in reinforcement learning},
  author={Lasheng Yu and Zhongbin Jiang and Kang Liu},
  journal={Artificial Intelligence Review},
  year={2011},
  volume={38},
  pages={119-127}
}
Task decomposition and state abstraction are crucial parts of reinforcement learning. They allow an agent to ignore aspects of its current state that are irrelevant to its current decision, thereby speeding up dynamic programming and learning. This paper presents the SVI algorithm, which uses a dynamic Bayesian network model to construct an influence graph that indicates relationships between state variables. SVI performs state abstraction for each subtask by ignoring irrelevant state…
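The core idea in the abstract can be sketched in a few lines: read off, from a dynamic Bayesian network, which current-step variables influence which next-step variables; treat those dependencies as edges of an influence graph; and, for a given subtask, keep only the variables from which the subtask's reward-relevant variables are reachable. The sketch below is an assumption-laden illustration of that idea, not the paper's SVI implementation; the dictionary layout, function names, and toy variables are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's SVI code) of state abstraction
# driven by an influence graph derived from a DBN's parent structure.

from collections import defaultdict


def build_influence_graph(dbn_parents):
    """dbn_parents maps each next-step variable X' to the set of
    current-step variables that appear as its parents in the DBN.
    Returns edges parent -> child, i.e. 'parent influences child'."""
    graph = defaultdict(set)
    for child, parents in dbn_parents.items():
        for parent in parents:
            graph[parent].add(child)
    return graph


def relevant_variables(dbn_parents, reward_vars):
    """Variables relevant to a subtask: the variables its reward depends on,
    plus every variable that can influence them through the DBN (their
    ancestors in the influence graph). Everything else is abstracted away."""
    relevant = set(reward_vars)
    frontier = list(reward_vars)
    while frontier:
        var = frontier.pop()
        for parent in dbn_parents.get(var, ()):
            if parent not in relevant:
                relevant.add(parent)
                frontier.append(parent)
    return relevant


if __name__ == "__main__":
    # Toy DBN: position' depends on position and velocity;
    # music' depends only on music and never influences the others.
    dbn_parents = {
        "position": {"position", "velocity"},
        "velocity": {"velocity"},
        "music": {"music"},
    }
    # Subtask whose reward depends only on position.
    print(relevant_variables(dbn_parents, {"position"}))
    # -> {'position', 'velocity'}; 'music' is ignored for this subtask.
```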

