A Fast Learning Agent Based on the Dyna Architecture

@article{Hsu2014AFL,
  title={A Fast Learning Agent Based on the Dyna Architecture},
  author={Yuan-Pao Hsu and Wei-Cheng Jiang},
  journal={J. Inf. Sci. Eng.},
  year={2014},
  volume={30},
  pages={1807-1823}
}
In this paper, we present a rapid learning algorithm called Dyna-QPC. The proposed algorithm requires considerably less training time than the Q-learning and table-based Dyna-Q algorithms, making it applicable to real-world control tasks. Dyna-QPC combines three existing learning techniques: CMAC, Q-learning, and prioritized sweeping. In a practical experiment, the Dyna-QPC algorithm is implemented with the goal of minimizing the learning time required for a robot to navigate a…
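The combination described in the abstract can be illustrated with a minimal tabular Dyna-Q agent that uses prioritized sweeping for its planning steps. This is a sketch, not the paper's Dyna-QPC (which replaces the Q-table with a CMAC function approximator); the toy corridor task, parameter values, and helper names below are assumptions chosen for illustration:

```python
import heapq
import random

# Illustrative Dyna-Q with prioritized sweeping on a toy 1-D corridor.
# NOT the paper's Dyna-QPC: the task, constants, and names are assumed.

N_STATES = 8            # states 0..7, goal at state 7
ACTIONS = [-1, +1]      # move left / move right
GAMMA = 0.95
ALPHA = 0.5
THETA = 1e-4            # priority threshold for the sweep queue
N_PLANNING = 10         # model-based updates per real step

def step(s, a):
    """Deterministic environment: reward 1 on reaching the goal."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}              # (s, a) -> (s2, r), learned from experience
predecessors = {}       # s2 -> set of (s, a) pairs that led to s2
pqueue = []             # max-priority queue (negated priorities)

def td_error(s, a, r, s2):
    target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    return target - Q[(s, a)]

random.seed(0)
for episode in range(50):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        model[(s, a)] = (s2, r)
        predecessors.setdefault(s2, set()).add((s, a))
        p = abs(td_error(s, a, r, s2))
        if p > THETA:
            heapq.heappush(pqueue, (-p, (s, a)))
        # Planning: replay the most "surprising" transitions first.
        for _ in range(N_PLANNING):
            if not pqueue:
                break
            _, (ps, pa) = heapq.heappop(pqueue)
            ps2, pr = model[(ps, pa)]
            Q[(ps, pa)] += ALPHA * td_error(ps, pa, pr, ps2)
            # Propagate priority backward to predecessors of ps.
            for (qs, qa) in predecessors.get(ps, ()):
                qs2, qr = model[(qs, qa)]
                pp = abs(td_error(qs, qa, qr, qs2))
                if pp > THETA:
                    heapq.heappush(pqueue, (-pp, (qs, qa)))
        s = s2

# After training, the greedy policy moves right toward the goal.
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)]
```

Prioritized sweeping replays the transitions with the largest temporal-difference error first and propagates priority backward through recorded predecessors, which is what gives Dyna-style agents their reduced training time relative to plain model-free Q-learning.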

Citations

Publications citing this paper (showing 1-3 of 3):

  • Task Similarity-Based Task Allocation Approach in Multi-Agent Engineering Software Systems. J. Inf. Sci. Eng., 2016.
  • Pheromone-Based Planning Strategies in Dyna-Q Learning. IEEE Transactions on Industrial Informatics, 2017.
  • Model Learning for Multistep Backward Prediction in Dyna-Q Learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018.
