Memory-guided exploration in reinforcement learning

@inproceedings{Carroll2001MemoryguidedEI,
  title={Memory-guided exploration in reinforcement learning},
  author={James L. Carroll and Todd S. Peterson and Nancy E. Owens},
  year={2001}
}
The life-long learning architecture attempts to create an adaptive agent through the incorporation of prior knowledge over the lifetime of a learning agent. Our paper focuses on task transfer in reinforcement learning, specifically in Q-learning. There are three main model-free methods for performing task transfer in Q-learning: direct transfer, soft transfer, and memory-guided exploration. In direct transfer, Q-values from a previous task are used to initialize the Q-values of the next task…
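Direct transfer, as described above, amounts to seeding the new task's Q-table with the previous task's values and then learning as usual. A minimal sketch in tabular Q-learning terms (function names, states, and hyperparameters here are illustrative assumptions, not the paper's implementation):

```python
from collections import defaultdict

def direct_transfer(q_source):
    """Direct transfer: initialize the new task's Q-table with the
    Q-values learned on the previous task (illustrative sketch)."""
    return defaultdict(float, q_source)  # unseen (s, a) pairs default to 0.0

def q_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One standard Q-learning backup on the (possibly transferred) table."""
    best_next = max(q[(s_next, b)] for b in actions)
    q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

# Usage: Q-values from task 1 seed learning on task 2.
q_old = {("s0", 0): 0.5, ("s0", 1): 0.2}
q_new = direct_transfer(q_old)
q_update(q_new, "s0", 0, r=1.0, s_next="s0", actions=(0, 1))
```

The transferred values bias early action selection toward the old task's policy, which helps when the tasks are similar but can slow learning when they conflict.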
This paper has 43 citations.

References
Showing 1-10 of 12 references

Todd Peterson, Nancy Owens, James Carroll. Automated shaping as applied to robot navigation. In ICRA, 2001.

Lifelong Robot Learning.

Tom M. Mitchell and Sebastian Thrun. Explanation-based learning: A comparison of symbolic and neural network approaches. In International Conference on Machine Learning, 2000.

J. O’Sullivan. Transferring learned knowledge in a lifelong learning mobile robot agent. In 7th European Workshop on Learning Robots, 1999.

Rich Caruana. Multitask Learning. Machine Learning, 1997.

Kevin Dixon, Pradeep Kosla, 1997.