Gaussian Processes for Sample Efficient Reinforcement Learning with RMAX-Like Exploration

@inproceedings{Jung2010GaussianPF,
  title={Gaussian Processes for Sample Efficient Reinforcement Learning with RMAX-Like Exploration},
  author={Tobias Jung and Peter Stone},
  booktitle={ECML/PKDD},
  year={2010}
}
We present an implementation of model-based online reinforcement learning (RL) for continuous domains with deterministic transitions, designed specifically to achieve low sample complexity. Since the environment is unknown, low sample complexity requires an agent to intelligently balance exploration and exploitation and to rapidly generalize from its observations. While a number of related sample-efficient RL algorithms have been proposed in the past, to allow theoretical…
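The RMAX-like exploration the abstract describes — acting optimistically in regions the learned model is still uncertain about — can be sketched with a Gaussian process model whose predictive variance flags "unknown" states. This is an illustrative reconstruction under assumed details (RBF kernel, unit signal variance, a hypothetical variance threshold), not the paper's actual implementation:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    sqdist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sqdist / lengthscale**2)

class GPRmax:
    """GP regression with RMAX-style optimism: inputs whose GP predictive
    variance exceeds a threshold are treated as 'unknown' and assigned the
    optimistic value r_max instead of the posterior mean."""

    def __init__(self, r_max=1.0, noise=1e-6, var_threshold=0.1):
        self.r_max = r_max
        self.noise = noise
        self.var_threshold = var_threshold
        self.X, self.y = None, None

    def observe(self, x, y):
        # Append one observed input/target pair to the training set.
        x = np.atleast_2d(x)
        self.X = x if self.X is None else np.vstack([self.X, x])
        self.y = np.atleast_1d(y) if self.y is None else np.append(self.y, y)

    def predict(self, x):
        # Standard GP posterior; unknown regions get r_max (optimism).
        x = np.atleast_2d(x)
        if self.X is None:
            return np.full(len(x), self.r_max)  # nothing seen yet: all optimistic
        K = rbf(self.X, self.X) + self.noise * np.eye(len(self.X))
        k = rbf(x, self.X)
        mean = k @ np.linalg.solve(K, self.y)
        var = 1.0 - np.einsum('ij,ji->i', k, np.linalg.solve(K, k.T))
        return np.where(var > self.var_threshold, self.r_max, mean)
```

A query near observed data returns the GP mean, while a query far from all data has high predictive variance and falls back to the optimistic `r_max`, which is what drives the agent to explore unvisited regions.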

