Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning

@inproceedings{Maillard2013OptimalRB,
  title={Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning},
  author={Odalric-Ambrym Maillard and Phuong Nguyen and Ronald Ortner and Daniil Ryabko},
  booktitle={ICML},
  year={2013}
}
We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the…
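For context, the average-regret criterion mentioned in the abstract is standard in this line of work; a common formulation (the notation here is illustrative, not taken from the paper) compares the agent's accumulated reward to what an agent knowing a Markov representation could achieve:

```latex
% Regret after T steps, measured against the optimal average reward
% \rho^* of the MDP induced by a Markov state representation.
% All symbols (\Delta, \rho^*, r_t) are assumed notation, not the paper's.
\Delta(T) \;=\; T\,\rho^{*} \;-\; \sum_{t=1}^{T} r_t
```

Under this criterion, a sublinear bound \(\Delta(T) = o(T)\) means the agent's long-run average reward converges to \(\rho^{*}\) even though it must simultaneously identify which of its candidate representations yields an MDP.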
This paper has 26 citations.
