Parallel reinforcement learning with linear function approximation

@inproceedings{Grounds2007ParallelRL,
  title={Parallel reinforcement learning with linear function approximation},
  author={Matthew Jon Grounds and Daniel Kudenko},
  booktitle={AAMAS},
  year={2007}
}
In this paper, we investigate the use of parallelization in reinforcement learning (RL), with the goal of learning optimal policies for single-agent RL problems more quickly by using parallel hardware. Our approach is based on agents using the SARSA(λ) algorithm, with value functions represented using linear function approximators. In our proposed method, each agent learns independently in a separate simulation of the single-agent problem. The agents periodically exchange information extracted…
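
The abstract gives enough of the scheme to sketch: several SARSA(λ) learners with linear function approximation run in independent copies of the same single-agent problem and periodically pool what they have learned. The following Python sketch illustrates that idea only; the corridor environment, one-hot features, hyperparameters, and the weight-averaging merge step are assumptions made for this example and are not the paper's actual exchange mechanism.

# Minimal illustrative sketch (not the authors' exact method): parallel
# SARSA(lambda) learners with linear value-function approximation that
# periodically exchange information -- here, by averaging weight vectors.
import numpy as np

N_STATES, N_ACTIONS = 10, 2          # toy corridor MDP (an assumption for this example)
GAMMA, ALPHA, LAM, EPS = 0.95, 0.1, 0.9, 0.1

def features(state, action):
    """One-hot feature vector over (state, action) pairs."""
    phi = np.zeros(N_STATES * N_ACTIONS)
    phi[state * N_ACTIONS + action] = 1.0
    return phi

def step(state, action):
    """Toy dynamics: action 1 moves right, action 0 moves left; +1 reward at the right end."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == 0 or nxt == N_STATES - 1      # terminal at both ends
    return nxt, reward, done

def epsilon_greedy(w, state, rng):
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    q = [w @ features(state, a) for a in range(N_ACTIONS)]
    return int(np.argmax(q))

def sarsa_lambda_episodes(w, n_episodes, rng):
    """Run SARSA(lambda) with a linear approximator for a few episodes, updating w in place."""
    for _ in range(n_episodes):
        z = np.zeros_like(w)                    # eligibility trace
        s = N_STATES // 2
        a = epsilon_greedy(w, s, rng)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = epsilon_greedy(w, s2, rng)
            phi, phi2 = features(s, a), features(s2, a2)
            target = r + (0.0 if done else GAMMA * (w @ phi2))
            delta = target - w @ phi
            z = GAMMA * LAM * z + phi           # accumulating traces
            w += ALPHA * delta * z
            s, a = s2, a2
    return w

def parallel_learning(n_agents=4, rounds=20, episodes_per_round=10, seed=0):
    """Each agent learns in its own simulation; weights are merged periodically."""
    rngs = [np.random.default_rng(seed + i) for i in range(n_agents)]
    weights = [np.zeros(N_STATES * N_ACTIONS) for _ in range(n_agents)]
    for _ in range(rounds):
        # In a real setup each call would run on separate parallel hardware.
        weights = [sarsa_lambda_episodes(w, episodes_per_round, rng)
                   for w, rng in zip(weights, rngs)]
        # Illustrative exchange step (assumed here): average the learned weights.
        merged = np.mean(weights, axis=0)
        weights = [merged.copy() for _ in range(n_agents)]
    return weights[0]

if __name__ == "__main__":
    w = parallel_learning()
    print("Learned Q(middle state, move right):", w @ features(N_STATES // 2, 1))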
