Efficient Exploration in Reinforcement Learning Based on Utile Suffix Memory

@article{Pchelkin2003EfficientEI,
  title={Efficient Exploration in Reinforcement Learning Based on Utile Suffix Memory},
  author={A. Pchelkin},
  journal={Informatica},
  year={2003},
  volume={14},
  pages={237--250}
}
Reinforcement learning addresses the question of how an autonomous agent can learn to choose optimal actions to achieve its goals. Efficient exploration is of fundamental importance for autonomous agents that learn to act. Previous approaches to exploration in reinforcement learning typically assume that the environment is fully observable. In contrast, we study the case in which the environment is only partially observable. We consider different exploration techniques…
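As a rough illustration of the setting only, not of the paper's algorithm, the sketch below shows epsilon-greedy Q-learning over fixed-length observation suffixes in Python. The fixed-length window, the class name SuffixQAgent, and the parameters (suffix_len, alpha, gamma, epsilon) are assumptions made for this sketch; Utile Suffix Memory proper instead grows a variable-depth suffix tree over the agent's history.

```python
import random
from collections import defaultdict, deque


class SuffixQAgent:
    """Sketch: epsilon-greedy Q-learning over fixed-length observation
    suffixes. Simplification of Utile Suffix Memory, which grows a
    variable-depth suffix tree and splits nodes only where longer
    histories are 'utile' (predict reward better); here the depth is fixed."""

    def __init__(self, actions, suffix_len=3, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = list(actions)
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        self.history = deque(maxlen=suffix_len)  # recent observations
        self.q = defaultdict(float)              # Q[(suffix, action)] -> value
        self._prev = None                        # (suffix, action) of last step

    def act(self, observation):
        """Record the observation, then choose an action epsilon-greedily."""
        self.history.append(observation)
        suffix = tuple(self.history)
        if random.random() < self.epsilon:
            action = random.choice(self.actions)                           # explore
        else:
            action = max(self.actions, key=lambda a: self.q[(suffix, a)])  # exploit
        self._prev = (suffix, action)
        return action

    def learn(self, reward, next_observation):
        """One-step Q-learning backup using the suffix the agent sees next."""
        suffix, action = self._prev
        window = list(self.history) + [next_observation]
        next_suffix = tuple(window[-self.history.maxlen:])
        best_next = max(self.q[(next_suffix, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(suffix, action)] += self.alpha * (td_target - self.q[(suffix, action)])
```

A hypothetical driver loop would alternate agent.act(obs) and agent.learn(reward, next_obs) around an environment step; USM itself additionally uses statistical tests on future discounted returns to decide which suffix lengths are worth distinguishing.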
9 Citations
  • Optimal Contraction Theorem for Exploration–Exploitation Tradeoff in Search and Optimization
  • Exploration-exploitation tradeoffs in metaheuristics: Survey and analysis
  • On the sensitivity of the neural network implementing the principal component analysis method
  • International Journal INFORMATION THEORIES & APPLICATIONS
