Tile Coding Based on Hyperplane Tiles

@inproceedings{Loiacono2008TileCB,
  title={Tile Coding Based on Hyperplane Tiles},
  author={Daniele Loiacono and Pier Luca Lanzi},
  booktitle={EWRL},
  year={2008}
}

Abstract

In large and continuous state-action spaces, reinforcement learning relies heavily on function approximation. Tile coding is a well-known function approximator that has been successfully applied to many reinforcement learning tasks. In this paper we introduce hyperplane tile coding, in which the usual tiles are replaced by parameterized hyperplanes that approximate the action-value function. We compared the performance of hyperplane tile coding with the usual tile coding on three…
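
As a concrete illustration of the idea in the abstract: in standard tile coding each active tile contributes a single learned scalar to the value estimate, whereas in hyperplane tile coding each active tile contributes a learned linear function (a hyperplane) of the state. The Python sketch below follows that reading of the abstract; the grid layout, tiling offsets, and gradient update rule are illustrative assumptions, not the paper's exact formulation.

import numpy as np

class HyperplaneTileCoder:
    """Sketch of tile coding in which each tile stores a local linear model
    (a hyperplane over the state) instead of a single scalar weight.
    Grid layout, offsets, and update rule are illustrative assumptions."""

    def __init__(self, n_tilings, tiles_per_dim, low, high, alpha=0.1):
        self.n_tilings = n_tilings
        self.tiles_per_dim = np.asarray(tiles_per_dim)
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.dim = len(self.low)
        n_tiles = int(np.prod(self.tiles_per_dim))
        # One (slope vector, intercept) per tile, per tiling.
        self.params = np.zeros((n_tilings, n_tiles, self.dim + 1))
        # Offset each tiling by a fraction of a tile width, as in plain tile coding.
        tile_width = (self.high - self.low) / self.tiles_per_dim
        self.offsets = (np.arange(n_tilings)[:, None] / n_tilings) * tile_width
        self.alpha = alpha / n_tilings  # split the step size across tilings

    def _tile_index(self, s, t):
        # Map the (offset) state to a flat tile index within tiling t.
        scaled = (s - self.low + self.offsets[t]) / (self.high - self.low) * self.tiles_per_dim
        coords = np.clip(scaled.astype(int), 0, self.tiles_per_dim - 1)
        return int(np.ravel_multi_index(coords, self.tiles_per_dim))

    def value(self, s):
        s = np.asarray(s, dtype=float)
        x = np.append(s, 1.0)  # homogeneous coordinates [s, 1]
        # Each active tile contributes w . s + b; sum the contributions over tilings.
        return sum(self.params[t, self._tile_index(s, t)] @ x
                   for t in range(self.n_tilings))

    def update(self, s, target):
        s = np.asarray(s, dtype=float)
        x = np.append(s, 1.0)
        error = target - self.value(s)
        for t in range(self.n_tilings):
            # Gradient of each active tile's contribution w.r.t. its parameters is x.
            self.params[t, self._tile_index(s, t)] += self.alpha * error * x

A quick check with hypothetical numbers (mountain-car-like state bounds):

coder = HyperplaneTileCoder(n_tilings=8, tiles_per_dim=[8, 8],
                            low=[-1.2, -0.07], high=[0.6, 0.07])
coder.update([0.3, 0.01], target=1.0)
print(coder.value([0.3, 0.01]))  # estimate has moved toward 1.0

In this sketch, fixing all hyperplane slopes at zero and learning only the intercepts recovers standard tile coding, which is what makes the two approximators directly comparable.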
Citations

A .NET Reinforcement Learning Platform for Multiagent Systems
TLDR
The reinforcement learning platform presented here is designed specifically for use with the .NET framework and provides general support for developing solutions to reinforcement learning problems.

References

Function Approximation via Tile Coding: Automating Parameter Choice
TLDR
This paper demonstrates that the performance of tile coding is quite sensitive to its parameterization and that no single parameterization achieves the best performance throughout the learning curve; it contributes an automated technique for adjusting tile-coding parameters online.
Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding
TLDR
It is concluded that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
Generalization in Reinforcement Learning: Safely Approximating the Value Function
TLDR
It is shown that straightforward value-function approximation is not robust and, even in very benign cases, may produce an entirely wrong policy; Grow-Support, a new algorithm that is safe from divergence yet can still reap the benefits of successful generalization, is introduced.
Comparison of CMACs and radial basis functions for local function approximators in reinforcement learning
TLDR
This work examines the similarities and differences between CMACs, RBFs and normalized RBFs, and compares the performance of Q-learning with each representation applied to the mountain car problem.
Reinforcement Learning: An Introduction
TLDR
This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
TD-Gammon: A Self-Teaching Backgammon Program
TLDR
This chapter describes TD-Gammon, a neural network that is able to teach itself to play backgammon solely by playing against itself and learning from the results, and is apparently the first application of this algorithm to a complex nontrivial task.
Introduction to Reinforcement Learning
TLDR
In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning.
Gain Adaptation Beats Least Squares
I present computational results suggesting that gain-adaptation algorithms based in part on connectionist learning methods may improve over least squares and other classical parameter-estimation methods.
Abstraction, Reformulation and Approximation, 6th International Symposium, SARA 2005, Airth Castle, Scotland, UK, July 26-29, 2005, Proceedings
Adaptive Filter Theory