Function Approximation With XCS: Hyperellipsoidal Conditions, Recursive Least Squares, and Compaction

@article{Butz2008FunctionAW,
  title={Function Approximation With XCS: Hyperellipsoidal Conditions, Recursive Least Squares, and Compaction},
  author={Martin Volker Butz and Pier Luca Lanzi and Stewart W. Wilson},
  journal={IEEE Transactions on Evolutionary Computation},
  year={2008},
  volume={12},
  pages={355--376}
}
An important strength of learning classifier systems (LCSs) lies in the combination of genetic optimization techniques with gradient-based approximation techniques. The chosen approximation technique develops locally optimal approximations, such as accurate classification estimates, Q-value predictions, or linear function approximations. The genetic optimization technique is designed to distribute these local approximations efficiently over the problem space. Together, the two components… 
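The recursive least squares (RLS) technique named in the title can be sketched as a per-classifier online update of linear prediction weights. The sketch below is a generic RLS update under assumed notation (`w`, `P`, `lam` are illustrative names, not the paper's own symbols):

```python
import numpy as np

def rls_update(w, P, x, y, lam=1.0):
    """One recursive-least-squares update of linear prediction weights w
    given an input vector x (with a bias term appended) and target y.
    P is the inverse-covariance (gain) matrix; lam is a forgetting factor.
    Names and defaults are illustrative, not the paper's exact notation."""
    x = np.asarray(x, dtype=float)
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    err = y - w @ x                  # prediction error before the update
    w = w + k * err                  # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse-covariance update
    return w, P

# usage: fit y = 2*x + 1 online from four samples
w = np.zeros(2)
P = np.eye(2) * 1000.0  # large initial gain => weak implicit regularization
for xv in [0.0, 1.0, 2.0, 3.0]:
    w, P = rls_update(w, P, np.array([xv, 1.0]), 2 * xv + 1)
```

After the four updates, `w` is close to `[2, 1]`; in XCSF each classifier would maintain its own `w` and `P` for its local subspace.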
Function approximation with LWPR and XCSF: a comparative study
TLDR
This study compares two non-parametric regression algorithms that are able to approximate multi-dimensional, non-linear functions online and explores the trade-off between those criteria on three benchmark problems by means of an intensive grid search for Pareto-optimal solutions.
A comparative study: function approximation with LWPR and XCSF
TLDR
This work compares two algorithms that are able to approximate multi-dimensional, non-linear functions online; one of them, Locally Weighted Projection Regression (LWPR), is a statistics-based machine learning technique that is widely used for function approximation, particularly in robotics.
Guided evolution in XCSF
High-dimensional problems are challenging for iterative, online (Michigan-style) Learning Classifier Systems, especially because of the large size of the evolutionary search space. The present work
Optimality-Based Analysis of XCSF Compaction in Discrete Reinforcement Learning
TLDR
This work applies XCSF to a deterministic and stochastic variant of the FrozenLake8x8 environment from OpenAI Gym, with its performance compared in terms of function approximation error and policy accuracy to the optimal Q-functions and policies produced by solving the environments via dynamic programming.
What about interpolation?: a radial basis function approach to classifier prediction modeling in XCSF
TLDR
A novel approach to model a classifier's payoff prediction using Radial Basis Function interpolation as a new means to capture the underlying function surface complexity and results reveal that the RBF-based classifier prediction outperforms the n-th order polynomial approximation on several test functions of varying complexity.
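The RBF-based prediction modeling described above can be illustrated with a generic Gaussian-kernel interpolation sketch. This is an assumption-laden illustration of RBF interpolation in general, not the paper's exact classifier prediction model:

```python
import numpy as np

def rbf_interpolate(centers, values, x, gamma=1.0):
    """Radial-basis-function interpolation with a Gaussian kernel:
    solve for weights so the interpolant passes exactly through
    (centers, values), then evaluate it at query point x.
    A generic sketch; the paper's model and parameters may differ."""
    C = np.asarray(centers, dtype=float)
    # pairwise squared distances between the interpolation centers
    D = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * D)                      # interpolation matrix
    w = np.linalg.solve(Phi, np.asarray(values, dtype=float))
    d = ((C - np.asarray(x, dtype=float)) ** 2).sum(-1)
    return float(np.exp(-gamma * d) @ w)
```

By construction the interpolant reproduces the sampled values at the centers, which is what lets it capture local surface complexity that a fixed-order polynomial may miss.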
Interpolation-based classifier generation in XCSF
TLDR
It is shown that by means of the proposed strategies for integrating the locally interpolated values, the overall performance of XCSF can be improved.
Modularization of XCSF for multiple output dimensions
TLDR
It is shown that the more local linearity structures differ, the more a modularized approximation by multiple XCSF instances pays off, and if modularization is not applied, the problem complexity may increase exponentially in the number of approximately orthogonally-structured output dimensions.
An adaption mechanism for the error threshold of XCSF
TLDR
To enable XCSF to reliably approximate functions unknown at design-time, this work proposes the use of an error threshold, which is adapted at run-time based on the currently achieved approximation error.
A pareto following variation operator for fast-converging multiobjective evolutionary algorithms
TLDR
This work introduces a simple approach that has comparatively smaller computational cost and develops this model as a variation operator that can be used in any kind of multiobjective optimizer, and compares the performance of the algorithm with respect to the total number of function evaluations and the hypervolume metric.
High-dimensional Function Optimisation by Reinforcement Learning
TLDR
Simulation studies show that FORL, using a smaller number of FEs, offers better performance in finding accurate solutions, in particular for high-dimensional multimodal function optimisation problems.

References

Showing 1-10 of 74 references
A formal framework and extensions for function approximation in learning classifier systems
TLDR
The formalizations in this paper act as the foundation of a currently actively developed formal framework that includes all three LCS components, promising a better formal understanding of current LCS and the development of better LCS algorithms.
Kernel-based, ellipsoidal conditions in the real-valued XCS classifier system
TLDR
It is shown that modifying the XCS classifier conditions to hyperspheres and hyperellipsoids yields improved performance on continuous functions and that XCS is readily applicable in diverse problem domains.
Generalization in the XCSF Classifier System: Analysis, Improvement, and Extension
TLDR
It is shown that the types of generalizations evolved by XCSF can be influenced by the input range and that while all three approaches significantly improve XCSF, least squares approaches appear to be best performing and most robust.
Analysis and Improvement of Fitness Exploitation in XCS: Bounding Models, Tournament Selection, and Bilateral Accuracy
TLDR
This paper investigates how, when, and where accuracy-based fitness results in successful rule evolution in XCS, and introduces improvements to XCS to make fitness pressure more robust and overcome the fitness dilemma.
Classifiers that approximate functions
A classifier system, XCSF, is introduced in which the prediction estimation mechanism is used to learn approximations to functions. The addition of weight vectors to the classifiers
Using convex hulls to represent classifier conditions
TLDR
A novel representation of classifier conditions based on convex hulls, represented by sets of points in the problem space, is presented and its performance is compared to that of XCSF with interval conditions.
'Neural-gas' network for vector quantization and its application to time-series prediction
TLDR
It is shown that the dynamics of the reference (weight) vectors during the input-driven adaptation procedure are determined by the gradient of an energy function whose shape can be modulated through a neighborhood-determining parameter and resemble the dynamics of Brownian particles moving in a potential determined by the data point density.
Bounding the Population Size to Ensure Niche Support in XCS
Michigan-style learning classifier systems evolve a problem solution maintaining a set of sub-solutions distributed to potentially overlapping problem subspaces. Together, the set of sub-solutions,
A Ruleset Reduction Algorithm for the XCS Learning Classifier System
TLDR
A new reduction algorithm is described which is shown to be similarly effective to Wilson's recent technique, but with considerably more favourable time complexity, and it is suggested that it may be preferable to Wilson's algorithm in many cases with particular requirements concerning the speed/performance tradeoff.
Classifier Fitness Based on Accuracy
TLDR
A classifier system, XCS, is investigated, in which each classifier maintains a prediction of expected payoff, but the classifier's fitness is given by a measure of the prediction's accuracy, making it suitable for a wide range of reinforcement learning situations where generalization over states is desirable.
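The accuracy-based fitness described in this reference follows the standard XCS accuracy function, in which sub-threshold prediction error counts as fully accurate and larger errors are penalized by a power law. The parameter defaults below are conventional XCS values, assumed for illustration rather than taken from this particular paper:

```python
def xcs_accuracy(eps, eps0=0.01, alpha=0.1, nu=5.0):
    """Standard XCS accuracy function: a classifier whose prediction
    error eps is below the threshold eps0 is considered fully accurate;
    above it, accuracy drops to alpha * (eps/eps0)^(-nu).
    Parameter values are typical defaults, not this paper's settings."""
    if eps < eps0:
        return 1.0
    return alpha * (eps / eps0) ** (-nu)
```

Fitness is then derived from this accuracy relative to the other classifiers in the same action set, which is what drives evolution toward accurate, general rules.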