Accuracy-Based Learning Classifier Systems: Models, Analysis and Applications to Classification Tasks

@article{BernadMansilla2003AccuracyBasedLC,
  title={Accuracy-Based Learning Classifier Systems: Models, Analysis and Applications to Classification Tasks},
  author={Ester Bernad{\'o}-Mansilla and Josep Maria Garrell i Guiu},
  journal={Evolutionary Computation},
  year={2003},
  volume={11},
  pages={209-238}
}
Recently, Learning Classifier Systems (LCS) and particularly XCS have arisen as promising methods for classification tasks and data mining. This paper investigates two models of accuracy-based learning classifier systems on different types of classification problems. Departing from XCS, we analyze the evolution of a complete action map as a knowledge representation. We propose an alternative, UCS, which evolves a best action map more efficiently. We also investigate how the fitness pressure… 
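The complete versus best action map distinction is easiest to see in code. The sketch below is a simplified illustration, not the authors' implementation: an XCS-style rule exists for every (condition, action) pair and tracks a payoff prediction and its error, while a UCS-style rule advocates a single class and derives its fitness directly from classification accuracy. The parameter values and field names are assumptions made for the example.

```python
from dataclasses import dataclass

BETA = 0.2   # learning rate (a typical XCS-style setting, assumed here)
NU = 10      # exponent applied to accuracy in the UCS-style fitness (assumed)

@dataclass
class XCSRule:
    """Complete action map: one rule per (condition, action) pair."""
    condition: str             # e.g. "01#1" over binary attributes, '#' = don't care
    action: int
    prediction: float = 10.0   # estimated payoff for taking `action` when matched
    error: float = 0.0         # running estimate of the prediction error

    def update(self, payoff: float) -> None:
        self.error += BETA * (abs(payoff - self.prediction) - self.error)
        self.prediction += BETA * (payoff - self.prediction)

@dataclass
class UCSRule:
    """Best action map: the rule advocates a single class."""
    condition: str
    cls: int
    matched: int = 0
    correct: int = 0

    def update(self, true_class: int) -> None:
        self.matched += 1
        self.correct += int(self.cls == true_class)

    @property
    def accuracy(self) -> float:
        return self.correct / self.matched if self.matched else 0.0

    @property
    def fitness(self) -> float:
        # accuracy raised to a power creates the fitness pressure toward accurate rules
        return self.accuracy ** NU
```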
Strength-based learning classifier systems revisited: Effective rule evolution in supervised classification tasks
TLDR
This work presents an investigation of strength-based LCS in the domain of supervised classification; extensive analysis of the learning dynamics involved in these systems demonstrates their potential as real-world data mining tools, inducing tractable rule-based classification models even in the presence of severe class imbalances.
On Taxonomy and Evaluation of Feature Selection‐Based Learning Classifier System Ensemble Approaches for Data Mining Problems
TLDR
A conceptual framework is proposed that allows ensemble-based methods to be appropriately categorized for fair comparison and highlights gaps in the corresponding literature; rough set feature selection-based LCS ensemble methods are then compared.
Performance analysis of rough set ensemble of learning classifier systems with differential evolution based rule discovery
TLDR
The rough set-based ensemble learning approach and differential evolution-based rule discovery outperform the base LCS in classification accuracy over the data sets considered, and the results show that a small ensemble size is sufficient to obtain good performance.
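For readers unfamiliar with the rule-discovery operator named here, the sketch below shows the standard DE/rand/1 step that differential evolution typically builds on; the assumption that rule parameters are plain real-valued vectors (e.g., the interval bounds of a condition) is made purely for illustration and is not taken from the paper.

```python
import random

def de_trial_vector(target, population, f=0.5, cr=0.9):
    """Standard DE/rand/1 step: mutate three random parents, then cross with `target`.

    `target` and every member of `population` are lists of floats (e.g. the
    interval bounds of a rule condition in this illustrative setting).
    """
    r1, r2, r3 = random.sample([p for p in population if p is not target], 3)
    mutant = [a + f * (b - c) for a, b, c in zip(r1, r2, r3)]
    j_rand = random.randrange(len(target))  # guarantee at least one gene from the mutant
    return [m if (random.random() < cr or j == j_rand) else t
            for j, (t, m) in enumerate(zip(target, mutant))]
```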
Neural-Based Learning Classifier Systems
TLDR
This paper proposes a novel way to incorporate NNs into UCS by using a simple artificial NN as the classifier's action, and obtains a more compact population, better generalization, and the same or better accuracy while maintaining a reasonable level of expressiveness.
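As a rough illustration of using a small neural network as a classifier's action, the toy rule below computes its class label from the matched input with one linear unit per class; the architecture, matching scheme, and sizes are assumptions for the sketch, not the design used in the paper.

```python
import random

class NeuralActionRule:
    """Toy rule whose action is computed by a tiny linear network over the input."""

    def __init__(self, n_inputs: int, n_classes: int):
        self.condition = ["#"] * n_inputs   # '#' everywhere: matches any input in this toy
        self.weights = [[random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
                        for _ in range(n_classes)]

    def action(self, x: list[float]) -> int:
        scores = [sum(w * xi for w, xi in zip(row, x)) for row in self.weights]
        return max(range(len(scores)), key=scores.__getitem__)
```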
Adaptive artificial datasets through learning classifier systems for classification tasks
TLDR
The idea is to tune the datasets autonomously so that the problem characteristics can be determined efficiently, allowing the learning bounds of the classification agent to be tested empirically while lowering human involvement.
Genetic-based machine learning systems are competitive for pattern recognition
TLDR
The state of the art in GBML is reviewed, some of the best representatives of different families are selected, and the accuracy and interpretability of their models are compared; the comparison can be used as a recommendation guideline on which systems to employ depending on whether the user prefers to maximize the accuracy or the interpretability of the models.
Problem Driven Machine Learning by Co-evolving Genetic Programming Trees and Rules in a Learning Classifier System
TLDR
This paper hypothesizes that if an LCS population includes and co-evolves two disparate representations, then the system can adapt the appropriate representation to best capture meaningful patterns of association, regardless of the complexity of that association or the nature of the endpoint.
Towards better generalization in Pittsburgh learning classifier systems
TLDR
Experimental results on various benchmark classification problems reveal that EDARIC has better generalization ability on both standard and imbalanced datasets compared to many existing algorithms in the literature.
How should learning classifier systems cover a state-action space?
TLDR
The proposed learning strategy improves the stability of XCS performance compared with existing strategies on all types of noise employed in this paper, and supports the claim that existing learning strategies depend on the type of noise in reinforcement learning problems.
Adaptive artificial datasets through learning classifier systems for classification tasks
TLDR
An autonomous classification problem generation approach is proposed to tune the problem's difficulty such that its characteristics may be determined effectively; this framework can empirically test the learning bounds of the classification agent whilst lowering human involvement.

References

An Analysis of Generalization in the XCS Classifier System
  • P. Lanzi
  • Evolutionary Computation
  • 1999
TLDR
It is shown that XCS's generalization mechanism is effective, but that the conditions under which it works must be clearly understood, and the compactness of the representation evolved by XCS is limited by the number of instances of each generalization actually present in the environment.
Classifier Fitness Based on Accuracy
TLDR
A classifier system, XCS, is investigated, in which each classifier maintains a prediction of expected payoff, but the classifier's fitness is given by a measure of the prediction's accuracy, making it suitable for a wide range of reinforcement learning situations where generalization over states is desirable.
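The core idea, fitness derived from the accuracy of the payoff prediction rather than from the prediction itself, can be sketched as follows. This follows the commonly cited XCS parameterization (epsilon_0, alpha, nu, beta); the constants and the assumption that rules carry `error`, `numerosity`, and `fitness` fields are illustrative, not taken from this reference.

```python
def xcs_accuracy(error, epsilon_0=10.0, alpha=0.1, nu=5.0):
    """Accuracy kappa: 1 for low-error rules, sharply discounted above epsilon_0."""
    if error < epsilon_0:
        return 1.0
    return alpha * (error / epsilon_0) ** (-nu)

def update_fitness(action_set, beta=0.2):
    """Move each rule's fitness toward its accuracy relative to the action set."""
    total = sum(xcs_accuracy(r.error) * r.numerosity for r in action_set) or 1.0
    for r in action_set:
        relative_accuracy = xcs_accuracy(r.error) * r.numerosity / total
        r.fitness += beta * (relative_accuracy - r.fitness)
```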
Instance-based learning algorithms
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks.
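A minimal sketch of the instance-based idea, assuming plain numeric feature vectors: classification stores the training instances and returns the label of the nearest stored instance (a simplified 1-nearest-neighbour rule, not the specific IB algorithms evaluated in the paper).

```python
def nn_classify(query, instances):
    """instances: iterable of (feature_vector, label) pairs; returns the nearest label."""
    def squared_distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(instances, key=lambda inst: squared_distance(inst[0], query))[1]
```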
XCS and GALE: A Comparative Study of Two Learning Classifier Systems on Data Mining
This paper compares the learning performance, in terms of prediction accuracy, of two genetic-based learning systems, XCS and GALE, with six well-known learning algorithms, coming from instance-based…
Implicit Niching in a Learning Classifier System: Nature's Way
TLDR
This work isolates one crucial subfunction of the LCS learning algorithm, covering through niching, and brings results to bear on understanding the fundamental type of cooperation (so-called weak cooperation) that an LCS must promote.
XCS and the Monk's Problems
TLDR
It is demonstrated that XCS is able to produce classification performance and a rule set that exceed those of most current machine learning techniques when applied to the Monk's problems.
XCS Classifier System Reliably Evolves Accurate, Complete, and Minimal Representations for Boolean Functions
Wilson’s recent XCS classifier system forms complete mappings of the payoff environment in the reinforcement learning tradition thanks to its accuracy-based fitness. According to Wilson’s…
What Makes a Problem Hard for XCS?
TLDR
This work considers several dimensions of problem complexity for Wilson's accuracy-based XCS, examines their interactions, identifies bounding cases of difficulty, and proposes complexity metrics for XCS to make the task more tractable.
Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms
TLDR
This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task and measures the power (ability to detect algorithm differences when they do exist) of these tests.
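One of the tests commonly associated with this comparison problem is McNemar's test; the sketch below computes its continuity-corrected statistic from the two algorithms' disagreement counts. It is given only as an illustration of the kind of test reviewed, not as a summary of the article's recommendations.

```python
def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar statistic.

    b: examples misclassified only by algorithm A
    c: examples misclassified only by algorithm B
    Compare against the chi-squared threshold 3.84 for a 0.05 significance level.
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```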
C4.5: Programs for Machine Learning
TLDR
A complete guide to the C4.5 system as implemented in C for the UNIX environment, which starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting.
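C4.5 chooses decision-tree splits by gain ratio; the sketch below computes that criterion for a categorical attribute, ignoring continuous attributes, missing values, and pruning, so it is a simplified illustration rather than the system described in the book.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values()) if n else 0.0

def gain_ratio(labels, attribute_values):
    """labels[i] is the class of example i; attribute_values[i] its value for the split."""
    n = len(labels)
    groups = {}
    for value, label in zip(attribute_values, labels):
        groups.setdefault(value, []).append(label)
    gain = entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = -sum((len(g) / n) * log2(len(g) / n) for g in groups.values())
    return gain / split_info if split_info else 0.0
```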