Self organizing classifiers: first steps in structured evolutionary machine learning

@article{Vargas2013SelfOC,
  title={Self organizing classifiers: first steps in structured evolutionary machine learning},
  author={Danilo Vasconcellos Vargas and Hirotaka Takano and Junichi Murata},
  journal={Evolutionary Intelligence},
  year={2013},
  volume={6},
  pages={57--72}
}
Learning classifier systems (LCSs) are evolutionary machine learning algorithms flexible enough to be applied to reinforcement, supervised, and unsupervised learning problems with good performance. Recently, self organizing classifiers were proposed, which are similar to LCSs but have the advantage that in their structured population no balance between niching and fitness pressure is necessary. However, more tests and analysis are required to verify their benefits. Here, a variation of the first… 

Novelty-organizing classifiers applied to classification and reinforcement learning: towards flexible algorithms

A new and simpler way to abstract supervised learning for any reinforcement learning algorithm, called Novelty-Organizing Classifiers, is developed based on a Novelty Map population that focuses more on the novelty of the inputs than on their frequency.

Novelty-organizing team of classifiers - A team-individual multi-objective approach to reinforcement learning

A multi-objective reinforcement learning algorithm is proposed with a structured novelty map population evolving feedforward neural models that outperforms a gradient-based continuous input-output state-of-the-art algorithm in two problems.

Novelty-organizing team of classifiers in noisy and dynamic environments

Novelty-Organizing Team of Classifiers (NOTC) is applied to the continuous-action mountain car as well as two variations of it: a noisy mountain car and an unstable-weather mountain car, revealing a trade-off between the approaches.

Robust optimization through neuroevolution

The comparison of different algorithms indicates that the CMA-ES and xNES methods, which operate by optimizing a distribution of parameters, represent the best options for the evolution of robust neural network controllers.

Connection-Aware Spectrum-Diversity for Neuroevolution

Experiments show that connection-aware spectrum diversity allows better results to arise over the course of evolution, explained by the fact that neural networks with a low number of connections are kept even when increasing the number of connections might slightly improve the results.

One-Pixel Attack: Understanding and Improving Deep Neural Networks with Evolutionary Computation

Recently, the one-pixel attack showed that deep neural networks (DNNs) can be made to misclassify by changing only one pixel; the promise of evolutionary computation is shown both as a way to investigate the robustness of DNNs and as a way to improve their robustness through hybrid systems, evolution of architectures, and other approaches.

One Pixel Attack for Fooling Deep Neural Networks

This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
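
The core loop behind such an attack can be sketched in a few lines. The following is a minimal, self-contained illustration (not the paper's implementation): a plain DE/rand/1 loop in pure Python that searches over a single pixel's coordinates and value so as to minimize the classifier's confidence in the true class. The `score_fn` interface, the encoding `(row, col, value)`, and all parameter values are illustrative assumptions.

```python
import random

def one_pixel_attack(image, score_fn, pop_size=20, iters=30, seed=0):
    """Minimal differential-evolution sketch of a one-pixel attack.

    `image` is a 2D list of grayscale values in [0, 1]; `score_fn(img)`
    returns the classifier's confidence in the true class (to be minimized).
    Each candidate encodes (row, col, value) for a single pixel change.
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])

    def clip(v, lo, hi):
        return max(lo, min(hi, v))

    def apply(cand):
        # Decode a candidate into a perturbed copy of the image.
        r, c = int(cand[0]) % h, int(cand[1]) % w
        out = [row[:] for row in image]
        out[r][c] = clip(cand[2], 0.0, 1.0)
        return out

    # Random initial population and its scores.
    pop = [[rng.uniform(0, h), rng.uniform(0, w), rng.random()]
           for _ in range(pop_size)]
    scores = [score_fn(apply(p)) for p in pop]

    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation: a + F * (b - c), with F = 0.5.
            trial = [a[k] + 0.5 * (b[k] - c[k]) for k in range(3)]
            s = score_fn(apply(trial))
            if s < scores[i]:  # greedy selection: keep the better candidate
                pop[i], scores[i] = trial, s

    best = min(range(pop_size), key=lambda i: scores[i])
    return apply(pop[best]), scores[best]
```

Because `score_fn` only needs the model's output confidence, not its gradients, the search is black-box, which is the property the paper exploits.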

Understanding the One Pixel Attack: Propagation Maps and Locality Analysis

Propagation maps reveal that, even in extremely deep networks such as ResNet, a modification to one pixel easily propagates to the last layer; this initially local perturbation is also shown to spread and become a global one, reaching absolute difference values close to the maximum value of the original feature maps in a given layer.
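
The propagation-map idea itself is simple to illustrate: feed the clean and the one-pixel-perturbed input through the same layers and record the per-layer absolute difference of activations. The sketch below is a toy stand-in (not the paper's code): a 1D averaging "convolution" replaces real convolutional feature maps, but it shows the same effect of a single-point perturbation widening with depth.

```python
def conv1d(x, kernel=(0.25, 0.5, 0.25)):
    """Toy 1D convolution with zero padding (stands in for a conv layer)."""
    k = len(kernel) // 2
    padded = [0.0] * k + list(x) + [0.0] * k
    return [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(x))]

def propagation_map(x, x_adv, depth=4):
    """Per-layer absolute differences between clean and perturbed activations."""
    maps = []
    a, b = list(x), list(x_adv)
    for _ in range(depth):
        a, b = conv1d(a), conv1d(b)
        maps.append([abs(u - v) for u, v in zip(a, b)])
    return maps
```

With a 3-tap kernel the nonzero support of the difference grows by two positions per layer, a toy analogue of the local-to-global spreading the paper measures in ResNet.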

The Relationship Between (Un)Fractured Problems and Division of Input Space

A study is presented showing that while fractured problems benefit from a finer division of the input space, unfractured problems benefit from a coarser division of the input space.

References

Self organizing classifiers and niched fitness

A new algorithm called Self Organizing Classifiers is proposed which faces this problem from a different perspective: instead of balancing the pressures, the two pressures are separated and no balance is necessary.

Learning classifier systems

A gentle introduction to LCSs and their general functionality is provided and the current theoretical understanding of the systems is surveyed, followed by a suite of current successful LCS applications and the most promising areas for future applications and research directions.

Learning classifier systems: a complete introduction, review, and roadmap

This paper aims to provide an accessible foundation for researchers of different backgrounds interested in selecting or developing their own LCS, including a simple yet thorough introduction, a historical review, and a roadmap of algorithmic components, emphasizing differences in alternative LCS implementations.

Accuracy-based Neuro And Neuro-fuzzy Classifier Systems

Results from the use of neural network-based representation schemes within the accuracy-based XCS are presented and the new representation scheme is shown to produce systems where outputs are a function of the inputs.

The parameterless self-organizing map algorithm

The relative performance of the PLSOM and the SOM is discussed, some tasks in which the SOM fails but the PLSOM performs satisfactorily are demonstrated, and a proof of ordering under certain limited conditions is presented.

Dynamic self-organizing maps with controlled growth for knowledge discovery

The growing self-organizing map (GSOM) is presented in detail and the effect of a spread factor, which can be used to measure and control the spread of the GSOM, is investigated.

Fuzzy-XCS: A Michigan Genetic Fuzzy System

The intention of this contribution is to propose an approach to properly develop a fuzzy XCS system for single-step reinforcement problems.

Cellular Evolutionary Algorithms: Evaluating the Influence of Ratio

It is found that, with the same neighborhood, rectangular grids have some advantages in multimodal and epistatic problems, while square ones are more efficient for solving deceptive problems and for simple function optimization.

Intrinsically motivated model learning for a developing curious agent

  • Todd Hester, P. Stone
  • Computer Science
  • 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL), 2012
Experiments show that combining the agent's intrinsic rewards with external task rewards enables the agent to learn faster than using external rewards alone, and that the learned model can be used afterward to perform tasks in the domain.

Parallel Problem Solving from Nature — PPSN VII

With relatively little effort, scaling laws that quite accurately describe the behavior of the strategy and that greatly contribute to its understanding are derived and their implications are discussed.