Critical factors in the performance of HyperNEAT

@inproceedings{Berg2013CriticalFI,
  title={Critical factors in the performance of HyperNEAT},
  author={Thomas G. van den Berg and Shimon Whiteson},
  booktitle={Annual Conference on Genetic and Evolutionary Computation},
  year={2013}
}
HyperNEAT is a popular indirect encoding method for evolutionary computation that has performed well on a number of benchmark tasks. This paper presents a series of experiments designed to examine the critical factors for its success. First, we determine the fewest hidden nodes a genotypic network needs to solve several of these tasks. Our results show that all of these tasks are easy: they can be solved with at most one hidden node and require generating only trivial regular patterns. Then, we… 
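
For context on what these "genotypic networks" compute: HyperNEAT evolves a CPPN and queries it with the coordinates of every pair of substrate nodes to obtain connection weights. The sketch below is a minimal illustration of that query loop, with a hand-written Gaussian standing in for an evolved CPPN and a small 3x3 substrate chosen purely for demonstration; it is not the paper's experimental setup.

```python
import math

def cppn(x1, y1, x2, y2):
    # Hand-written stand-in for an evolved CPPN: a Gaussian of the
    # distance between source and target node, yielding exactly the kind
    # of smooth, highly regular weight pattern the paper discusses.
    return math.exp(-((x1 - x2) ** 2 + (y1 - y2) ** 2))

def build_substrate_weights(coords, threshold=0.2):
    # Query the CPPN once per source/target pair on a fixed substrate;
    # connections whose output magnitude falls below the threshold are
    # left unexpressed, as in standard HyperNEAT.
    weights = {}
    for src in coords:
        for dst in coords:
            w = cppn(*src, *dst)
            if abs(w) > threshold:
                weights[(src, dst)] = w
    return weights

# Illustrative 3x3 grid of substrate nodes.
grid = [(x / 2, y / 2) for x in range(-1, 2) for y in range(-1, 2)]
print(len(build_substrate_weights(grid)), "expressed connections")
```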

Using GP Is NEAT: Evolving Compositional Pattern Production Functions

This article presents a comparison of different Evolutionary Computation methods to evolve Compositional Pattern Production Functions: structures that have the same goal as CPPNs, but that are encoded as functions instead of networks.

CPPNs Effectively Encode Fracture: A Response to Critical Factors in the Performance of HyperNEAT

It is demonstrated that compositional pattern producing networks (CPPNs) can produce phenotypic patterns that exhibit fracture and that when neural networks are evolved with CPPNs, as in the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach, the algorithm effectively incorporates hidden nodes to improve performance.
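
As an illustration of fracture, a single step-like node inside a CPPN's function composition is enough to produce an abrupt boundary in an otherwise regular pattern. The composition below is a hand-written toy, not one evolved or taken from the paper:

```python
import math

def fractured_cppn(x, y):
    # Toy CPPN composition: the sine node keeps the pattern regular,
    # while the step-like node (the conditional) introduces an abrupt
    # boundary along y = sin(x) -- the discontinuity "fracture" names.
    return math.sin(3.0 * x) * (1.0 if y > math.sin(x) else -1.0)

# Sample the pattern on a coarse grid; the boundary shows up as a
# visible break in the stripes.
for row in range(9):
    y = 1.0 - row / 4.0
    print("".join("#" if fractured_cppn(col / 4.0 - 2.0, y) > 0 else "."
                  for col in range(17)))
```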

POET: An Evo-Devo Method to Optimize the Weights of Large Artificial Neural Networks

The results demonstrate that even large networks such as those required for image classification can be effectively automatically designed by the proposed evolutionary developmental method.

Hybrid Neuroevolution – K. Brejchová

This thesis proposes a hybrid approach that combines the neuroevolutionary algorithm HyperNEAT with the gradient-based algorithm DQN, and concludes that the main challenge in combining the two algorithms is the differing interpretability of their outputs.

Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms

Two new methods to improve HybrID are tested, eliminating the need to manually specify when to switch from indirect to direct encoding and suggesting a path forward for automatically and simultaneously combining the best traits of both encodings.
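
A minimal sketch of one plausible automatic trigger for the indirect-to-direct switch, assuming a stagnation-based rule; the actual methods tested in the paper may differ:

```python
def should_switch(best_fitness_history, patience=10, eps=1e-6):
    # One plausible automatic trigger for HybrID's switch: the best
    # fitness has improved by less than eps over the last `patience`
    # generations, suggesting the indirect encoding has exhausted the
    # regular structure it can exploit.
    recent = best_fitness_history[-patience:]
    return len(recent) == patience and max(recent) - min(recent) <= eps

# Example: a toy fitness curve that plateaus after generation 5; the
# switch fires once ten flat generations have accumulated.
history = []
for gen in range(20):
    history.append(float(min(gen, 5)))
    if should_switch(history):
        print("switch to direct encoding at generation", gen)
        break
```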

Hybrid Self-Attention NEAT: A novel evolutionary approach to improve the NEAT algorithm

The main conclusion is that, by using self-attention to reduce the input space, Hybrid Self-Attention NEAT can lift the original NEAT algorithm's restriction to low-dimensional inputs.

ALF: a fitness-based artificial life form for evolving large-scale neural networks

A new TWEANN algorithm, Artificial Life Form (ALF), is proposed with the following technical advancements: speciation via structural and semantic similarity to form better candidate solutions, dynamic adaptation of the observed candidate solutions for better convergence properties, and integration of solution quality into genetic reproduction to increase the probability of optimization success.

NEAT for large-scale reinforcement learning through evolutionary feature learning and policy gradient search

A new reinforcement learning scheme based on NEAT is proposed with two key technical advancements: a three-stage learning scheme that clearly separates feature learning from policy learning, allowing effective knowledge sharing and learning across multiple agents, and the use of policy gradient search for the policy-learning stage.

Rocket Learn

A Proximal Policy Optimization (PPO) model was found that was able to learn to play the game and consistently increase its reward over time; the approach is recommended for future tasks in similar domains.
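
The report does not say which implementation was used; the sketch below assumes Stable-Baselines3 and a Gymnasium environment as stand-ins for the project's actual game environment:

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Stand-in environment; the project itself targets a game environment,
# which would be wrapped the same way.
env = gym.make("CartPole-v1")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)  # episode reward should rise steadily

# Quick rollout with the trained policy.
obs, _ = env.reset()
for _ in range(500):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```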

Draft: Deep Learning in Neural Networks: An Overview

This historical survey compactly summarises relevant work, much of it from the previous millennium, that may be viewed as a continually evolving, deep network of scientists who have influenced each other in complex ways.

References

Investigating whether hyperNEAT produces modular neural networks

The first documented case of HyperNEAT producing a modular phenotype is presented, but the inability to encourage modularity on harder problems where it would have been beneficial suggests that more work is needed to increase the likelihood that HyperNEAT and similar algorithms produce modular ANNs in response to challenging, decomposable problems.

On the Performance of Indirect Encoding Across the Continuum of Regularity

This paper presents the first comprehensive study showing that phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases, and suggests a path forward that combines indirect encodings with a separate process of refinement.

A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks

The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.

A novel generative encoding for evolving modular, regular and scalable networks

DSE significantly outperforms HyperNEAT on a pattern recognition problem, suggesting that its potential lies not just in the properties of the networks it produces but also in its ability to compete with leading encodings at solving challenging problems.

How a Generative Encoding Fares as Problem-Regularity Decreases

As the regularity of the problem decreases, the performance of the generative representation degrades to, and then underperforms, the direct encoding, yet tends to be consistent for different types of problem regularity.

Constraining connectivity to encourage modularity in HyperNEAT

This paper investigates how altering the traditional approach to determining whether connections are expressed in HyperNEAT influences modularity, and provides an important clue to how an indirect encoding of network structure can be encouraged to evolve modularity.
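
The alteration in question replaces the usual magnitude threshold on the CPPN's weight output with a dedicated link-expression output (LEO) that decides expression separately. A toy sketch of that two-output scheme, with a hand-written CPPN standing in for an evolved one:

```python
import math

def cppn_outputs(x1, y1, x2, y2):
    # Hand-written stand-in for an evolved two-output CPPN: one output
    # is the connection weight, the other the link-expression value.
    # The Gaussian over the coordinate difference mirrors the locality
    # seed the LEO approach starts from.
    weight = math.sin(x1 * x2 + y1 * y2)
    expression = math.exp(-((x1 - x2) ** 2 + (y1 - y2) ** 2)) - 0.5
    return weight, expression

def query_connection(x1, y1, x2, y2):
    # A connection is expressed only when the expression output is
    # positive, rather than when the weight magnitude clears a threshold
    # as in standard HyperNEAT.
    weight, expression = cppn_outputs(x1, y1, x2, y2)
    return weight if expression > 0 else None

print(query_connection(0.1, 0.2, 0.2, 0.1))   # nearby pair: expressed
print(query_connection(-1.0, -1.0, 1.0, 1.0)) # distant pair: None
```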

Evolving Neural Networks through Augmenting Topologies

A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.
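
The complexification NEAT performs is easiest to see in its add-node mutation, which splits an existing connection while initially preserving the network's behavior. A self-contained sketch (the dict-based genome below is illustrative, not the representation of any particular NEAT library):

```python
import random

def add_node_mutation(genome, next_node_id, next_innovation):
    # Pick an enabled connection gene, disable it, and replace it with
    # two genes routed through a new node. Weights of 1.0 into the node
    # and the old weight out of it preserve behavior at first, so the
    # network can complexify without losing fitness. Innovation numbers
    # are the historical markings NEAT uses to align genes in crossover.
    old = random.choice([g for g in genome if g["enabled"]])
    old["enabled"] = False
    for src, dst, w in [(old["in"], next_node_id, 1.0),
                        (next_node_id, old["out"], old["weight"])]:
        genome.append({"in": src, "out": dst, "weight": w,
                       "enabled": True, "innovation": next_innovation})
        next_innovation += 1
    return next_node_id + 1, next_innovation

# Example: split the single connection of a 1-input/1-output network.
genome = [{"in": 0, "out": 1, "weight": 0.7,
           "enabled": True, "innovation": 0}]
add_node_mutation(genome, next_node_id=2, next_innovation=1)
print(genome)
```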

The sensitivity of HyperNEAT to different geometric representations of a problem

The results suggest that HyperNEAT practitioners can obtain good results even if they do not know how to geometrically represent a problem, and that further improvements are possible with a well-chosen geometric representation.

Evolving Static Representations for Task Transfer

The idea that transfer is most effective if the representation is designed to be the same even across different tasks is explored, and a bird's eye view (BEV) representation is introduced that can represent different tasks on the same two-dimensional map.

Evolving coordinated quadruped gaits with the HyperNEAT generative encoding

It is demonstrated that HyperNEAT, a new and promising generative encoding for evolving neural networks, can evolve quadruped gaits without an engineer manually decomposing the problem.