Competitive Coevolution through Evolutionary Complexification

Kenneth O. Stanley and Risto Miikkulainen
Two major goals in machine learning are the discovery and improvement of solutions to complex problems. In this paper, we argue that complexification, i.e., the incremental elaboration of solutions through the addition of new structure, achieves both these goals. We demonstrate the power of complexification through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures. NEAT is applied to an open-ended coevolutionary robot duel domain… 

Efficient evolution of neural networks through complexification

This dissertation presents the NeuroEvolution of Augmenting Topologies (NEAT) method, which makes the search for complex solutions feasible; NEAT is first shown to be faster than traditional approaches on a challenging reinforcement learning benchmark task, and is then used to successfully discover complex behavior in three challenging domains.

Coevolution of neural networks using a layered pareto archive

A technique is developed that interfaces the LAPCA algorithm with NeuroEvolution of Augmenting Topologies (NEAT), a method for evolving neural networks with demonstrated efficiency in game-playing domains; combining NEAT and LAPCA is found to be an effective approach to coevolution.

Creating intelligent agents through shaping of coevolution

This paper shows how shaping can be applied to coevolution to guide it towards more effective behaviors, thus enhancing the power of coevolution in competitive environments.

A Comparative Analysis of Simplification and Complexification in the Evolution of Neural Network Topologies

A comparative study of these dynamics in the domains of XOR and Tic-Tac-Toe, using NEAT (NeuroEvolution of Augmenting Topologies) as the starting point, shows that algorithms employing both complexification and simplification dynamics search more efficiently and produce more compact solutions.

Coevolution of Multiagent Systems using NEAT

This experiment attempts to use NeuroEvolution of Augmenting Topologies (NEAT) to create a multiagent system that coevolves cooperative learning agents with a learning task in the form of three…

Experiments on Neuroevolution and Online Weight Adaptation in Complex Environments

A new approach to online weight adaptation in neuroevolved artificial neural networks is presented, along with the results of several experiments carried out in a racing-simulation environment.

Evolving Reusable Neural Modules

A coevolutionary modular neuroevolution method, Modular NeuroEvolution of Augmenting Topologies (Modular NEAT), is developed that automatically performs this decomposition during evolution, making evolutionary search more efficient.

Coevolution of intelligent agents using cartesian genetic programming

The importance of the genetic transfer of learned experience and of lifetime learning is demonstrated, along with the complex dynamics produced by the interaction (coevolution) between two intelligent agents.

An Integrated Neuroevolutionary Approach to Reactive Control and High-Level Strategy

Experiments in this paper show that relatively unrestricted algorithms (e.g., NEAT) still yield the best performance on problems requiring reactive control, thus laying the groundwork for learning algorithms that can be applied to a wide variety of problems.

Evolving neural networks

Methods that evolve fixed-topology networks, network topologies, and network construction processes, ways of combining traditional neural network learning algorithms with evolutionary methods, and applications of neuroevolution to game playing, robot control, resource optimization, and cognitive science are reviewed.

Evolving Neural Networks through Augmenting Topologies

A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.

Efficient evolution of neural network topologies

A method, NeuroEvolution of Augmenting Topologies (NEAT), is presented that outperforms the best fixed-topology methods on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously, making it possible to evolve increasingly complex solutions over time.

Efficient Reinforcement Learning Through Evolving Neural Network Topologies

NEAT shows that when structure is evolved with a principled method of crossover, by protecting structural innovation, and through incremental growth from minimal structure, learning is significantly faster and stronger than with the best fixed-topology methods.
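The principles this abstract names (incremental growth from minimal structure, and historical markings that let crossover align structure) can be sketched in a few lines. This is an illustrative toy, not NEAT's actual implementation; all function and field names below are invented for the sketch.

```python
import random

# Toy sketch of NEAT-style complexification: genomes start minimal (inputs
# wired directly to outputs) and grow via structural mutation. Each distinct
# connection gets a global "innovation number" so crossover can later align
# genes by historical origin.

_innovation_db = {}  # (src, dst) -> innovation number, shared across genomes

def innovation(src, dst):
    """Assign a stable historical marking to each distinct connection."""
    if (src, dst) not in _innovation_db:
        _innovation_db[(src, dst)] = len(_innovation_db) + 1
    return _innovation_db[(src, dst)]

def minimal_genome(n_in, n_out):
    """Fully connect inputs to outputs -- the minimal starting structure."""
    genes = {}  # innovation number -> (src, dst, weight, enabled)
    for i in range(n_in):
        for o in range(n_in, n_in + n_out):
            genes[innovation(i, o)] = (i, o, random.uniform(-1, 1), True)
    return {"genes": genes, "next_node": n_in + n_out}

def add_node(genome):
    """Split an enabled connection A->B into A->H->B (complexification)."""
    innov, (src, dst, w, _) = random.choice(
        [(k, g) for k, g in genome["genes"].items() if g[3]])
    h = genome["next_node"]
    genome["next_node"] += 1
    genome["genes"][innov] = (src, dst, w, False)          # disable old gene
    genome["genes"][innovation(src, h)] = (src, h, 1.0, True)
    genome["genes"][innovation(h, dst)] = (h, dst, w, True)
    return genome
```

Starting from `minimal_genome(2, 1)` (two genes), one `add_node` call yields four genes, three of them enabled, mirroring the "incremental growth from minimal structure" the abstract describes.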

Co-Evolutionary Learning by Automatic Modularisation with Speciation

It is demonstrated why co-evolution can sometimes fail (and fail spectacularly) to cause the desired escalation of expertise, and how a simple gating algorithm for combining these different strategies into a single high-level strategy can improve the generalisation ability of co-evolutionary learning.

Pareto Optimality in Coevolutionary Learning

A novel coevolutionary algorithm is developed based upon the concept of Pareto optimality, to allow agents to follow gradient and create gradient for others to follow, such that coevolutionary learning succeeds.
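The core idea of Pareto coevolution, treating each opponent as a separate objective, can be sketched as a dominance test over per-opponent outcome vectors. This is a hypothetical illustration of the concept, not the paper's algorithm; the names below are invented.

```python
# Each learner's outcomes against every opponent form a score vector; a
# learner survives only if no other learner Pareto-dominates it.

def dominates(a_scores, b_scores):
    """True if a does at least as well on every objective and better on one."""
    return (all(a >= b for a, b in zip(a_scores, b_scores))
            and any(a > b for a, b in zip(a_scores, b_scores)))

def pareto_front(outcomes):
    """Keep learners not dominated by any other learner.

    outcomes: dict mapping learner -> tuple of scores, one per opponent."""
    return [p for p, s in outcomes.items()
            if not any(dominates(o, s)
                       for q, o in outcomes.items() if q != p)]
```

Keeping the whole front, rather than a single best-scoring learner, preserves specialists that beat different opponents and so maintains the gradient the abstract refers to.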

Explorations in Evolutionary Robotics

Results demonstrate that robust visually guided control systems evolve from evaluation functions that do not explicitly require monitoring visual input, and propose an automatic design process involving artificial evolution, wherein the basic building blocks for evolving cognitive architectures are noise-tolerant dynamical neural networks.

God Save the Red Queen! Competition in Co-Evolutionary Robotics

Without any effort in fitness design, a set of interesting behaviors emerged in relatively short time, such as obstacle avoidance, straight navigation, visual tracking, object discrimination (robot vs. wall), object following, and others.

Coevolutionary search among adversaries

New methods are described that overcome these flaws and make coevolution more efficient, solving several game-learning test problems that cannot be efficiently solved without them.

An evolutionary algorithm that constructs recurrent neural networks

It is argued that genetic algorithms are inappropriate for network acquisition and an evolutionary program is described, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks.

New Methods for Competitive Coevolution

This work uses the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution: competitive fitness sharing changes the way fitness is measured, shared sampling provides a method for selecting a strong, diverse set of parasites, and the hall of fame encourages arms races by saving good individuals from prior generations.
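The hall-of-fame technique mentioned above can be illustrated with a short sketch: archive each generation's champion and score candidates against archived champions as well as current opponents, so strategies that would lose to past champions cannot re-emerge. This is an invented illustration of the idea, not the paper's code.

```python
# Hall-of-fame sketch: fitness counts wins against current opponents plus all
# archived champions, discouraging cyclic forgetting in coevolution.

def evaluate(candidate, opponents, hall_of_fame, play):
    """Fitness = number of wins; play(a, b) returns True if a beats b."""
    foes = list(opponents) + list(hall_of_fame)
    return sum(1 for foe in foes if play(candidate, foe))

def coevolve_step(population, opponents, hall_of_fame, play):
    """Score a population, then archive its champion for future generations."""
    scored = [(evaluate(p, opponents, hall_of_fame, play), p)
              for p in population]
    best_score, champion = max(scored, key=lambda s: s[0])
    hall_of_fame.append(champion)
    return champion, best_score
```

With any user-supplied `play` predicate, each call appends the current champion to `hall_of_fame`, so later generations are always tested against it.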