Generative encoding for multiagent learning

@inproceedings{DAmbrosio2008GenerativeEF,
  title={Generative encoding for multiagent learning},
  author={David B. D'Ambrosio and Kenneth O. Stanley},
  booktitle={Annual Conference on Genetic and Evolutionary Computation},
  year={2008}
}
This paper argues that multiagent learning is a potential "killer application" for generative and developmental systems (GDS) because key challenges in learning to coordinate a team of agents are naturally addressed through indirect encodings and information reuse. To establish the promise of this capability, the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) generative approach to evolving neurocontrollers learns a set of coordinated policies encoded by a…
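The indirect encoding behind this approach can be sketched in miniature. In HyperNEAT, an evolved compositional pattern-producing network (CPPN) is queried with the coordinates of neuron pairs in a fixed substrate to produce connection weights; in the multiagent variant, an extra input for the agent's position on the team lets a single CPPN generate a distinct but related policy per agent. The `cppn` function and substrate layout below are illustrative stand-ins, not the paper's actual evolved network:

```python
import math

def cppn(x1, y1, x2, y2, agent_pos):
    """Toy stand-in for an evolved CPPN: maps the coordinates of a source
    neuron (x1, y1), a target neuron (x2, y2), and the agent's position
    on the team to a connection weight."""
    return math.sin(2.0 * x1 - x2 + agent_pos) * math.cos(y1 + y2)

def generate_policy(agent_pos, inputs, outputs, threshold=0.2):
    """Query the CPPN for every input->output pair; connections whose
    magnitude exceeds the threshold are expressed in the agent's network."""
    weights = {}
    for (x1, y1) in inputs:
        for (x2, y2) in outputs:
            w = cppn(x1, y1, x2, y2, agent_pos)
            if abs(w) > threshold:
                weights[((x1, y1), (x2, y2))] = w
    return weights

# Substrate: three sensors feeding two motor outputs, laid out on [-1, 1].
sensors = [(-1.0, -1.0), (0.0, -1.0), (1.0, -1.0)]
motors = [(-0.5, 1.0), (0.5, 1.0)]

# One CPPN yields a related-but-distinct policy for each agent position,
# which is the sense in which the team is "a pattern of policies".
team = [generate_policy(pos, sensors, motors) for pos in (-1.0, 0.0, 1.0)]
```

Because all policies are sampled from the same underlying pattern, structure discovered for one agent is automatically reused across the team, which is the information-reuse argument the abstract makes.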

Multiagent Learning Through Indirect Encoding

A new approach to multiagent learning, called multiagent HyperNEAT, that represents the team as a pattern of policies rather than as individual agents; each agent's geometric position within the team tends to dictate its role, a relationship called the policy geometry.

Evolving policy geometry for scalable multiagent learning

This paper presents an alternative evolutionary approach to multiagent learning called multiagent HyperNEAT that encodes the team as a pattern of related policies rather than as a set of individual agents, and introduces policy geometry to describe the relationship between each agent's policy and its canonical geometric position within the team.

Scalable multiagent learning through indirect encoding of policy geometry

An alternative approach to multiagent learning called multiagent HyperNEAT is presented that represents the team as a pattern of policies rather than as a set of individual agents, and is compared to a traditional learning method, multiagent Sarsa(λ), in a predator–prey domain, where it demonstrates its ability to train large teams.
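The baseline in that comparison, Sarsa(λ), is the standard temporal-difference method with eligibility traces. A minimal single-agent, tabular sketch (variable names and the tiny example transition are illustrative, not drawn from the paper's experiments):

```python
def sarsa_lambda_step(Q, e, s, a, r, s2, a2, alpha=0.1, gamma=0.9, lam=0.8):
    """One tabular Sarsa(lambda) update with accumulating eligibility
    traces. Q and e map (state, action) pairs to values."""
    # TD error for the observed transition (s, a) -> r, (s2, a2).
    delta = r + gamma * Q.get((s2, a2), 0.0) - Q.get((s, a), 0.0)
    e[(s, a)] = e.get((s, a), 0.0) + 1.0  # accumulate trace on visit
    for key in list(e):
        # Every recently visited pair is credited in proportion to its trace.
        Q[key] = Q.get(key, 0.0) + alpha * delta * e[key]
        e[key] *= gamma * lam  # decay all traces
    return Q, e

Q, e = {}, {}
Q, e = sarsa_lambda_step(Q, e, s=0, a="N", r=1.0, s2=1, a2="N")
```

Each agent carrying its own table like this is what makes the direct approach hard to scale to large teams, in contrast to the shared indirect encoding above.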

Task switching in multirobot learning through indirect encoding

Situational policy geometry is introduced, which allows each agent to encode multiple policies that can be switched depending on the agent's state; the approach is demonstrated both in simulation and on real Khepera III robots in a patrol-and-return task.

Learning From Geometry In Learning For Tactical And Strategic Decision Domains

This dissertation presents a new NE algorithm called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), based on a novel indirect encoding of ANNs, designed to work in tactical and strategic decision domains.

Neuro-evolution behavior transfer for collective behavior tasks

Experimental results indicate that a hybrid of objective-based search and behavioral diversity maintenance in evolutionary controller design, coupled with behavior transfer, yields evolved behaviors of significantly higher quality across increasingly complex multi-agent tasks.

Evolving multimodal behavior through modular multiobjective neuroevolution

This dissertation expands on existing neuroevolution methods, specifically NEAT (Neuro-Evolution of Augmenting Topologies [7]), to make the discovery of multiple modes of behavior possible and proposes four extensions: (1) multiobjective evolution, (2) sensors that are split up according to context, (3) modular neural network structures, and (4) fitness-based shaping.

Evolving Neural Networks with HyperNEAT and Online Training

This research focuses on augmenting HyperNEAT for use in agent controllers through strategic application of online learning via supervised backpropagation, and several such methods are explored.

Evolution of neural networks

This tutorial will review neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, ways of combining gradient-based training with evolutionary methods, and applications of these techniques in control, robotics, artificial life, games, image processing, and language.

Evolving Multimodal Behavior

This proposed dissertation develops a method for discovering multimodal behavior via neuroevolution and further improves the evolution of multimodal behavior in the following ways: methods of overcoming stagnation via behavioral diversity enhancement will be developed, the new-mode mutation will be improved, and different methods of arbitrating between the multiple modes will be evaluated.
...

References

Showing 1–10 of 30 references

Cooperative Multi-Agent Learning: The State of the Art

This survey attempts to draw from multi-agent learning work in a spectrum of areas, including RL, evolutionary computation, game theory, complex systems, agent modeling, and robotics, and finds that this broad view leads to a division of the work into two categories.

Coevolution of Role-Based Cooperation in Multiagent Systems

First, the approach is shown to be more efficient than evolving a single central controller for all agents, and second, cooperation is found to be most efficient through stigmergy, i.e., through role-based responses to the environment, rather than communication between the agents.

Competitive Coevolution through Evolutionary Complexification

It is argued that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals and is demonstrated through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.

A Hypercube-Based Indirect Encoding for Evolving Large-Scale Neural Networks

The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.

Evolving Neural Networks through Augmenting Topologies

A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.
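The complexification that NEAT performs can be illustrated with a minimal sketch of its add-node mutation: an enabled connection is split by disabling it and inserting a new node with two new connections, each tagged with a globally unique innovation number so that genomes stay alignable for crossover. The genome layout and names below are simplifications for illustration:

```python
import random

class Counter:
    """Global innovation counter: each new structural gene gets a unique id."""
    def __init__(self, start=0):
        self.n = start
    def next(self):
        self.n += 1
        return self.n

def add_node_mutation(connections, next_node_id, counter, rng):
    """NEAT-style add-node mutation: split a random enabled connection.
    `connections` maps innovation number -> dict with in/out/weight/enabled."""
    enabled = [k for k, c in connections.items() if c["enabled"]]
    if not enabled:
        return next_node_id
    old = connections[rng.choice(enabled)]
    old["enabled"] = False  # the split connection is disabled, not removed
    new_node = next_node_id
    # in -> new node carries weight 1.0; new node -> out keeps the old
    # weight, so the mutation initially disturbs behavior as little as
    # possible (the NEAT convention).
    connections[counter.next()] = {"in": old["in"], "out": new_node,
                                   "weight": 1.0, "enabled": True}
    connections[counter.next()] = {"in": new_node, "out": old["out"],
                                   "weight": old["weight"], "enabled": True}
    return next_node_id + 1

# A minimal genome: one connection from input node 0 to output node 1.
counter = Counter(start=1)  # innovation 1 is taken by the initial gene
genome = {1: {"in": 0, "out": 1, "weight": 0.7, "enabled": True}}
next_id = add_node_mutation(genome, next_node_id=2, counter=counter,
                            rng=random.Random(0))
```

Starting minimally and only adding structure this way is how NEAT optimizes and complexifies solutions simultaneously.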

Multiagent Systems: A Survey from a Machine Learning Perspective

This survey of MAS is intended to serve as an introduction to the field and as an organizational framework, and highlights how multiagent systems can be and have been used to build complex systems.

Improving Coevolutionary Search for Optimal Multiagent Behaviors

This paper examines the idea of modifying traditional coevolution, biasing it to search for maximal rewards, and concludes that biasing can help coevolution find better results in some multiagent problem domains.

Neuroevolution for adaptive teams

It is shown how adaptive teams of agents (ATAs) can be evolved to solve the problem posed by a simple strategy game, and their application to richer environments is discussed.

A novel generative encoding for exploiting neural network sensor and output geometry

A method for evolving connective CPPNs called Hypercube-based Neuroevolution of Augmenting Topologies (HyperNEAT) discovers sensible repeating motifs that take advantage of two different placement schemes, demonstrating the utility of such an approach.