odNEAT: An Algorithm for Decentralised Online Evolution of Robotic Controllers

@article{Silva2015odNEATAA,
  title={odNEAT: An Algorithm for Decentralised Online Evolution of Robotic Controllers},
  author={Fernando Silva and Paulo Urbano and L. Correia and Anders Lyhne Christensen},
  journal={Evolutionary Computation},
  year={2015},
  volume={23},
  pages={421--449}
}
Online evolution gives robots the capacity to learn new tasks and to adapt to changing environmental conditions during task execution. Previous approaches to online evolution of neural controllers are typically limited to the optimisation of weights in networks with a prespecified, fixed topology. In this article, we propose a novel approach to online learning in groups of autonomous robots called odNEAT. odNEAT is a distributed and decentralised neuroevolution algorithm that evolves both… 
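As a rough illustration of the mechanism described in the abstract, the sketch below shows a decentralised online evolutionary loop in the spirit of odNEAT: each robot keeps a small onboard population, gauges its active controller through a virtual energy level, replaces the controller when that energy is exhausted, and exchanges genomes with nearby robots. The class names, parameters, and the weights-only mutation are hypothetical simplifications, not the authors' implementation; the actual algorithm also evolves network topology as in NEAT.

```python
import random

class Genome:
    def __init__(self, weights, fitness=0.0):
        self.weights = list(weights)    # stand-in for a full NEAT genome (weights + topology)
        self.fitness = fitness

    def mutate(self):
        # weight perturbation only; the real algorithm also mutates topology
        child = Genome(self.weights, self.fitness)
        i = random.randrange(len(child.weights))
        child.weights[i] += random.gauss(0.0, 0.1)
        return child

class Robot:
    def __init__(self, genome, capacity=20):
        self.population = [genome]      # internal, onboard population of genomes
        self.capacity = capacity        # maximum population size
        self.active = genome            # controller currently executing the task
        self.energy = 100.0             # virtual energy: proxy for task performance

    def step(self, reward):
        """One control cycle: update the virtual energy and replace a failed controller."""
        self.energy += reward - 1.0     # reward for task progress, constant decay per step
        if self.energy <= 0.0:          # the active controller has failed
            self.active.fitness = self.energy
            self.receive(self.active)   # keep the failed genome in the local population
            parent = max(self.population, key=lambda g: g.fitness)
            self.active = parent.mutate()
            self.energy = 100.0

    def receive(self, genome):
        """Genomes broadcast by nearby robots join the local population."""
        self.population.append(genome)
        self.population.sort(key=lambda g: g.fitness, reverse=True)
        del self.population[self.capacity:]

    def broadcast(self, neighbours):
        """Share the active genome with robots currently in communication range."""
        for other in neighbours:
            other.receive(self.active)
```

In a deployment, each robot would call step() from its control loop and broadcast() whenever neighbours come within range; genomes received from other robots compete in the local population on equal terms.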
Hyper-Learning Algorithms for Online Evolution of Robot Controllers
TLDR
This study conducts a comprehensive assessment of a novel approach called online hyper-evolution (OHE), which facilitates the evolution of controllers with high performance, and can increase effectiveness at different stages of evolution by combining the benefits of multiple algorithms over time.
Evolutionary online behaviour learning and adaptation in real robots
TLDR
Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality.
Evolutionary online behaviour learning and adaptation in robotic systems
TLDR
The main goal of this thesis is to address some of the fundamental issues associated with online evolution to bring it closer to widespread adoption by studying if and how to accelerate and increase the performance of online evolution.
Online Hyper-evolution of Controllers in Multirobot Systems
TLDR
The study shows that OHE is an effective new paradigm to the synthesis of controllers for robots by combining the benefits of multiple algorithms over time.
Engineering Online Evolution of Robot Behaviour: (Doctoral Consortium)
TLDR
This research studies how to accelerate and scale online evolution to more complex tasks while minimising the amount of human intervention, in order to enable real-world multirobot systems that can effectively learn new behaviours and adapt online to take on dynamic tasks in a timely manner.
Leveraging Online Racing and Population Cloning in Evolutionary Multirobot Systems
TLDR
Two novel approaches to accelerate online evolution in multirobot systems are introduced: a racing technique to cut short the evaluation of poor controllers based on the task performance of past controllers, and a population cloning technique that enables individual robots to transmit an internal set of high-performing controllers to robots nearby.
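As a hedged sketch of the racing idea (not the paper's implementation), the function below cuts an evaluation short when a controller's partial performance falls clearly below what past controllers achieved at the same evaluation step; the statistics used and the confidence threshold are illustrative assumptions.

```python
def race(partial_scores, history, confidence=1.0):
    """Return True if the current evaluation should be cut short.

    partial_scores: per-step performance of the controller evaluated so far
    history: list of per-step performance traces from previously evaluated controllers
    """
    if not partial_scores:
        return False
    t = len(partial_scores) - 1
    past_at_t = [trace[t] for trace in history if len(trace) > t]
    if not past_at_t:
        return False                              # nothing to compare against yet
    mean = sum(past_at_t) / len(past_at_t)
    std = (sum((x - mean) ** 2 for x in past_at_t) / len(past_at_t)) ** 0.5
    # stop early if current performance is well below past controllers at this step
    return partial_scores[t] < mean - confidence * std
```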
Decentralized Innovation Marking for Neural Controllers in Embodied Evolution
TLDR
This paper proposes a novel innovation marking method for Neuro-Evolution of Augmenting Topologies in Embodied Evolutionary Robotics that is inspired by event-dating algorithms based on logical clocks, as used in distributed systems where clock synchronization is not possible.
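To make the logical-clock idea concrete, the fragment below sketches how a robot could assign innovation marks to new structural genes without a global counter, in a Lamport-clock style: each robot stamps local innovations with (counter, robot_id) and advances its counter whenever it observes a higher mark in a received genome. The class and method names are illustrative assumptions, not the method proposed in the paper.

```python
class InnovationClock:
    """Per-robot logical clock for stamping new structural genes."""

    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.counter = 0

    def new_innovation(self):
        # a locally created gene gets the mark (counter, robot_id),
        # unique without any global, synchronised counter
        self.counter += 1
        return (self.counter, self.robot_id)

    def observe(self, mark):
        # on receiving a genome, advance the clock past any mark it carries,
        # so later local innovations are ordered after everything already seen
        remote_counter, _ = mark
        self.counter = max(self.counter, remote_counter)
```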
R-HybrID: Evolution of Agent Controllers with a Hybridisation of Indirect and Direct Encodings
TLDR
The results show that R-HybrID consistently outperforms three state-of-the-art neuroevolution algorithms, and effectively evolves complex controllers and behaviours.
Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots
TLDR
This paper demonstrates for the first time a swarm robotics system with evolved control successfully operating in a real and uncontrolled environment and validates that the evolved controllers display key properties of swarm intelligence-based control, namely scalability, flexibility, and robustness on the real swarm.
Open Issues in Evolutionary Robotics
TLDR
The benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware are analyzed and the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks is addressed.
...

References

SHOWING 1-10 OF 49 REFERENCES
odNEAT: An Algorithm for Distributed Online, Onboard Evolution of Robot Behaviours
TLDR
This work proposes and evaluates a novel approach to online distributed evolution of neural controllers called odNEAT, a completely distributed evolutionary algorithm for online learning in groups of embodied agents such as robots, which approximates the performance of rtNEAT.
Speeding Up Online Evolution of Robotic Controllers with Macro-neurons
TLDR
This paper shows that evolution is able to progressively complexify controllers by using behavioural building blocks as a substrate, and that macro-neurons enable a significant reduction in adaptation time and the synthesis of high-performing solutions.
Online Evolution in Dynamic Environments using Neural Networks in Autonomous Robots
TLDR
This work investigates an online evolutionary process in simulated swarm robots using recurrent neural networks as controllers and presents a distributed online evolutionary algorithm that uses structural evolution and adaptive fitness to cope with dynamic environments.
An On-Line On-Board Distributed Algorithm for Evolutionary Robotics
TLDR
This work proposes an EvAg-based on-board evolutionary algorithm in which controllers are exchanged among robots that evolve simultaneously, and compares it with the (μ+1) on-line algorithm, which implements evolutionary adaptation inside a single robot.
Embedded Evolutionary Robotics: The (1+1)-Restart-Online Adaptation Algorithm
TLDR
This paper deals with online onboard behavior optimization for an autonomous mobile robot in the scope of the European FP7 Symbrion Project, and extends the (1+1)-online algorithm into a variant that converges faster and provides a richer set of relevant controllers than the previous implementation.
Exploratory analysis of an on-line evolutionary algorithm in simulated robots
TLDR
This paper presents an evolutionary algorithm belonging to the first category of on-line evolution for developing robot controllers, and uses the Bonesa parameter tuning method to explore its parameter space, showing that it seems preferable to try many alternative solutions and spend little effort on refining possibly faulty assessments.
Embodied, On-line, On-board Evolution for Autonomous Robotics
TLDR
This work elaborates on possible evolutionary approaches to this kind of application, positions them on a general feature map, and experimentally tests some of these set-ups to assess their feasibility.
Evolving Dynamical Neural Networks for Adaptive Behavior
TLDR
It is demonstrated that continuous-time recurrent neural networks are a viable mechanism for adaptive agent control and that the genetic algorithm can be used to evolve effective neural controllers.
Efficient evolution of neural networks through complexification
TLDR
This dissertation presents the NeuroEvolution of Augmenting Topologies (NEAT) method, which makes the search for complex solutions feasible; it is first shown to be faster than traditional approaches on a challenging reinforcement learning benchmark task, and then used to successfully discover complex behavior in three challenging domains.
On-line evolution of robot controllers by an encapsulated evolution strategy
TLDR
The results show that longer evaluation times greatly benefit the quality of controllers as well as the stability of behaviour and the speed of adaptation, and that the mutation step-size σ proves to be of overriding importance for finding good solutions.
...