Learn More
The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. But scaling NE to large networks (i.e., tens of thousands of weights) is …
A major limitation in applying evolution strategies to black-box optimization is the possibility of convergence to bad local optima. Many techniques address this problem, mostly by restarting the search. However, deciding on the new start location is nontrivial, since neither a good location nor a good scale for sampling a random restart position is …
A new model of Genetic Programming with a variable-size population is presented in this paper and applied to the reconstruction of target functions in dynamic environments (i.e., problems where the target functions change over time). The suitability of this model is tested on a set of benchmarks based on well-known symbolic regression problems. Experimental …
The TORCS racing simulator has become a standard testbed used in many recent reinforcement learning competitions, where an agent must learn to drive a car around a track using a small set of task-specific features. In this paper, large recurrent neural networks (with over 1 million weights) are evolved to solve a much more challenging version of the task …