GP-RARS: evolving controllers for the Robot Auto Racing Simulator

Yehonatan Shichel and Moshe Sipper
Memetic Computing
We use evolutionary computation techniques to create real-time reactive controllers for a race-car simulation game: RARS (Robot Auto Racing Simulator). Using genetic programming to evolve driver controllers, we create highly generalized game-playing agents, able to outperform most human-crafted controllers and all machine-designed ones on a variety of game tracks. 
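The evolved controllers described above are genetic-programming expression trees that map instantaneous sensor readings to driving commands. As a minimal illustrative sketch — the function set, sensor names (`speed`, `curvature`, `dist_to_edge`), and example individual below are assumptions for illustration, not the actual RARS primitives used in the paper — a tree-based reactive controller can be evaluated and randomly initialised like this:

```python
import random

# Sketch of a GP-style reactive driver controller. The function set,
# the sensor names, and the hand-written individual are illustrative
# assumptions, not the primitives used in GP-RARS itself.

FUNCS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}
TERMINALS = ["speed", "curvature", "dist_to_edge"]

def evaluate(node, sensors):
    """Recursively evaluate an expression tree on one sensor snapshot."""
    if isinstance(node, tuple):          # internal node: (op, left, right)
        op, left, right = node
        return FUNCS[op](evaluate(left, sensors), evaluate(right, sensors))
    if isinstance(node, str):            # terminal: named sensor reading
        return sensors[node]
    return node                          # terminal: numeric constant

def random_tree(depth, rng):
    """Grow a random tree, the usual way GP initialises a population."""
    if depth == 0 or rng.random() < 0.3:
        if rng.random() < 0.5:
            return rng.choice(TERMINALS)
        return rng.uniform(-1.0, 1.0)
    op = rng.choice(sorted(FUNCS))
    return (op, random_tree(depth - 1, rng), random_tree(depth - 1, rng))

# A hand-written individual: steer harder as curvature grows with speed.
steer = ("mul", "curvature", ("add", "speed", 0.5))
print(evaluate(steer, {"speed": 1.5, "curvature": 0.2, "dist_to_edge": 3.0}))
# -> 0.4
```

Evolution then proceeds by scoring each tree on simulated laps and applying subtree crossover and mutation; because each tree is evaluated anew at every simulation tick, the resulting controller is purely reactive.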
5 Citations
Efficient Evolution of Modular Robot Control via Genetic Programming
Experimental results show that the mechanisms set forth contribute to a significant increase in the efficiency of the evolution of fast-moving and sensing Snakebots, as well as in the robustness of the final designs.
Let the Games Evolve
I survey my group’s results over the past six years within the game area, demonstrating continual success in evolving winning strategies for challenging games and puzzles, including: chess,
Incremental evolution of fast moving and sensing simulated snake-like robot with multiobjective GP and strongly-typed crossover
Experimental results show that the mechanisms set forth contribute to a significant increase in the efficiency of the evolution of fast-moving and sensing Snakebots, as well as in the robustness of the final designs.
HICMA: A human imitating cognitive modeling agent using statistical methods and evolutionary computation
  • M. Fayek, Osama S. Farag
  • Computer Science
    2014 IEEE Symposium on Computational Intelligence for Human-like Intelligence (CIHLI)
  • 2014
The Human Imitating Cognitive Modeling Agent (HICMA), a proposed updated version of Minsky's society of mind theory where society agents evaluate and evolve each other in a novel way, is introduced.
Moshe Sipper: Evolved to Win
  • T. Gosling
  • Business
    Genetic Programming and Evolvable Machines
  • 2012
The book provides a good overview of Professor Sipper’s work, a guide for those looking to use EC, and genetic programming in particular, within this realm, and is an interesting read both for people working in EC and those involved in games who are not already familiar with EC.
References
Evolving driving controllers using Genetic Programming
It is shown how Genetic Programming improved upon a manually crafted race-car driver (a proportional controller); the open race car simulator TORCS was used to evaluate the virtual drivers.
Evolving controllers for simulated car racing
The evolution of controllers for racing a simulated radio-controlled car around a track, modelled on a real physical track, showed that the only controller able to evolve good racing behaviour was one based on a neural network acting on egocentric inputs.
Learning to Race: Experiments with a Simulated Race Car
This work has implemented a reinforcement learning architecture as the reactive component of a two-layer control system for a simulated race car, and found that separating the layers expedited a gradual improvement in performance.
On-line neuroevolution applied to The Open Racing Car Simulator
This paper proposes an on-line neuroevolution approach to evolving non-player characters in The Open Racing Car Simulator (TORCS), a state-of-the-art open-source car racing simulator, and shows that the approach can effectively improve the performance achieved during the learning process.
Towards automatic personalised content creation for racing games
An evolvable track representation is devised, and a multiobjective evolutionary algorithm maximises the entertainment value of the track relative to a particular human player.
Optimising the Performance of a Formula One Car Using a Genetic Algorithm
The use of a genetic algorithm to optimize 66 setup parameters for a simulation of a Formula One car is described, and the resulting performance improvements are shown to surpass all other methods tested.
Controller for TORCS created by imitation
This paper is an initial approach to create a controller for the game TORCS by learning how another controller or humans play the game. We used data obtained from two controllers and from one human
Reinforcement Learning for Racecar Control
Results show that reinforcement learning can work within the Robot Automobile Racing Simulator and lay the foundations for building a more efficient and competitive agent.
MoNiF: a modular neuro-fuzzy controller for race car navigation
  • K. C. Ng, Ruggero Scorcioni, M. Trivedi, N. Lassiter
  • Computer Science
    Proceedings 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA'97. 'Towards New Computational Principles for Robotics and Automation'
  • 1997
It is shown that this modular control method, equipped with pretrained knowledge of only a few simple expert rules, learns much faster than an NNC without any a priori knowledge.
On the Origin of Environments by Means of Natural Selection
The field of adaptive robotics involves simulations and real-world implementations of robots that adapt to their environments. In this article, I introduce adaptive environmentics -- the flip side of