Neuroevolution with CMA-ES for Real-time Gain Tuning of a Car-like Robot Controller

@inproceedings{Hill2019NeuroevolutionWC,
  title={Neuroevolution with CMA-ES for Real-time Gain Tuning of a Car-like Robot Controller},
  author={Ashley Hill and Eric Lucet and Roland Lenain},
  booktitle={International Conference on Informatics in Control, Automation and Robotics},
  year={2019}
}
This paper proposes a method for dynamically varying the gains of a mobile robot controller, taking into account not only errors with respect to the reference trajectory but also the uncertainty in the localisation. To do this, the covariance matrix of a state observer is used to indicate the precision of the perception. CMA-ES, an evolutionary algorithm, is used to train a neural network that adapts the robot's behaviour in real time. Using a car-like vehicle model in simulation…
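The mechanism described in the abstract can be sketched in miniature: a tiny network maps the tracking error and the trace of the observer covariance to a positive gain, and its weights are tuned by a simple (1+λ) evolution strategy standing in for full CMA-ES. The toy 1-D plant, the softplus output, and all names below are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(0)

def gain_net(weights, error, cov_trace):
    # One-neuron "network": inputs are the tracking error magnitude and the
    # trace of the observer covariance; softplus keeps the gain positive.
    z = weights[0] * abs(error) + weights[1] * cov_trace + weights[2]
    return math.log1p(math.exp(z)) if z < 30 else z

def rollout_cost(weights, steps=50):
    # Toy 1-D tracking task: the state chases a unit reference while the
    # observer covariance shrinks as the localisation improves.
    x, ref, cov, cost = 0.0, 1.0, 0.5, 0.0
    for _ in range(steps):
        error = ref - x
        k = gain_net(weights, error, cov)
        x += 0.1 * k * error          # proportional correction with tuned gain
        cov = max(0.05, cov * 0.95)
        if abs(x) > 1e6:
            return 1e9                # diverged rollout: heavily penalised
        cost += error * error
    return cost

# (1+lambda)-style evolution strategy as a stand-in for CMA-ES.
best = [0.0, 0.0, 0.0]
best_cost = rollout_cost(best)
sigma = 0.5
for _ in range(200):
    child = [w + sigma * random.gauss(0.0, 1.0) for w in best]
    c = rollout_cost(child)
    if c < best_cost:
        best, best_cost = child, c

print(best_cost < rollout_cost([0.0, 0.0, 0.0]))  # tuned weights beat the default
```

Unstable gain candidates blow up the rollout and are penalised, so selection naturally keeps the tuned gains in the stable region.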

Citations

Online gain setting method for path tracking using CMA-ES: Application to off-road mobile robot control

This paper proposes a new approach for online adaptation of control law gains, through the use of neural networks and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm, in order to…

A New Neural Network Feature Importance Method: Application to Mobile Robots Controllers Gain Tuning

This paper proposes a new feature importance method for neural networks and, subsequently, a methodology that uses this novel feature importance to determine useful sensor information in high…

Cooperative Avoidance Control with Relative Velocity Information and Collision Sector Functions for Car-like Robots*

Relative velocity information and collision sector functions between the host robot and its neighbors are introduced to adjust the gain of avoidance controllers in real time, such that the controllers create smoother maneuvers, thereby further improving the dynamic performance of the system.

References

Showing 1-10 of 31 references

Sim-to-Real Transfer with Neural-Augmented Robot Simulation

This work introduces a method for training a recurrent neural network on the differences between simulated and real robot trajectories and then using this model to augment the simulator, which can be used to learn control policies that transfer significantly better to real environments than policies learned on existing simulators.

Incremental Q-learning strategy for adaptive PID control of mobile robots

Maintenance of robot's equilibrium in a noisy environment with fuzzy controller

  • Laleh Jalali, H. Ghafarian
  • Computer Science
    2009 IEEE International Conference on Intelligent Computing and Intelligent Systems
  • 2009
A fuzzy logic controller for a mobile robot is presented that can handle external forces and maintain the robot's equilibrium in a noisy environment; its design simplicity, ease of implementation, and robustness properties are presented.

Robustness of adaptive control of robots: Theory and experiment

The robustness of adaptive control of rigid robots and methods for improving robustness in the face of unmodeled dynamics and external disturbances are discussed.

Sim-to-Real: Learning Agile Locomotion For Quadruped Robots

This system can learn quadruped locomotion from scratch using simple reward signals, and users can provide an open-loop reference to guide the learning process when more control over the learned gait is needed.

Support vector machine-based two-wheeled mobile robot motion control in a noisy environment

In this paper, a support vector machine (SVM)-based control scheme for a two-wheeled mobile robot is proposed in a noisy environment. The noisy environment is defined as the measured data with…

Using CMA-ES for tuning coupled PID controllers within models of combustion engines

This paper formulates the problem as a black-box optimization problem and finds and tunes an appropriate optimization algorithm: Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with a bi-population restart strategy, elitist parent selection, and active covariance matrix adaptation.
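The black-box formulation can be illustrated with a minimal sketch, substituting a plain (mu, lambda) evolution strategy for BIPOP-CMA-ES and a toy first-order plant for the engine model; the plant, initial gains, and loop structure below are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(1)

def step_cost(kp, ki, dt=0.05, steps=200):
    # Integrated squared error of a PI loop around a toy first-order plant
    # dx/dt = -x + u, tracking a unit step reference.
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ
        x += dt * (-x + u)
        if abs(x) > 1e6:
            return 1e9            # unstable gains: heavily penalised
        cost += e * e * dt
    return cost

# (mu, lambda) evolution strategy over [kp, ki]; a plain stand-in for
# BIPOP-CMA-ES (no covariance adaptation, just a decaying step size).
mu, lam, sigma = 4, 16, 0.5
parents = [[1.0, 0.1] for _ in range(mu)]
best = [1.0, 0.1]
for _ in range(60):
    children = [[g + sigma * random.gauss(0.0, 1.0) for g in random.choice(parents)]
                for _ in range(lam)]
    children.sort(key=lambda g: step_cost(*g))
    parents = children[:mu]
    if step_cost(*children[0]) < step_cost(*best):
        best = children[0]
    sigma *= 0.95

print("tuned gains:", [round(g, 2) for g in best])
```

The cost function only needs to be evaluable, not differentiable, which is what makes the black-box formulation attractive for coupled controllers.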

Continuous control with deep reinforcement learning

This work presents an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces, and demonstrates that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.

Tuning PI controllers for integrator/dead time processes

Chien and Fruehauf proposed an internal model control (IMC) approach for selecting the tuning constants of a PI controller in a process consisting of a pure integrator and a dead time. The only…

Robust sideslip angles observer for accurate off-road path tracking control

A control strategy to achieve highly accurate path tracking in off-road conditions is proposed, based on adaptive and predictive techniques to account for sliding effects and actuator properties.