Neuroevolution with CMA-ES for Real-time Gain Tuning of a Car-like Robot Controller
@inproceedings{Hill2019NeuroevolutionWC,
  title     = {Neuroevolution with CMA-ES for Real-time Gain Tuning of a Car-like Robot Controller},
  author    = {Ashley Hill and Eric Lucet and Roland Lenain},
  booktitle = {International Conference on Informatics in Control, Automation and Robotics},
  year      = {2019}
}
This paper proposes a method for dynamically varying the gains of a mobile robot controller that takes into account not only the error with respect to the reference trajectory but also the uncertainty in the localisation. To do this, the covariance matrix of a state observer is used as an indicator of perception precision. CMA-ES, an evolutionary algorithm, is used to train a neural network capable of adapting the robot's behaviour in real time. Using a car-like vehicle model in simulation…
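The pipeline described in the abstract (tracking errors plus observer covariance fed to a neural network whose weights are optimised by CMA-ES) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the network sizes, the toy episode cost standing in for the car-like vehicle simulation, and the use of the pycma package are not taken from the paper.

```python
# Minimal sketch, not the authors' implementation: a small feed-forward
# network maps tracking errors and state-observer covariance terms to
# positive controller gains, and CMA-ES optimises the flattened network
# weights against an episodic cost. The cost below is a toy stand-in for
# the paper's car-like vehicle simulation.
import numpy as np
import cma  # pycma: pip install cma

N_IN, N_HID, N_OUT = 6, 16, 2   # 4 error terms + 2 covariance terms -> 2 gains (illustrative sizes)

def unpack(theta):
    """Slice a flat CMA-ES candidate vector into network weights."""
    theta = np.asarray(theta)
    i = N_IN * N_HID
    W1 = theta[:i].reshape(N_HID, N_IN)
    b1 = theta[i:i + N_HID]
    j = i + N_HID
    W2 = theta[j:j + N_OUT * N_HID].reshape(N_OUT, N_HID)
    b2 = theta[j + N_OUT * N_HID:]
    return W1, b1, W2, b2

def gain_policy(theta, errors, cov_terms):
    """Map tracking errors and observer covariance entries to strictly positive gains."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(W1 @ np.concatenate([errors, cov_terms]) + b1)
    return np.exp(W2 @ h + b2)

def episode_cost(theta):
    """Toy stand-in for a path-tracking rollout: score the policy on fixed
    random error/covariance samples against an arbitrary gain schedule."""
    rng = np.random.default_rng(0)
    cost = 0.0
    for _ in range(32):
        errors = rng.normal(size=4)
        cov_terms = np.abs(rng.normal(size=2))
        gains = gain_policy(theta, errors, cov_terms)
        target = np.array([1.0 + cov_terms[0], 0.5 + cov_terms[1]])  # arbitrary illustrative target
        cost += np.sum((gains - target) ** 2)
    return cost

n_params = N_HID * (N_IN + 1) + N_OUT * (N_HID + 1)
xbest, es = cma.fmin2(episode_cost, np.zeros(n_params), 0.5,
                      options={'maxfevals': 5000})
```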
4 Citations
Online gain setting method for path tracking using CMA-ES: Application to off-road mobile robot control
- Computer Science, Engineering · 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- 2020
This paper proposes a new approach for online adaptation of control law gains, using neural networks and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm, in order to…
A New Neural Network Feature Importance Method: Application to Mobile Robots Controllers Gain Tuning
- Computer Science · ICINCO
- 2020
This paper proposes a new feature importance method for neural networks, and subsequently a methodology that uses it to determine useful sensor information in high…
Cooperative Avoidance Control with Relative Velocity Information and Collision Sector Functions for Car-like Robots*
- Engineering · 2020 American Control Conference (ACC)
- 2020
Relative velocity information and collision sector functions between the host robot and its neighbors are introduced to adjust the gains of avoidance controllers in real time, so that the controllers produce smoother maneuvers and thus further improve the dynamic performance of the system.
References
Showing 1–10 of 31 references
Sim-to-Real Transfer with Neural-Augmented Robot Simulation
- Computer Science · CoRL
- 2018
This work introduces a method for training a recurrent neural network on the differences between simulated and real robot trajectories and then using this model to augment the simulator; the augmented simulator can be used to learn control policies that transfer significantly better to real environments than policies learned on existing simulators.
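A hedged sketch of the idea summarised above, under the assumption that a recurrent network is fit to the residual between simulated and real state trajectories and then added back onto the simulator's output; the synthetic trajectories, shapes, and training loop below are illustrative, not taken from the paper.

```python
# Illustrative sketch of neural-augmented simulation: fit a GRU to the
# sim-to-real residual, then add its prediction to the simulator output.
# The "real" trajectory here is synthetic; in practice it would come from
# the physical robot.
import torch
import torch.nn as nn

T, D = 50, 4                                   # trajectory length, state dimension (illustrative)
sim_traj = torch.cumsum(0.1 * torch.randn(1, T, D), dim=1)                      # stand-in simulated rollout
real_traj = sim_traj + 0.05 * torch.sin(torch.linspace(0, 6, T)).view(1, T, 1)  # stand-in real rollout

class ResidualRNN(nn.Module):
    """Predicts the sim-to-real state residual from the simulated trajectory."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)

model = ResidualRNN(D)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
target = real_traj - sim_traj                   # residual the network should capture

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(sim_traj), target)
    loss.backward()
    opt.step()

augmented_traj = sim_traj + model(sim_traj)     # neural-augmented simulator output
```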
Incremental Q-learning strategy for adaptive PID control of mobile robots
- Computer Science · Expert Syst. Appl.
- 2017
Robustness of adaptive control of robots: Theory and experiment
- Engineering
- 1991
The robustness of adaptive control of rigid robots and methods for improving robustness in the face of unmodeled dynamics and external disturbances are discussed.
Sim-to-Real: Learning Agile Locomotion For Quadruped Robots
- Computer Science, Engineering · Robotics: Science and Systems
- 2018
This system can learn quadruped locomotion from scratch using simple reward signals, and users can provide an open-loop reference to guide the learning process when more control over the learned gait is needed.
Support vector machine-based two-wheeled mobile robot motion control in a noisy environment
- Engineering
- 2008
In this paper, a support vector machine (SVM)-based control scheme for a two-wheeled mobile robot in a noisy environment is proposed. The noisy environment is defined as the measured data with…
Using CMA-ES for tuning coupled PID controllers within models of combustion engines
- Computer Science · Neural Network World
- 2019
This paper formulates the task as a black-box optimization problem and selects and tunes an appropriate optimization algorithm: Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with a bi-population restart strategy, elitist parent selection, and active covariance matrix adaptation.
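The optimizer configuration named above (bi-population restarts, elitist parent selection, active covariance matrix adaptation) maps onto options exposed by the pycma package. The sketch below applies it to a single PID loop on a toy first-order plant rather than the paper's coupled combustion-engine models, so the plant, cost, and hyperparameters are all assumptions.

```python
# Hedged illustration of the CMA-ES configuration described in this reference,
# applied to tuning one PID loop on a toy plant dy/dt = -y + u (not the
# paper's coupled combustion-engine models).
import numpy as np
import cma  # pycma: pip install cma

def step_response_cost(gains, dt=0.01, horizon=5.0):
    """Integrated squared error of a unit-step response under a discrete PID."""
    kp, ki, kd = np.abs(gains)                  # keep candidate gains non-negative
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(horizon / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                      # explicit Euler step of the toy plant
        cost += err ** 2 * dt
        prev_err = err
        if not np.isfinite(y) or abs(y) > 1e6:  # penalise unstable gain candidates
            return 1e6
    return cost

res = cma.fmin(step_response_cost, [1.0, 0.1, 0.01], 0.5,
               options={'CMA_active': True,     # active covariance matrix adaptation
                        'CMA_elitist': True,    # elitist parent selection
                        'maxfevals': 20000},
               restarts=6, bipop=True)          # bi-population restart strategy
print("tuned PID gains:", np.abs(res[0]))
```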
Continuous control with deep reinforcement learning
- Computer Science · ICLR
- 2016
This work presents an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces, and demonstrates that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
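As a usage-level illustration only (the environment, timestep budget, and hyperparameters are arbitrary choices, not taken from the paper), the algorithm summarised above, DDPG, is available in off-the-shelf libraries such as stable-baselines3:

```python
# Hedged usage sketch: running DDPG from stable-baselines3 on a standard
# continuous-control task. Environment and hyperparameters are illustrative.
import gymnasium as gym
from stable_baselines3 import DDPG

env = gym.make("Pendulum-v1")
model = DDPG("MlpPolicy", env, learning_rate=1e-3, verbose=1)
model.learn(total_timesteps=50_000)

obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```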
Tuning PI controllers for integrator/dead time processes
- Engineering
- 1992
Chien and Fruehauf proposed an internal model control (IMC) approach to selecting the tuning constants for a PI controller in a process consisting of a pure integrator and a dead time. The only…
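For orientation only (this is the commonly quoted form of the Chien and Fruehauf IMC-PI result, recalled here rather than taken from the paper, and worth checking against it): for an integrator-plus-dead-time process $G(s) = K e^{-\theta s}/s$ with IMC filter time constant $\lambda$, the PI settings are

$$
K_c = \frac{2\lambda + \theta}{K\,(\lambda + \theta)^2}, \qquad \tau_I = 2\lambda + \theta .
$$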
Robust sideslip angles observer for accurate off-road path tracking control
- Engineering · Adv. Robotics
- 2017
A control strategy to achieve highly accurate path tracking in off-road conditions is proposed, based on adaptive and predictive techniques to account for sliding effects and actuator properties.