• Corpus ID: 239616109

ModEL: A Modularized End-to-end Reinforcement Learning Framework for Autonomous Driving

  • Guan Wang, Haoyi Niu, Desheng Zhu, Jianming Hu, Xianyuan Zhan, Guyue Zhou
  • Published 22 October 2021
  • Computer Science
  • ArXiv
Heated debates continue over the best autonomous driving framework. The classic modular pipeline is widely adopted in the industry owing to its great interpretability and stability, whereas the end-to-end paradigm has demonstrated considerable simplicity and learnability along with the rise of deep learning. We introduce a new modularized end-to-end reinforcement learning framework (ModEL) for autonomous driving, which combines the merits of both previous approaches. The autonomous driving… 
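The combination described in the abstract, a perception module feeding a learned decision policy whose outputs are executed by a classical controller, can be sketched as follows. All class names and interfaces here are illustrative assumptions, not ModEL's actual implementation:

```python
import numpy as np

# Illustrative sketch of a modularized end-to-end stack:
# perception -> learned decision policy -> low-level control.
# Every name and interface below is an assumption for illustration.

class Perception:
    """Stand-in perception module: compresses a camera frame into a
    low-dimensional state (a learned encoder in a real system)."""
    def encode(self, frame):
        return frame.mean(axis=1)  # toy encoder: per-row means

class DecisionPolicy:
    """Stand-in RL decision module: maps the perceived state to
    high-level targets (speed, steering)."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def act(self, state):
        target_speed = float(self.weights @ state)
        target_steer = float(np.tanh(state.sum()))
        return target_speed, target_steer

class PIDController:
    """Stand-in control module: turns targets into smooth actuation,
    a natural place to enforce physical constraints."""
    def __init__(self, kp=0.5):
        self.kp = kp

    def step(self, target, current):
        return self.kp * (target - current)

def drive_step(frame, perception, policy, controller, current_speed):
    """One tick through the pipeline: camera frame in, actuation out."""
    state = perception.encode(frame)
    target_speed, steer = policy.act(state)
    throttle = controller.step(target_speed, current_speed)
    return throttle, steer
```

Because the decision module sits behind a fixed perception interface and in front of a classical controller, it can be trained end to end with RL while the surrounding modules stay interpretable and replaceable.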


Driving Policy Transfer via Modularity and Abstraction
This work presents an approach to transferring driving policies from simulation to reality via modularity and abstraction; inspired by classic driving systems, it aims to combine the benefits of modular architectures and end-to-end deep learning approaches.
End-to-End Driving Via Conditional Imitation Learning
This work evaluates different architectures for conditional imitation learning in vision-based driving and conducts experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area.
CIRL: Controllable Imitative Reinforcement Learning for Vision-based Self-driving
This work presents a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach that enables the driving agent to achieve higher success rates from vision inputs alone in a high-fidelity car simulator, outperforming supervised imitation learning.
Learning a Decision Module by Imitating Driver's Control Behaviors
This work proposes a hybrid framework to learn neural decisions in the classical modular pipeline through end-to-end imitation learning, preserving the merits of the classical pipeline, such as the strict enforcement of physical and logical constraints, while learning complex driving decisions from data.
Learning to Drive in a Day
This work demonstrates a new framework for autonomous driving that moves away from reliance on defined logical rules, mapping, and direct supervision, and provides a general, easy-to-obtain reward: the distance travelled by the vehicle without the safety driver taking control.
Virtual to Real Reinforcement Learning for Autonomous Driving
A novel realistic translation network is proposed to make a model trained in a virtual environment work in the real world; this is believed to be the first successful case of a driving policy trained by reinforcement learning adapting to real-world driving data.
RL-CycleGAN: Reinforcement Learning Aware Simulation-to-Real
The RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning, is obtained by incorporating the RL-scene consistency loss into unsupervised domain translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
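The RL-scene consistency idea can be sketched as a penalty on how much the sim-to-real generator changes the Q-values of an image. The function shapes below are assumptions for illustration, not the paper's code:

```python
import numpy as np

def rl_scene_consistency_loss(q_values, generator, images):
    """Mean squared change in Q-values caused by the sim-to-real
    generator: if the translation preserves task-relevant content,
    this term stays near zero. `q_values` and `generator` are
    assumed callables, hypothetical stand-ins for the real networks."""
    q_orig = q_values(images)
    q_translated = q_values(generator(images))
    return float(np.mean((q_orig - q_translated) ** 2))
```

An identity generator yields zero loss; in training, adding this term to the CycleGAN objective discourages the generator from altering image content that the Q-function depends on.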
Simulation-Based Reinforcement Learning for Real-World Autonomous Driving
This work uses reinforcement learning in simulation to obtain a driving system controlling a full-size real-world vehicle that takes RGB images from a single camera and their semantic segmentation as input and achieves successful sim-to-real policy transfer.
Urban Driving with Conditional Imitation Learning
This work presents an end-to-end conditional imitation learning approach, combining both lateral and longitudinal control on a real vehicle for following urban routes with simple traffic.
Learning to Drive from Simulation without Real World Labels
This work presents a method for transferring a vision-based lane-following driving policy from simulation to operation on a rural road without any real-world labels, and assesses driving performance using both open-loop regression metrics and closed-loop operation of an autonomous vehicle on rural and urban roads.