Safe Reinforcement Learning with Scene Decomposition for Navigating Complex Urban Environments

@article{Bouton2019SafeRL,
  title={Safe Reinforcement Learning with Scene Decomposition for Navigating Complex Urban Environments},
  author={Maxime Bouton and Alireza Nakhaei and Kikuo Fujimura and Mykel J. Kochenderfer},
  journal={2019 IEEE Intelligent Vehicles Symposium (IV)},
  year={2019},
  pages={1469-1476}
}
Navigating urban environments represents a complex task for automated vehicles. They must reach their goal safely and efficiently while considering a multitude of traffic participants. We propose a modular decision making algorithm to autonomously navigate intersections, addressing challenges of existing rule-based and reinforcement learning (RL) approaches. We first present a safe RL algorithm relying on a model-checker to ensure safety guarantees. To make the decision strategy robust to… 
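
The abstract describes pairing an RL policy with a model checker that certifies the safety of actions. A minimal sketch of how such a check might mask greedy action selection is shown below; the names `q_values`, `prob_safe` (per-action probabilities of satisfying the safety property, as a model checker could supply offline), and the threshold `lambda_safe` are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def masked_greedy_action(q_values, prob_safe, lambda_safe=0.99):
    """Pick the best-valued action among those the model checker deems safe enough.

    q_values:    array of Q(s, a) estimates for the current state (assumed given)
    prob_safe:   array of P(safe | s, a) supplied by a model checker (assumed given)
    lambda_safe: minimum acceptable probability of satisfying the safety property
    """
    safe = prob_safe >= lambda_safe
    if not safe.any():
        # No action meets the threshold: fall back to the least risky one
        # (in practice this would typically be a braking action).
        return int(np.argmax(prob_safe))
    masked_q = np.where(safe, q_values, -np.inf)  # exclude unsafe actions
    return int(np.argmax(masked_q))

# Example with three high-level actions (stop, edge forward, go):
# the highest-value action "go" is masked out because it is too risky.
print(masked_greedy_action(np.array([0.2, 0.5, 0.9]),
                           np.array([1.0, 0.999, 0.42])))  # -> 1
```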


Cooperation-Aware Reinforcement Learning for Merging in Dense Traffic

This work presents a reinforcement learning approach to learn how to interact with drivers with different cooperation levels and shows that the agent successfully learns how to navigate a dense merging scenario with fewer deadlocks than online planning methods.

Reinforcement Learning with Iterative Reasoning for Merging in Dense Traffic

This work proposes a combination of reinforcement learning and game theory to learn merging behaviors, and designs a training curriculum for a reinforcement learning agent using the concept of level-k behavior.
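
The level-k concept mentioned here can be illustrated as a short curriculum loop: the level-k agent is trained against traffic that follows the level-(k-1) policy, starting from a simple level-0 rule. The sketch below assumes a hypothetical `train_agent` call and a rule-based `level0_policy`; it shows the curriculum structure only, not the cited method.

```python
def level_k_curriculum(train_agent, level0_policy, k_max=3):
    """Iteratively train a level-k policy against level-(k-1) opponents.

    train_agent(opponent_policy) -> trained policy   (hypothetical RL training call)
    level0_policy: a simple rule-based behavior assumed as the base of the hierarchy
    """
    policies = {0: level0_policy}
    for k in range(1, k_max + 1):
        # The level-k agent best-responds to traffic behaving at level k-1.
        policies[k] = train_agent(opponent_policy=policies[k - 1])
    return policies
```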

A Multi-Task Reinforcement Learning Approach for Navigating Unsignalized Intersections

The proposed multi-task DQN algorithm combines a vectorized reward function with deep Q-networks to learn to handle multiple intersection navigation tasks concurrently, and it outperforms baselines for all three navigation tasks in several different intersection scenarios.
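
One way to read the combination of a vectorized reward with deep Q-networks is a shared network with one output head per navigation task and one reward component per objective. The sketch below is only a schematic under that assumption; the head layout, objectives, and weights are not taken from the paper.

```python
import numpy as np

def vector_reward(collision, success, step_cost=0.01):
    """Illustrative vectorized reward: one component per objective (assumed split)."""
    return np.array([-1.0 if collision else 0.0,   # safety component
                      1.0 if success else 0.0,     # goal-reaching component
                     -step_cost])                  # efficiency component

def multi_task_q_action(q_heads, task_id):
    """Greedy action from the Q-head of the active task (e.g. left turn,
    right turn, or straight crossing), assuming one head per task."""
    return int(np.argmax(q_heads[task_id]))
```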

Generalizing Decision Making for Automated Driving with an Invariant Environment Representation using Deep Reinforcement Learning

This work proposes an invariant environment representation from the perspective of the ego vehicle that encodes all necessary information for safe decision making and presents a simple occlusion model that enables agents to navigate intersections with occlusions without a significant change in performance.

Multi-task Safe Reinforcement Learning for Navigating Intersections in Dense Traffic

Risk-Aware High-level Decisions for Automated Driving at Occluded Intersections with Reinforcement Learning

A generic risk-aware DQN approach learns high-level actions for driving through unsignalized occluded intersections and uses a risk-based reward function that punishes risky situations instead of only collision failures.
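
A risk-based reward in the spirit described here could penalize risky intermediate states, for instance when the time-to-collision with a crossing vehicle drops below a comfort threshold, rather than only terminal collisions. The thresholds and magnitudes below are illustrative assumptions, not values from the paper.

```python
def risk_aware_reward(collided, reached_goal, time_to_collision,
                      ttc_threshold=2.0, risk_penalty=-0.1):
    """Illustrative reward that punishes risky situations, not only collisions."""
    if collided:
        return -1.0
    if reached_goal:
        return 1.0
    # Intermediate penalty for states that are risky but not yet failures.
    return risk_penalty if time_to_collision < ttc_threshold else 0.0
```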

Safe Reinforcement Learning for Autonomous Lane Changing Using Set-Based Prediction

This paper addresses the lack of safety guarantees by extending reinforcement learning with a safety layer that restricts the action space to the subspace of safe actions, and demonstrates the proposed approach using lane changing in autonomous driving.

Autonomous Navigation through Intersections with Graph Convolutional Networks and Conditional Imitation Learning for Self-driving Cars

Evaluations on unsignaled intersections with various traffic densities demonstrate that the end-to-end trainable neural network outperforms the baselines with a higher success rate and shorter navigation time.

Safe Reinforcement Learning for Urban Driving using Invariably Safe Braking Sets

A novel safety layer is added to the RL process to verify the safety of high-level actions before they are performed, based on invariably safe braking sets to constrain actions for safe lane changing and safe intersection crossing.
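
The braking-set idea can be illustrated with a simple kinematic check: a high-level action is only allowed if, in the state it leads to, the ego vehicle could still brake to a standstill before the conflict zone. The reaction time, deceleration, and the hypothetical `predict_state` model below are assumptions for the sketch, not the formally verified sets used in the cited work.

```python
def can_stop_before(ego_speed, distance_to_conflict,
                    reaction_time=0.5, max_brake=4.0):
    """Check that a full stop is still reachable before the conflict zone.

    ego_speed:            speed after the proposed action [m/s]
    distance_to_conflict: remaining distance to the conflict zone [m]
    reaction_time, max_brake: assumed actuation delay [s] and deceleration [m/s^2]
    """
    braking_distance = ego_speed * reaction_time + ego_speed**2 / (2.0 * max_brake)
    return braking_distance <= distance_to_conflict

def filter_safe_actions(candidate_actions, predict_state,
                        reaction_time=0.5, max_brake=4.0):
    """Keep only actions whose predicted successor state admits a safe stop.

    predict_state(action) -> (speed, distance_to_conflict)   (hypothetical model)
    """
    return [a for a in candidate_actions
            if can_stop_before(*predict_state(a), reaction_time, max_brake)]
```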

Safe Deep Q-Network for Autonomous Vehicles at Unsignalized Intersection

We propose a safe DRL approach for autonomous vehicle (AV) navigation through crowds of pedestrians while making a left turn at an unsignalized intersection. Our method uses two long short-term

References


Navigating Occluded Intersections with Autonomous Vehicles Using Deep Reinforcement Learning

Using recent advances in Deep RL, a system is able to learn policies that surpass the performance of a commonly used heuristic approach, which has limited ability to generalize, in several metrics including task completion time and goal success rate.

Belief state planning for autonomously navigating urban intersections

This paper frames the problem of navigating unsignalized intersections as a partially observable Markov decision process (POMDP) and solves it using a Monte Carlo sampling method; empirical results in simulation show that the resulting policy outperforms a threshold-based heuristic strategy on several relevant metrics that measure both safety and efficiency.
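
Belief-state planning of this kind can be sketched as Monte Carlo evaluation of each high-level action over states sampled from the current belief (a particle set is a common representation). The `belief.sample` and `simulate` interfaces below are placeholders standing in for the cited solver's components, not its actual API.

```python
def mc_action_value(belief, action, simulate, n_samples=100, depth=20):
    """Estimate an action's value by rolling out states sampled from the belief.

    belief:   object with a sample() method returning a full traffic state (assumed)
    simulate(state, action, depth) -> discounted return of a rollout (assumed)
    """
    returns = [simulate(belief.sample(), action, depth) for _ in range(n_samples)]
    return sum(returns) / n_samples

def select_action(belief, actions, simulate):
    """Greedy choice over Monte Carlo value estimates."""
    return max(actions, key=lambda a: mc_action_value(belief, a, simulate))
```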

A Reinforcement Learning Based Approach for Automated Lane Change Maneuvers

This study proposes a reinforcement learning-based approach to train the vehicle agent to learn an automated lane change behavior such that it can intelligently make a lane change under diverse and even unforeseen scenarios.

Tactical Decision Making for Lane Changing with Deep Reinforcement Learning

This paper presents a framework that demonstrates a more structured and data efficient alternative to end-to-end complete policy learning on problems where the high-level policy is hard to formulate using traditional optimization or rule based methods but well designed low-level controllers are available.

Reinforcement Learning with Probabilistic Guarantees for Autonomous Driving

This paper outlines a case study of an intersection scenario involving multiple traffic participants and proposes a generic approach to enforce probabilistic guarantees on an RL agent that outperforms a rule-based heuristic approach in terms of efficiency while exhibiting strong guarantees on safety.

Learning Negotiating Behavior Between Cars in Intersections using Deep Q-Learning

This paper concerns automated vehicles negotiating with other vehicles, typically human-driven, in crossings, with the goal of finding a decision algorithm by learning typical behaviors of other

Augmented vehicle tracking under occlusions for decision-making in autonomous driving

This paper reports on an algorithm that supports autonomous vehicles in reasoning about occluded regions of their environment to make safe, reliable decisions, and that can handle significantly prolonged occlusions compared to a standard dynamic object tracking system.

Utility Decomposition with Deep Corrections for Scalable Planning under Uncertainty

An approach inspired by multi-fidelity optimization learns a correction term with a neural network representation, leading to a significant improvement over the decomposition method alone and outperforming a policy trained on the full-scale problem without utility decomposition.
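
Utility decomposition with a learned correction can be written compactly: the global action value is approximated by combining per-entity utilities (e.g. one simple pairwise ego-vehicle subproblem per traffic participant) and adding a correction term trained on the full problem. The sketch below assumes precomputed per-entity Q functions and a `correction` network; the names are illustrative, not the paper's code.

```python
def corrected_q(per_entity_q, correction, entity_states, full_state, action):
    """Approximate Q(s, a) as a sum of per-entity utilities plus a learned correction.

    per_entity_q: list of callables q_i(s_i, a), each solved on a simple
                  pairwise subproblem (assumed given)
    correction:   callable delta(s, a) trained on the full-scale problem (assumed)
    """
    base = sum(q_i(s_i, action) for q_i, s_i in zip(per_entity_q, entity_states))
    return base + correction(full_state, action)

def decomposed_action(per_entity_q, correction, entity_states, full_state, actions):
    """Greedy action under the corrected decomposition."""
    return max(actions, key=lambda a: corrected_q(per_entity_q, correction,
                                                  entity_states, full_state, a))
```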

Automated Driving in Uncertain Environments: Planning With Interaction and Uncertain Maneuver Prediction

This work demonstrates that the proposed approach performs nearly as well as with full prior information about the intentions of the other vehicles and clearly outperforms reactive approaches.

Probabilistic decision-making under uncertainty for autonomous driving using continuous POMDPs

This paper formulates the task of driving as a continuous Partially Observable Markov Decision Process (POMDP) that can be automatically optimized for different scenarios and employs a continuous POMDP solver that learns a good representation of the specific situation.