A Safe Hierarchical Planning Framework for Complex Driving Scenarios based on Reinforcement Learning

@article{Li2021ASH,
  title={A Safe Hierarchical Planning Framework for Complex Driving Scenarios based on Reinforcement Learning},
  author={Jinning Li and Liting Sun and Masayoshi Tomizuka and Wei Zhan},
  journal={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  pages={2660-2666}
}
  • Jinning Li, Liting Sun, Masayoshi Tomizuka, Wei Zhan
  • Published 17 January 2021
  • Computer Science
  • 2021 IEEE International Conference on Robotics and Automation (ICRA)
Autonomous vehicles need to handle various traffic conditions and make safe and efficient decisions and maneuvers. However, on the one hand, a single optimization/sampling-based motion planner cannot efficiently generate safe trajectories in real time, particularly when there are many interactive vehicles nearby. On the other hand, end-to-end learning methods cannot assure the safety of their outcomes. To address this challenge, we propose a hierarchical behavior planning framework with a set of…
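
The abstract above is truncated; as a loose illustration of the control flow such a hierarchy implies (the function names, the planner placeholder, and the fallback rule below are assumptions for illustration, not the authors' implementation), a high-level learned policy proposes a behavior, a low-level planner turns it into a trajectory, and an explicit safety check can reject it in favor of a backup maneuver:

# Illustrative sketch of a hierarchical behavior/motion planning loop.
# NOTE: the function names, the placeholder planner, and the fallback rule
# are assumptions for illustration; they are not taken from the paper.

import numpy as np


def propose_behavior(rl_policy, observation):
    """High-level RL policy picks a discrete behavior / sub-goal."""
    return rl_policy(observation)  # e.g. "yield", "merge", "keep_lane"


def plan_trajectory(behavior, ego_state, neighbors, horizon=30):
    """Low-level optimization/sampling-based planner (placeholder).

    Returns a (horizon, 2) array of planned positions; a real planner
    would solve a constrained optimal-control problem here.
    """
    goal = np.asarray(ego_state[:2]) + np.array([10.0, 0.0])
    return np.linspace(ego_state[:2], goal, horizon)


def is_safe(trajectory, neighbors, min_gap=2.0):
    """Reject trajectories that pass too close to any neighboring vehicle."""
    if trajectory is None:
        return False
    dists = [np.linalg.norm(trajectory - np.asarray(p[:2]), axis=1).min()
             for p in neighbors]
    return all(d > min_gap for d in dists)


def step(rl_policy, observation, ego_state, neighbors, backup_plan):
    behavior = propose_behavior(rl_policy, observation)
    trajectory = plan_trajectory(behavior, ego_state, neighbors)
    # Safety check: fall back to a conservative backup maneuver if the
    # learned behavior cannot be realized safely.
    return trajectory if is_safe(trajectory, neighbors) else backup_plan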

Bi-Level Optimization Augmented with Conditional Variational Autoencoder for Autonomous Driving in Dense Traffic

A parameterized bi-level optimization that jointly computes the optimal behavioural decisions and the resulting downstream trajectory is presented; this novel alternative outperforms state-of-the-art model predictive control and RL approaches in terms of collision rate while being competitive in driving efficiency.

Physics-Aware Safety-Assured Design of Hierarchical Neural Network based Planner

This work proposes a hierarchical neural network based planner that analyzes the underlying physical scenarios of the system and learns a system-level behavior planning scheme with multiple scenario-specific motion-planning strategies, and it develops an efficient verification method.

Cola-HRL: Continuous-Lattice Hierarchical Reinforcement Learning for Autonomous Driving

This work proposes a Continuous-Lattice Hierarchical RL (Cola-HRL) method for autonomous driving tasks to make high-quality decisions in various scenarios, utilizing the continuous-lattice module to generate reasonable goals, ensuring temporal and spatial reachability.

Decision-making and Planning Framework with Prediction-Guided Strategy Tree Search Algorithm for Uncontrolled Intersections

A cooperative framework is proposed, composed of a Primary Driver for motion planning and a Subordinate Driver for decision-making, which works as a collision checker and a low-level motion planner to generate a safe and smooth trajectory.

Safe Reinforcement Learning for Urban Driving using Invariably Safe Braking Sets

A novel safety layer is added to the RL process to verify the safety of high-level actions before they are performed, based on invariably safe braking sets to constrain actions for safe lane changing and safe intersection crossing.
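
To give a concrete flavor of such a check (the constant-deceleration model and the parameter values below are illustrative assumptions, not the paper's invariably-safe-set formulation), a high-level action can be accepted only if an emergency-braking maneuver from the resulting state still keeps the ego vehicle behind the lead vehicle:

# Illustrative safety check in the spirit of "invariably safe braking sets":
# accept a high-level action only if full braking from the resulting state
# still avoids a collision.  Constant-deceleration kinematics and the
# parameter values are assumptions for illustration.

def braking_distance(speed, max_decel):
    """Distance [m] needed to stop from `speed` [m/s] at deceleration `max_decel` [m/s^2]."""
    return speed ** 2 / (2.0 * max_decel)


def action_is_safe(gap, ego_speed, lead_speed,
                   max_decel=6.0, reaction_time=0.5, margin=2.0):
    """True if emergency braking keeps the ego vehicle behind the lead vehicle.

    gap: bumper-to-bumper distance [m] after applying the candidate action.
    """
    ego_stop = ego_speed * reaction_time + braking_distance(ego_speed, max_decel)
    lead_stop = braking_distance(lead_speed, max_decel)
    return gap + lead_stop - ego_stop > margin


# Example: a 20 m gap at 15 m/s behind a 10 m/s lead vehicle.
print(action_is_safe(gap=20.0, ego_speed=15.0, lead_speed=10.0))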

Safety-based Reinforcement Learning Longitudinal Decision for Autonomous Driving in Crosswalk Scenarios

A novel reinforcement learning method is presented for resolving interaction uncertainty in the decision-making problem; it improves driving safety and efficiency significantly compared to alternative approaches and can be generalized to more difficult scenarios.

Interactive Planning for Autonomous Driving in Intersection Scenarios Without Traffic Signs

A planning framework based on the partially observable Markov decision process (POMDP) is proposed to ensure social compliance and optimize the motion response of autonomous vehicles; it utilizes scattering methods for probability updating and intent determination to improve the algorithm’s adaptability to real-world scenarios.

Adaptive Decision Making at the Intersection for Autonomous Vehicles Based on Skill Discovery

This work proposes a hierarchical framework that can autonomously accumulate and reuse knowledge and decomposes complex problems into multiple basic subtasks to reduce the difficulty.

Interaction-aware Decision-making for Automated Vehicles using Social Value Orientation

The authors introduce a framework based on Social Value Orientation and Deep Reinforcement Learning that is capable of generating decision-making policies with different driving styles, and show that the developed model exhibits natural driving behaviours, such as short-stopping to facilitate a pedestrian’s crossing.

Hierarchical Planning Through Goal-Conditioned Offline Reinforcement Learning

This paper proposes a hierarchical planning framework consisting of a low-level goal-conditioned RL policy and a high-level goal planner, and adopts a Conditional Variational Autoencoder to sample meaningful high-dimensional sub-goal candidates and to solve the high-level long-term strategy optimization problem.
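
As a minimal sketch of the sub-goal sampling idea (the network sizes, the stand-in value function, and the scoring rule below are illustrative assumptions rather than the paper's architecture), a trained CVAE decoder can be sampled conditioned on the current state and the candidates ranked by a learned value:

# Minimal sketch: sample sub-goal candidates from a (trained) CVAE decoder
# conditioned on the current state, then score them with a value function.
# Network sizes and the scoring rule are illustrative assumptions.

import torch
import torch.nn as nn

STATE_DIM, GOAL_DIM, LATENT_DIM = 16, 2, 8

decoder = nn.Sequential(      # stands in for a trained CVAE decoder
    nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, GOAL_DIM))
value_fn = nn.Sequential(     # stands in for a learned goal-conditioned value
    nn.Linear(STATE_DIM + GOAL_DIM, 64), nn.ReLU(), nn.Linear(64, 1))


def propose_subgoal(state, num_candidates=32):
    """Draw latent samples, decode goal candidates, return the highest-value one."""
    with torch.no_grad():
        z = torch.randn(num_candidates, LATENT_DIM)
        s = state.expand(num_candidates, -1)
        goals = decoder(torch.cat([s, z], dim=-1))        # (N, GOAL_DIM)
        scores = value_fn(torch.cat([s, goals], dim=-1))  # (N, 1)
        return goals[scores.squeeze(-1).argmax()]


subgoal = propose_subgoal(torch.zeros(1, STATE_DIM))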

References

SHOWING 1-10 OF 28 REFERENCES

Constrained iterative LQR for on-road autonomous driving motion planning

The Constrained Iterative LQR (CILQR) algorithm is proposed to handle constraints in iLQR; simulation case studies show the capability of the CILQR algorithm to solve the on-road driving motion planning problem.
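
One common way to fold constraints into an iLQR-style solver, sketched here with a smooth exponential barrier (the barrier form and the weights are illustrative assumptions, not necessarily the exact CILQR formulation), is to add a penalty term for each inequality constraint g(x, u) <= 0 to the stage cost:

# Illustrative stage cost for constraint handling inside an iLQR iteration:
# each inequality constraint g(x, u) <= 0 is replaced by a smooth barrier
# term added to the quadratic tracking cost.  The exponential barrier and
# the weights below are assumptions for illustration.

import numpy as np


def barrier(g_value, q1=1.0, q2=5.0):
    """Smooth penalty that grows rapidly as g(x, u) approaches or exceeds 0."""
    return q1 * np.exp(q2 * g_value)


def stage_cost(x, u, x_ref, Q, R, constraints):
    """Quadratic tracking cost plus barrier terms for every constraint."""
    dx = x - x_ref
    cost = dx @ Q @ dx + u @ R @ u
    for g in constraints:      # each g maps (x, u) -> scalar, feasible if <= 0
        cost += barrier(g(x, u))
    return cost


# Example constraint: keep speed (state component 3) below 15 m/s.
speed_limit = lambda x, u: x[3] - 15.0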

EvolveGraph: Multi-Agent Trajectory Prediction with Dynamic Relational Reasoning

This paper proposes a generic trajectory forecasting framework with explicit relational structure recognition and prediction via latent interaction graphs among multiple heterogeneous, interactive agents and introduces a double-stage training pipeline which not only improves training efficiency and accelerates convergence, but also enhances model performance.

Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving

A hierarchical reinforcement and imitation learning approach is presented that consists of low-level policies learned by IL for discrete driving modes and a high-level policy learned by RL that switches between driving modes, achieving higher efficiency and safety compared to other methods.

Learning hierarchical behavior and motion planning for autonomous driving

This work introduces hierarchical behavior and motion planning (HBMP) to explicitly model behavior in a learning-based solution by integrating a classical sampling-based motion planner, whose optimal cost is regarded as the reward for high-level behavior learning.

Interpretable End-to-End Urban Autonomous Driving With Latent Deep Reinforcement Learning

An interpretable deep reinforcement learning method for end-to-end autonomous driving is presented, which is able to handle complex urban scenarios and provides a better explanation of how the car reasons about the driving environment.

Social Attention for Autonomous Decision-Making in Dense Traffic

This work proposes an attention-based architecture that satisfies all these properties and explicitly accounts for the existing interactions between the traffic participants; it is shown that this architecture leads to significant performance gains and is able to capture interaction patterns that can be visualised and qualitatively interpreted.
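
As a rough sketch of the underlying mechanism (standard scaled dot-product attention; the dimensions and projection layers below are illustrative assumptions, not the paper's exact architecture), the ego vehicle's features can query the features of surrounding vehicles to produce interaction-aware weights:

# Illustrative attention step: the ego vehicle's feature vector queries the
# features of surrounding vehicles, producing interaction-aware weights.
# Dimensions and projection layers are assumptions for illustration.

import torch
import torch.nn as nn

FEAT_DIM, ATTN_DIM = 32, 64
q_proj = nn.Linear(FEAT_DIM, ATTN_DIM)
k_proj = nn.Linear(FEAT_DIM, ATTN_DIM)
v_proj = nn.Linear(FEAT_DIM, ATTN_DIM)


def social_attention(ego_feat, neighbor_feats):
    """Weight neighboring vehicles by their relevance to the ego vehicle.

    ego_feat: (FEAT_DIM,), neighbor_feats: (N, FEAT_DIM).
    Returns an (ATTN_DIM,) context vector and the (N,) attention weights.
    """
    q = q_proj(ego_feat)                                    # (ATTN_DIM,)
    k = k_proj(neighbor_feats)                              # (N, ATTN_DIM)
    v = v_proj(neighbor_feats)                              # (N, ATTN_DIM)
    weights = torch.softmax(k @ q / ATTN_DIM ** 0.5, dim=0) # (N,)
    return weights @ v, weights


context, attn = social_attention(torch.zeros(FEAT_DIM), torch.zeros(5, FEAT_DIM))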

Hierarchical Reinforcement Learning Method for Autonomous Vehicle Behavior Planning

This work proposes a behavior planning structure based on hierarchical reinforcement learning (HRL) which is capable of performing autonomous vehicle planning tasks in simulated environments with multiple sub-goals and shows that the proposed method converges to an optimal policy faster than traditional RL methods.

Safety Augmented Value Estimation From Demonstrations (SAVED): Safe Deep Model-Based RL for Sparse Cost Robotic Tasks

A new model-based reinforcement learning algorithm, SAVED, which uses supervision that only identifies task completion and a modest set of suboptimal demonstrations to constrain exploration and learn efficiently while handling complex constraints, making it feasible to safely learn a control policy directly on a real robot in less than an hour.

INTERACTION Dataset: An INTERnational, Adversarial and Cooperative moTION Dataset in Interactive Driving Scenarios with Semantic Maps

An INTERnational, Adversarial and Cooperative moTION dataset (INTERACTION dataset) in interactive driving scenarios with semantic maps for highly complex behavior such as negotiations, aggressive/irrational decisions and traffic rule violations is presented.

A Deep Reinforcement Learning Driving Policy for Autonomous Road Vehicles

This work proposes a driving policy based on Reinforcement Learning that makes minimal or no assumptions about the environment; it is compared against an optimal policy derived via Dynamic Programming and against manual driving simulated by the SUMO traffic simulator.