End-To-End Interpretable Neural Motion Planner

Wenyuan Zeng, Wenjie Luo, Simon Suo, Abbas Sadat, Bin Yang, Sergio Casas, Raquel Urtasun. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
In this paper, we propose a neural motion planner for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road users. Our method samples a set of diverse, physically possible trajectories and chooses the one with the minimum learned cost. Importantly, our cost volume is able to naturally capture multi-modality. We demonstrate the effectiveness of our approach on real-world driving data captured in several cities.
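The selection step described above — score a set of sampled trajectories against a learned cost volume and keep the cheapest — can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the cost volume here is random placeholder data (in the paper it is produced by a convolutional backbone from LiDAR and HD maps), and the random-walk sampler stands in for the paper's physically feasible trajectory sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial grid (H x W cells) and planning horizon (T timesteps).
H, W, T = 100, 100, 10
# Learned cost c(t, y, x); placeholder random values for the sketch.
cost_volume = rng.random((T, H, W))

def sample_trajectory(x0, y0, rng):
    """Random-walk stand-in for sampling a physically feasible trajectory."""
    xs, ys = [x0], [y0]
    for _ in range(T - 1):
        xs.append(int(np.clip(xs[-1] + rng.integers(-1, 2), 0, W - 1)))
        ys.append(int(np.clip(ys[-1] + rng.integers(-1, 2), 0, H - 1)))
    return list(zip(xs, ys))

def trajectory_cost(traj, cost_volume):
    """Sum the cost-volume entries the trajectory visits over time."""
    return sum(cost_volume[t, y, x] for t, (x, y) in enumerate(traj))

# Sample a diverse candidate set and keep the minimum-cost trajectory.
candidates = [sample_trajectory(50, 50, rng) for _ in range(1000)]
best = min(candidates, key=lambda tr: trajectory_cost(tr, cost_volume))
```

Because planning is reduced to evaluating candidates against one cost volume, multiple distinct low-cost modes (e.g. "pass left" vs. "pass right") can coexist in the volume without the planner averaging between them.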


End-to-End Interactive Prediction and Planning with Optical Flow Distillation for Autonomous Driving

This paper proposes an end-to-end interactive neural motion planner (INMP) for autonomous driving that first generates a feature map in bird's-eye-view space, which is then processed to detect other agents and perform interactive prediction and planning jointly.

MPNP: Multi-Policy Neural Planner for Urban Driving

This work proposes to explore the multi-modality of the planning problem and force the neural planner to explicitly consider different policies by generating future trajectories conditioned on every possible reference line, which can simply be the centerline of a surrounding lane.

Learning Interpretable End-to-End Vision-Based Motion Planning for Autonomous Driving with Optical Flow Distillation

This work proposes an interpretable end-to-end vision-based motion planning approach for autonomous driving, referred to as IVMP, and develops an optical flow distillation paradigm, which can effectively enhance the network while still maintaining its real-time performance.

Perceive, Predict, and Plan: Safe Motion Planning Through Interpretable Semantic Representations

A novel end-to-end learnable network that performs joint perception, prediction, and motion planning for self-driving vehicles and produces interpretable intermediate representations; this is achieved by a novel differentiable semantic occupancy representation that is explicitly used as a cost by the motion planning process.

Differentiable Integrated Motion Prediction and Planning with Learnable Cost Function for Autonomous Driving

An end-to-end differentiable framework that integrates the prediction and planning modules and is able to learn the cost function from data is proposed, showing the ability to handle complex urban driving scenarios and robustness against the distributional shift that imitation learning methods suffer from.

ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning

This paper proposes a spatial-temporal feature learning scheme towards a set of more representative features for perception, prediction and planning tasks simultaneously, which is called ST-P3 and is the first to systematically investigate each part of an interpretable end-to-end vision-based autonomous driving system.

Jointly Learnable Behavior and Trajectory Planning for Self-Driving Vehicles

Experiments on real-world self-driving data demonstrate that the jointly learned planner performs significantly better in terms of both similarity to human driving and other safety metrics, compared to baselines that do not adopt joint behavior and trajectory learning.

The Importance of Prior Knowledge in Precise Multimodal Prediction

This paper designs a framework that leverages REINFORCE to incorporate non-differentiable priors over sampled trajectories from a probabilistic model, thus optimizing the whole distribution and resulting in safer motion plans for the self-driving vehicle.
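The key trick named above — using REINFORCE so that a non-differentiable objective can still shape a sampled distribution — can be illustrated with a minimal sketch. Everything here is a toy stand-in: a one-dimensional Gaussian "trajectory distribution" with a learnable mean, and a hypothetical non-differentiable reward (a safe-band indicator, loosely analogous to a collision-free prior). The score-function estimator grad ≈ E[r(x) · ∇ log p(x)] needs only samples and rewards, never gradients of r.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # policy parameters: learnable mean, fixed std

def reward(x):
    # Hypothetical non-differentiable prior: 1 inside a "safe" band, else 0.
    return 1.0 if abs(x - 2.0) < 0.5 else 0.0

for step in range(300):
    xs = mu + sigma * rng.standard_normal(64)        # sample a batch
    rs = np.array([reward(x) for x in xs])           # non-differentiable scores
    grad_logp = (xs - mu) / sigma**2                 # d/dmu of log N(x; mu, sigma)
    grad_mu = float(np.mean(rs * grad_logp))         # REINFORCE gradient estimate
    mu += 0.5 * grad_mu                              # gradient ascent on E[r]

# mu drifts toward the high-reward region around 2.0 even though
# reward() has no usable gradient of its own.
```

The same estimator applies unchanged when the samples are full trajectories and the reward is any black-box safety check, which is what makes it attractive for incorporating hard priors into a learned trajectory distribution.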

Safe Real-World Autonomous Driving by Learning to Predict and Plan with a Mixture of Experts

This paper proposes modeling a distribution over multiple future trajectories for both the self-driving vehicle and other road agents, using a unified neural network architecture for prediction and planning, and successfully deploys it on a self-driving vehicle on urban public roads, showing that it drives safely without compromising comfort.

Driving in Real Life with Inverse Reinforcement Learning

The first learning-based planner to drive a car in dense, urban traffic using Inverse Reinforcement Learning (IRL) is introduced, with a simple design due to only learning the trajectory scoring function, relatively interpretable features, and strong real-world performance.

References

Conditional Affordance Learning for Driving in Urban Environments

This work proposes a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs, and is the first to handle traffic lights and speed signs by using image-level labels only.

End to End Learning for Self-Driving Cars

A convolutional neural network is trained to map raw pixels from a single front-facing camera directly to steering commands and it is argued that this will eventually lead to better performance and smaller systems.

Wiggling through complex traffic: Planning trajectories constrained by predictions

A novel approach to planning trajectories for autonomous vehicles that provides a flexible problem description and a trajectory planner without prior specialization to distinct classes of maneuvers, capable of considering multiple lanes including the predicted dynamics of other traffic participants, while remaining real-time capable.

End-to-End Driving Via Conditional Imitation Learning

This work evaluates different architectures for conditional imitation learning in vision-based driving and conducts experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area.

Baidu Apollo EM Motion Planner

A real-time motion planning system based on the Baidu Apollo (open source) autonomous driving platform that aims to address the industrial level-4 motion planning problem while considering safety, comfort and scalability is introduced.

Search-Based Optimal Motion Planning for Automated Driving

The capability of the algorithm to devise plans in both fast and slow driving conditions, even when a full stop is required, is demonstrated, and the approach is validated in a simulation study with realistic traffic scenarios.

Parallel Algorithms for Real-time Motion Planning

A novel five-dimensional search space formulation that includes both spatial and temporal dimensions, and respects the kinematic and dynamic constraints on a typical automobile is proposed, which is particularly effective at generating robust merging behaviors.

DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving

This paper proposes to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving and argues that the direct perception representation provides the right level of abstraction.

Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net

A novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor is proposed, which is very efficient in terms of both memory and computation.

ALVINN: An Autonomous Land Vehicle in a Neural Network

ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following that can effectively follow real roads under certain field conditions.