COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles

Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, Yuke Zhu
Optical sensors and learning algorithms for autonomous vehicles have advanced dramatically in the past few years. Nonetheless, the reliability of today's autonomous vehicles is hindered by limited line-of-sight sensing capability and the brittleness of data-driven methods in handling extreme situations. With recent developments in telecommunication technologies, cooperative perception with vehicle-to-vehicle communications has become a promising paradigm to enhance autonomous driving in…


Latency-Aware Collaborative Perception
Experimental results show that the proposed latency-aware collaborative perception system with SyncNet can outperform the state-of-the-art collaborative perception method by 15.6% in the communication-latency scenario and keeps collaborative perception superior to single-agent perception under severe latency.


Cooperative Perception with Deep Reinforcement Learning for Connected Vehicles
A cooperative perception scheme with deep reinforcement learning enhances detection accuracy for surrounding objects, mitigates the network load in vehicular networks, and improves communication reliability.
AutoCast: scalable infrastructure-less cooperative perception for distributed collaborative driving
Extensive evaluation results under different scenarios show that, unlike competing approaches, AUTOCAST can avoid crashes and near-misses which occur frequently without cooperative perception, its performance scales gracefully in dense traffic scenarios, its transmission schedules can be completed on the real radio testbed, and its scheduling algorithm is near-optimal with negligible computation overhead.
OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication
This work presents the first large-scale open simulated dataset for Vehicle-to-Vehicle perception, and proposes a new Attentive Intermediate Fusion pipeline to aggregate information from multiple connected vehicles.
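To make the idea of attentive intermediate fusion concrete, here is a minimal, hypothetical sketch of attention-weighted aggregation of per-vehicle feature maps. This is not OPV2V's actual pipeline; the dot-product scoring against the ego vehicle's features is a stand-in for a learned attention module, and all shapes and names are illustrative.

```python
# Hypothetical sketch of attentive intermediate fusion across connected
# vehicles. Each vehicle contributes a spatial feature map; per-cell
# attention weights decide how much each vehicle's features contribute
# to the fused map.
import numpy as np

def attentive_fusion(features: np.ndarray) -> np.ndarray:
    """Fuse per-vehicle feature maps of shape (V, C, H, W) into (C, H, W).

    Scores here are a dot product of each vehicle's feature vector with
    the ego vehicle's (index 0), softmax-normalized over vehicles at
    every spatial cell -- a placeholder for a learned attention head.
    """
    ego = features[0]                                   # (C, H, W)
    scores = np.einsum('vchw,chw->vhw', features, ego)  # (V, H, W)
    scores -= scores.max(axis=0, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)       # softmax over vehicles
    return np.einsum('vhw,vchw->chw', weights, features)

# Example: three vehicles sharing 4-channel 8x8 intermediate features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4, 8, 8)).astype(np.float32)
fused = attentive_fusion(feats)
print(fused.shape)  # (4, 8, 8)
```

Because the weights form a convex combination at each spatial cell, the fused features stay within the per-cell range spanned by the individual vehicles' features, which keeps the aggregation well-behaved regardless of how many vehicles join.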
AVR: Augmented Vehicular Reality
It is shown that AVR is feasible using off-the-shelf wireless technologies, and it can qualitatively change the decisions made by autonomous vehicle path planning algorithms.
Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
This work demonstrates that imitation learning policies based on existing sensor fusion methods under-perform in the presence of a high density of dynamic agents and complex scenarios, which require global contextual reasoning, and proposes TransFuser, a novel Multi-Modal Fusion Transformer to integrate image and LiDAR representations using attention.
EMP: edge-assisted multi-vehicle perception
The core methodological contribution is to make the sensor data sharing scalable, adaptive, and resource-efficient over oftentimes highly fluctuating wireless links through a series of novel algorithms, which are then integrated into a full-fledged cooperative sensing pipeline.
The Impact of Cooperative Perception on Decision Making and Planning of Autonomous Vehicles
An on-road sensing system that provides a see-through/lifted-seat/satellite view to drivers is presented, along with an analysis of how the extended perception capability contributes to situation awareness on the road, and methods for safer and smoother autonomous driving that exploit the augmented situation awareness are provided.
V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction
This paper explores the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles and shows that the approach of sending compressed deep feature map activations achieves high accuracy while satisfying communication bandwidth requirements.
Cooper: Cooperative Perception for Connected Autonomous Vehicles Based on 3D Point Clouds
This work is the first to study raw-data-level cooperative perception for enhancing the detection ability of self-driving systems, and demonstrates that it is possible to transmit point cloud data for cooperative perception via existing vehicular network technologies.
Conditional Affordance Learning for Driving in Urban Environments
This work proposes a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs, and is the first to handle traffic lights and speed signs by using image-level labels only.