RadSegNet: A Reliable Approach to Radar Camera Fusion

  • Kshitiz Bansal, Keshav Rungta, Dinesh Bharadia
Perception systems for autonomous driving have seen significant advances in performance over the last few years. However, these systems struggle to remain robust in extreme weather conditions, because the primary sensors in a typical sensor suite, lidars and cameras, degrade under such conditions. Camera-radar fusion systems therefore offer a unique opportunity for reliable, high-quality, all-weather perception. Cameras…



RADIATE: A Radar Dataset for Automotive Perception

This paper presents the RAdar Dataset In Adverse weather (RADIATE), which aims to facilitate research on object detection, tracking, and scene understanding with radar sensing for safe autonomous driving; it is the first public radar dataset to provide high-resolution radar images on public roads with a large number of labelled road actors.

Automotive Radar Dataset for Deep Learning Based 3D Object Detection

  • M. Meyer, G. Kuschk
  • Computer Science, Environmental Science
    2019 16th European Radar Conference (EuRAD)
  • 2019
A radar-centric automotive dataset based on radar, lidar and camera data for the purpose of 3D object detection is presented, and the complete process of generating such a dataset is described.

Pseudo-LiDAR From Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving

This paper proposes converting image-based depth maps to pseudo-LiDAR representations, essentially mimicking the LiDAR signal, and achieves impressive improvements over the existing state of the art in image-based 3D detection performance.
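The core of the pseudo-LiDAR idea is to back-project each pixel of a predicted depth map into a 3D point using the camera intrinsics. A minimal sketch, assuming a pinhole camera model with known (here hypothetical) focal lengths and principal point:

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3D point cloud.

    This is a sketch of the pseudo-LiDAR conversion, not the paper's
    exact pipeline; intrinsics (fx, fy, cx, cy) are assumed known.
    """
    h, w = depth.shape
    # pixel coordinate grids: u along width, v along height
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # pinhole back-projection, x axis
    y = (v - cy) * z / fy  # pinhole back-projection, y axis
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# toy example: a constant 5 m depth map with made-up intrinsics
pts = depth_to_pseudo_lidar(np.full((4, 4), 5.0), fx=100, fy=100, cx=2, cy=2)
print(pts.shape)  # (16, 3)
```

The resulting (N, 3) array can then be fed to any LiDAR-based 3D detector, which is the bridging step the paper exploits.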

CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection

  • Ramin Nabati, H. Qi
  • Computer Science
    2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
  • 2021
This paper proposes a middle-fusion approach to exploit both radar and camera data for 3D object detection and solves the key data association problem using a novel frustum-based method.

PointPainting: Sequential Fusion for 3D Object Detection

PointPainting is proposed, a sequential fusion method that projects lidar points into the output of an image-only semantic segmentation network and appends the class scores to each point; the paper also shows how latency can be minimized through pipelining.
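The decoration step described above can be sketched in a few lines: each lidar point is projected to a pixel, and that pixel's per-class segmentation scores are concatenated onto the point. The projection function here is a hypothetical stand-in for the full camera model:

```python
import numpy as np

def paint_points(points, seg_scores, proj):
    """Append per-pixel class scores to each lidar point (PointPainting-style sketch).

    points:     (N, 3) lidar points
    seg_scores: (H, W, C) per-pixel class scores from an image segmentation net
    proj:       callable mapping (N, 3) points to (N, 2) integer (u, v) pixels;
                a hypothetical placeholder for the real camera projection
    """
    uv = proj(points)                         # (N, 2) pixel indices
    scores = seg_scores[uv[:, 1], uv[:, 0]]   # (N, C) scores at those pixels
    return np.concatenate([points, scores], axis=1)  # (N, 3 + C) painted points

# toy example: 2 points, a 2x2 image with 3-class scores
seg = np.zeros((2, 2, 3))
seg[0, 1] = [0.1, 0.7, 0.2]
pts = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
painted = paint_points(pts, seg, lambda p: np.array([[1, 0], [0, 1]]))
print(painted.shape)  # (2, 6)
```

The painted points keep the original (x, y, z) geometry, so any point-based 3D detector can consume them unchanged, which is what makes the fusion "sequential".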

Objects as Points

The center-point-based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding-box-based detectors; it performs competitively with sophisticated multi-stage methods while running in real time.

Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges

This review paper systematically summarizes methodologies and discusses challenges for deep multi-modal object detection and semantic segmentation in autonomous driving, with an overview of on-board sensors on test vehicles, open datasets, and relevant background.

ACDC: The Adverse Conditions Dataset with Correspondences for Semantic Driving Scene Understanding

A detailed empirical study demonstrates the challenges that the adverse domains of ACDC pose to state-of-the-art supervised and unsupervised approaches and indicates the value of the dataset in steering future progress in the field.

Warping of Radar Data Into Camera Image for Cross-Modal Supervision in Automotive Applications

The feasibility of the overall framework for automatic label generation for range-Doppler (RD) spectra is verified by evaluating the performance of neural networks trained with the proposed framework for direction-of-arrival estimation.