RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by Camera-Radar Fused Object 3D Localization

@article{Wang2021RODNetAR,
  title={RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by Camera-Radar Fused Object 3D Localization},
  author={Yizhou Wang and Zhongyu Jiang and Yudong Li and Jenq-Neng Hwang and Guanbin Xing and Hui Liu},
  journal={IEEE Journal of Selected Topics in Signal Processing},
  year={2021},
  volume={15},
  pages={954-967}
}
Various autonomous or assisted driving strategies have been facilitated by accurate and reliable perception of the environment around a vehicle. Among the commonly used sensors, radar is usually considered a robust and cost-effective solution even in adverse driving scenarios, e.g., weak/strong lighting or bad weather. Instead of fusing potentially unreliable information from all available sensors, perception from pure radar data becomes a valuable alternative that is…
Rethinking of Radar’s Role: A Camera-Radar Dataset and Systematic Annotator via Coordinate Alignment
TLDR
A new dataset, named CRUW, is presented with a systematic annotator and a performance evaluation system to address the radar object detection (ROD) task, which aims to classify and localize objects in 3D purely from radar's radio frequency (RF) images.
Multi-View Radar Semantic Segmentation
TLDR
This work proposes several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically, and demonstrates that the best model outperforms alternative models, derived either from the semantic segmentation of natural images or from radar scene understanding, while requiring significantly fewer parameters.
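To make the notion of "views" concrete, below is a minimal NumPy sketch (not the paper's actual pipeline) that collapses one axis of a hypothetical range-angle-Doppler power tensor at a time to obtain range-angle, range-Doppler, and angle-Doppler views; the tensor shape and the log-power aggregation are illustrative assumptions.

import numpy as np

# Hypothetical range-angle-Doppler (RAD) power tensor; the shape and the
# log-power aggregation below are illustrative assumptions, not the exact
# preprocessing used in the multi-view segmentation work.
rad = np.abs(np.random.randn(256, 256, 64)) ** 2   # (range, angle, Doppler)

def view(tensor, collapse_axis):
    """Collapse one axis of the RAD tensor to obtain a 2D 'view'."""
    return 10.0 * np.log10(tensor.sum(axis=collapse_axis) + 1e-12)

ra_view = view(rad, collapse_axis=2)  # range-angle   (Doppler collapsed)
rd_view = view(rad, collapse_axis=1)  # range-Doppler (angle collapsed)
ad_view = view(rad, collapse_axis=0)  # angle-Doppler (range collapsed)

print(ra_view.shape, rd_view.shape, ad_view.shape)  # (256, 256) (256, 64) (256, 64)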
Radar Transformer: An Object Classification Network Based on 4D MMW Imaging Radar
TLDR
An object classification network named Radar Transformer is proposed that takes the attention mechanism as the core and adopts the combination of vector attention and scalar attention to make full use of the spatial information, Doppler information, and reflection intensity information of the radar point cloud to realize the deep fusion of local attention features and global attention features.
DANet: Dimension Apart Network for Radar Object Detection
TLDR
This paper proposes a multi-scale U-Net-style network architecture, termed DANet, for the radar object detection task, and achieves superior detection performance at a much lower computational cost compared to previous pioneering works.
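As a rough illustration of the dimension-wise factorization suggested by the name, the PyTorch sketch below replaces a full 3D convolution over (time, range, azimuth) with three cheaper single-dimension convolutions; the block layout, channel width, and activations are assumptions, not DANet's actual architecture.

import torch
import torch.nn as nn

class DimensionApartBlock(nn.Module):
    """Illustrative factorized 3D convolution over (time, range, azimuth).

    Only a sketch of the dimension-wise factorization idea; the real DANet
    block layout, channel widths, and normalization may differ.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.temporal = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.range_wise = nn.Conv3d(channels, channels, kernel_size=(1, 3, 1), padding=(0, 1, 0))
        self.azimuth_wise = nn.Conv3d(channels, channels, kernel_size=(1, 1, 3), padding=(0, 0, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, time, range, azimuth)
        x = self.act(self.temporal(x))
        x = self.act(self.range_wise(x))
        x = self.act(self.azimuth_wise(x))
        return x

block = DimensionApartBlock(channels=32)
out = block(torch.randn(1, 32, 16, 128, 128))
print(out.shape)  # torch.Size([1, 32, 16, 128, 128])

A full 3x3x3 convolution needs 27*C*C weights per layer, while the three factorized convolutions above need 9*C*C in total, which is where the computational saving of such designs typically comes from.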
ROD2021 Challenge: A Summary for Radar Object Detection Challenge for Autonomous Driving Applications
TLDR
The ROD2021 Challenge is the first public benchmark focusing on this topic; it attracted great attention and participation, and it provides strong value and a better understanding of the radar object detection task for the autonomous vehicle community.
Efficient-ROD: Efficient Radar Object Detection based on Densely Connected Residual Network
TLDR
A lightweight, computationally efficient, and effective network architecture is proposed to overcome the trade-off between computational efficiency and performance in radar object detection tasks.
Look, Radiate, and Learn: Self-supervised Localisation via Radio-Visual Correspondence
TLDR
Results indicate that accurate radio target localisation can be automatically learned from paired radio-visual data without labels, which opens the door for vast data scalability and may prove key to realising the promise of robust radio sensing atop a unified perception-communication cellular infrastructure.
Cross-modal Learning of Graph Representations using Radar Point Cloud for Long-Range Gesture Recognition
TLDR
A novel architecture for a long-range (1m - 2m) gesture recognition solution that leverages a point cloud-based cross-learning approach from camera point cloud to 60-GHz FMCW radar point cloud, which allows learning better representations while suppressing noise.
End-to-end system for object detection from sub-sampled radar data
TLDR
This paper presents an end-to-end signal processing pipeline that relies on sub-sampled radar data to perform object detection in vehicular settings, and shows robust detection based on radar data reconstructed from 20% of the samples under extreme weather conditions such as snow or fog, and in low-illumination night scenes.
Improving Uncertainty of Deep Learning-based Object Classification on Radar Spectra using Label Smoothing
TLDR
This article exploits radar-specific know-how to define soft labels which encourage the classifiers to learn to output high-quality calibrated uncertainty estimates, thereby partially resolving the problem of over-confidence.
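For reference, the snippet below shows plain uniform label smoothing with a soft-label cross-entropy in PyTorch; the article's radar-specific soft labels are more informed than the uniform eps used here, which is shown only to illustrate the mechanism of training a classifier against soft targets.

import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, num_classes, eps=0.1):
    """Cross-entropy against uniformly smoothed soft labels.

    Note: this is generic label smoothing; the article derives radar-specific
    soft labels rather than the uniform eps used here.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    soft = torch.full_like(log_probs, eps / (num_classes - 1))   # non-target mass
    soft.scatter_(-1, targets.unsqueeze(-1), 1.0 - eps)          # target mass
    return -(soft * log_probs).sum(dim=-1).mean()

logits = torch.randn(8, 4)                  # 8 radar ROIs, 4 object classes
targets = torch.randint(0, 4, (8,))
print(smoothed_cross_entropy(logits, targets, num_classes=4))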
…

References

Showing 1–10 of 67 references
RODNet: Radar Object Detection using Cross-Modal Supervision
TLDR
A deep radar object detection network (RODNet) is proposed to effectively detect objects purely from carefully processed radar frequency data in the format of range-azimuth frequency heatmaps (RAMaps), using a novel camera-radar fusion (CRF) strategy.
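The input/output interface described here can be pictured with the toy PyTorch sketch below: a snippet of RAMap frames (real and imaginary parts as channels) is mapped by 3D convolutions to per-class confidence maps over the range-azimuth plane. Layer counts, kernel sizes, and tensor shapes are assumptions; the actual RODNet uses deeper hourglass-style 3D encoder-decoders.

import torch
import torch.nn as nn

class TinyRODNet(nn.Module):
    """Toy stand-in for RODNet's interface: RAMap snippets in, ConfMaps out.

    Input : (batch, 2, T, R, A)  real/imag RAMap frames over T time steps
    Output: (batch, C, T, R, A)  per-class confidence maps (C object classes)
    The actual RODNet uses much deeper 3D hourglass-style variants.
    """
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, num_classes, kernel_size=1),
            nn.Sigmoid(),  # confidence in [0, 1] per class and range-azimuth cell
        )

    def forward(self, ramaps):
        return self.net(ramaps)

model = TinyRODNet(num_classes=3)
confmaps = model(torch.randn(1, 2, 16, 128, 128))
print(confmaps.shape)  # torch.Size([1, 3, 16, 128, 128])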
RODNet: Object Detection under Severe Conditions Using Vision-Radio Cross-Modal Supervision
TLDR
This paper proposes a radio object detection network (RODNet) to detect objects purely from the processed radar data in the format of range-azimuth frequency heatmaps (RAMaps), and introduces a cross-modal supervision framework to train the RODNet.
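The cross-modal supervision boils down to rendering teacher-provided object locations as dense training targets for the radar network. Below is a small NumPy sketch of one plausible form of such targets, per-class Gaussian confidence maps on the range-azimuth grid; the grid size and the fixed sigma are illustrative assumptions (in practice the spread may depend on object class and distance).

import numpy as np

def gaussian_confmap(objects, num_classes, shape=(128, 128), sigma=4.0):
    """Render teacher-derived object locations as per-class Gaussian ConfMaps.

    objects: list of (class_id, range_bin, azimuth_bin) pseudo-labels obtained
    from the vision/radar teacher; sigma and the grid size are illustrative.
    """
    r_idx, a_idx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    confmap = np.zeros((num_classes,) + shape, dtype=np.float32)
    for cls, r, a in objects:
        g = np.exp(-((r_idx - r) ** 2 + (a_idx - a) ** 2) / (2.0 * sigma ** 2))
        confmap[cls] = np.maximum(confmap[cls], g)  # keep the strongest peak per cell
    return confmap

# Example: one pedestrian (class 0) and one car (class 2) in the range-azimuth plane.
target = gaussian_confmap([(0, 40, 60), (2, 90, 30)], num_classes=3)
print(target.shape, target.max())  # (3, 128, 128) 1.0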
Vehicle Detection With Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors
TLDR
This paper demonstrates a deep-learning-based vehicle detection solution which operates on the image-like tensor instead of the point cloud produced by peak detection, and is the first to implement such a system.
CNN Based Road User Detection Using the 3D Radar Cube
TLDR
This letter presents a novel radar based, single-frame, multi-class detection method for moving road users (pedestrian, cyclist, car), which utilizes low-level radar cube data and demonstrates that the method outperforms the state-of-the-art methods both target- and object-wise.
Deep Learning-based Object Classification on Automotive Radar Spectra
TLDR
This work proposes to apply deep Convolutional Neural Networks (CNNs) directly to regions-of-interest (ROI) in the radar spectrum and thereby achieve an accurate classification of different objects in a scene.
Automotive Radar Dataset for Deep Learning Based 3D Object Detection
  • M. Meyer, G. Kuschk
  • 2019 16th European Radar Conference (EuRAD), 2019
TLDR
A radar-centric automotive dataset based on radar, lidar and camera data for the purpose of 3D object detection is presented, and the complete process of generating such a dataset is described.
The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset
TLDR
The target application is autonomous vehicles where this modality is robust to environmental conditions such as fog, rain, snow, or lens flare, which typically challenge other sensor modalities such as vision and LIDAR.
nuScenes: A Multimodal Dataset for Autonomous Driving
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image based benchmark datasets have driven development in computer vision tasks such as object…
The Earth Ain't Flat: Monocular Reconstruction of Vehicles on Steep and Graded Roads from a Moving Camera
TLDR
The proposed approach significantly improves the state-of-the-art for monocular object localization on arbitrarily-shaped roads and transfers from synthetic to real data, without any hyperparameter-/fine-tuning.
Seeing Around Street Corners: Non-Line-of-Sight Detection and Tracking In-the-Wild Using Doppler Radar
TLDR
To untangle noisy indirect and direct reflections, temporal sequences of Doppler velocity and position measurements are learned and fused in a joint NLOS detection and tracking network over time, which is validated on in-the-wild automotive scenes and demonstrated in dynamic automotive environments.
…