RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects

@article{Yang2020RadarNetER,
  title={RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects},
  author={Bin Yang and Runsheng Guo and Mingfeng Liang and Sergio Casas and Raquel Urtasun},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.14366}
}
We tackle the problem of exploiting Radar for perception in the context of self-driving, as Radar provides information complementary to other sensors such as LiDAR or cameras in the form of Doppler velocity. The main challenges of using Radar are noise and measurement ambiguities, which have hindered existing methods based on simple input-level or output-level fusion. To better address this, we propose a new solution that exploits both LiDAR and Radar sensors for perception. Our approach, dubbed…
Full-Velocity Radar Returns by Radar-Camera Fusion
TLDR
This paper presents a closed-form solution for the point-wise, full-velocity estimate of Doppler returns using the corresponding optical flow from camera images, and addresses the association problem between radar returns and camera images with a neural network trained to estimate radar-camera correspondences.
Radar Voxel Fusion for 3D Object Detection
TLDR
A low-level sensor fusion network for 3D object detection is presented that fuses lidar, camera, and radar data; a novel loss is proposed to handle the discontinuity of a simple yaw representation for object detection.
Robust Multimodal Vehicle Detection in Foggy Weather Using Complementary Lidar and Radar Signals
Vehicle detection with visual sensors like lidar and camera is one of the critical functions enabling autonomous driving. While they generate fine-grained point clouds or high-resolution images with…
Deep Multi-modal Object Detection for Autonomous Driving
TLDR
This paper presents the methods proposed in the literature for the different deep multi-modal perception techniques that combine radar information with other sensors.
Research of Target Detection and Classification Techniques Using Millimeter-Wave Radar and Vision Sensors
TLDR
A robust object detection and classification algorithm based on millimeter-wave (MMW) radar and camera fusion is proposed, which is up to 89.42% more accurate than the traditional radar signal algorithm and up to 32.76% higher than Faster R-CNN, especially in environments with low light and strong electromagnetic clutter.
Multi-Modal 3D Object Detection in Autonomous Driving: a Survey
TLDR
This survey reviews recent fusion-based perception research to bridge the gap and motivate future work on multi-sensor fusion-based perception.
An Application-Driven Conceptualization of Corner Cases for Perception in Highly Automated Driving
TLDR
An application-driven view of corner cases in highly automated driving is provided, and an exemplary toolchain for data acquisition and processing is described, highlighting the interfaces of the corner-case detection.
Accurate 3D Object Detection using Energy-Based Models
TLDR
This work designs a differentiable pooling operator for 3D bounding boxes, serving as the core module of the EBM network, and integrates this general approach into the state-of-the-art 3D object detector SA-SSD.
Traffic Flow Parameters Collection under Variable Illumination Based on Data Fusion
  • Shaojie Jin, Ying Gao, Shoucai Jing, F. Hui, Xiangmo Zhao, Jianzhen Liu
  • Computer Science
  • Journal of Advanced Transportation
  • 2021
TLDR
A fusion technique combining millimeter-wave radar data with image data is proposed to compensate for the shortcomings of image-based vehicle detection under complicated lighting, enabling all-day collection of traffic flow parameters and reducing the time-consuming post-calculation of those parameters.
Uncertainty-Aware Vehicle Orientation Estimation for Joint Detection-Prediction Models
TLDR
This work presents a method that extends existing models performing joint object detection and motion prediction, allowing for improved motion prediction and safer autonomous operation, and shows the benefits of the approach by obtaining state-of-the-art performance on the open-sourced nuScenes data set.

References

Showing 1–10 of 51 references
2D Car Detection in Radar Data with PointNets
TLDR
This work presents an approach to detect 2D objects relying solely on sparse radar data using PointNets, which facilitates classification together with bounding-box estimation of objects using a single radar sensor.
A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection
TLDR
The proposed CameraRadarFusion Net (CRF-Net) automatically learns at which level the fusion of the sensor data is most beneficial for the detection result, and is able to outperform a state-of-the-art image-only network on two different datasets.
Radar and Camera Early Fusion for Vehicle Detection in Advanced Driver Assistance Systems
The perception module is at the heart of modern Advanced Driver Assistance Systems (ADAS). To improve the quality and robustness of this module, especially in the presence of environmental noise such as…
Radar/Lidar sensor fusion for car-following on highways
TLDR
A real-time algorithm is presented that enables an autonomous car to comfortably follow other cars at various speeds while keeping a safe distance, along with a velocity and distance regulation approach that depends on the position as well as the velocity of the followed car.
Real Time Lidar and Radar High-Level Fusion for Obstacle Detection and Tracking with evaluation on a ground truth
TLDR
A real-time Lidar/Radar data fusion algorithm for obstacle detection and tracking based on the global nearest neighbour standard filter (GNN) is proposed and embedded in an automotive vehicle as a component generated by real-time multisensor software.
nuScenes: A Multimodal Dataset for Autonomous Driving
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object…
Deep Learning-based Object Classification on Automotive Radar Spectra
TLDR
This work proposes to apply deep Convolutional Neural Networks (CNNs) directly to regions of interest (ROIs) in the radar spectrum and thereby achieves accurate classification of different objects in a scene.
A multi-sensor fusion system for moving object detection and tracking in urban driving environments
TLDR
This paper presents a new moving-object detection and tracking system that extends and improves the earlier system used for the 2007 DARPA Urban Challenge, and introduces a vision sensor.
Semantic radar grids
TLDR
This paper shows that semantic knowledge can be obtained from a radar grid by classifying the contained objects at the cell level, which allows a prior object-extraction step to be omitted and results directly in a semantic radar grid.
LaserNet: An Efficient Probabilistic 3D Object Detector for Autonomous Driving
TLDR
Benchmark results show that this approach has significantly lower runtime than other recent detectors and achieves state-of-the-art performance when compared on a large dataset with enough data to overcome the challenges of training on the range view.