HydraFusion: Context-Aware Selective Sensor Fusion for Robust and Efficient Autonomous Vehicle Perception

@article{Malawade2022HydraFusionCS,
  title={HydraFusion: Context-Aware Selective Sensor Fusion for Robust and Efficient Autonomous Vehicle Perception},
  author={Arnav Vaibhav Malawade and Trier Mortlock and Mohammad Abdullah Al Faruque},
  journal={2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS)},
  year={2022},
  pages={68-79}
}
Although autonomous vehicles (AVs) are expected to revolutionize transportation, robust perception across a wide range of driving contexts remains a significant challenge. Techniques to fuse sensor data from camera, radar, and lidar sensors have been proposed to improve AV perception. However, existing methods are insufficiently robust in difficult driving contexts (e.g., bad weather, low light, sensor obstruction) due to rigidity in their fusion implementations. These methods fall into two… 
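
The truncated abstract does not detail the architecture, but the core idea of context-aware selective fusion can be illustrated as a gating module that scores a set of fusion branches from a context estimate and runs only the most relevant ones. The branch layout, feature sizes, and gating rule below are illustrative assumptions, not the paper's implementation (a minimal PyTorch sketch):

# Minimal sketch of context-aware selective fusion, assuming PyTorch.
# Branch names, feature sizes, and the gating design are illustrative only.
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    def __init__(self, ctx_dim=32, feat_dim=128, num_branches=3, k=2):
        super().__init__()
        # One "branch" per sensor combination (e.g., camera-only, radar-only, camera+radar).
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
             for _ in range(num_branches)]
        )
        # The gate scores each branch from a context vector (weather, lighting, etc.).
        self.gate = nn.Linear(ctx_dim, num_branches)
        self.k = k  # number of branches executed per frame

    def forward(self, ctx, branch_inputs):
        scores = self.gate(ctx)                     # (B, num_branches)
        topk = scores.topk(self.k, dim=-1).indices  # pick the k most relevant branches
        outputs = []
        for b in range(len(self.branches)):
            if (topk == b).any():                   # run only the selected branches
                outputs.append(self.branches[b](branch_inputs[b]))
        return torch.stack(outputs).mean(dim=0)     # fuse the selected branch outputs

ctx = torch.randn(1, 32)
inputs = [torch.randn(1, 128) for _ in range(3)]
fused = SelectiveFusion()(ctx, inputs)

Skipping the unselected branches is what separates this from static early or late fusion: sensor pipelines that are unreliable or unnecessary in the current context are simply not executed.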

Citations

EcoFusion: energy-aware adaptive sensor fusion for efficient autonomous vehicle perception

TLDR
EcoFusion is an energy-aware sensor fusion approach that uses driving context to adapt the fusion method and reduce energy consumption without affecting perception performance, with scenario-specific results reported.
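
A rough sketch of the energy-aware selection idea, assuming per-branch energy costs and a greedy budgeted rule that are illustrative only, not EcoFusion's actual optimization:

# Illustrative sketch of energy-aware branch selection in the spirit of EcoFusion.
# The energy costs, budget, and scoring rule are assumptions, not values from the paper.
def select_branches(gate_scores, energy_cost, budget_mj):
    """Greedily pick branches with the best score until the energy budget is spent."""
    order = sorted(range(len(gate_scores)), key=lambda b: gate_scores[b], reverse=True)
    chosen, spent = [], 0.0
    for b in order:
        if spent + energy_cost[b] <= budget_mj:
            chosen.append(b)
            spent += energy_cost[b]
    return chosen or [order[0]]  # always run at least the best-scoring branch

# Hypothetical example: camera-only, radar-only, and camera+radar branches.
print(select_branches([0.7, 0.2, 0.9], energy_cost=[40.0, 15.0, 70.0], budget_mj=80.0))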

Towards Deep Radar Perception for Autonomous Driving: Datasets, Methods, and Challenges

TLDR
A big picture of the deep radar perception stack is provided, including signal processing, datasets, labelling, data augmentation, and downstream tasks such as depth and velocity estimation, object detection, and sensor fusion.

Romanus: Robust Task Offloading in Modular Multi-Sensor Autonomous Driving Systems

TLDR
This work proposes ROMANUS, a methodology for robust and efficient task offloading on modular ADS platforms with multi-sensor processing pipelines, and implements a runtime solution based on deep reinforcement learning that adapts the operating mode to variations in perceived road-scene complexity, network connectivity, and server load.

SELF-CARE: Selective Fusion with Context-Aware Low-Power Edge Computing for Stress Detection

TLDR
This work proposes SELF-CARE, a fully wrist-based method for stress detection that employs context-aware selective sensor fusion, dynamically adapting based on the sensed data to improve performance while maintaining energy efficiency.

References

Showing 1-10 of 31 references

SelectFusion: A Generic Framework to Selectively Learn Multisensory Fusion

TLDR
This work proposes SelectFusion, an end-to-end selective sensor fusion module that can be applied to useful pairs of sensor modalities, such as monocular images and inertial measurements or depth images and LIDAR point clouds, and investigates how effectively the different fusion strategies attend to the most reliable features.
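
The deterministic soft-mask variant of this idea can be sketched as a learned sigmoid gate over the concatenated modality features; the layer sizes are assumptions, and SelectFusion also studies stochastic hard masks not shown here:

# Sketch of a soft selective-fusion mask, assuming PyTorch; feature sizes are illustrative.
import torch
import torch.nn as nn

class SoftFusion(nn.Module):
    def __init__(self, visual_dim=256, inertial_dim=64):
        super().__init__()
        d = visual_dim + inertial_dim
        self.mask = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())

    def forward(self, visual_feat, inertial_feat):
        z = torch.cat([visual_feat, inertial_feat], dim=-1)
        return z * self.mask(z)  # per-feature reliability weighting before the pose regressor

fused = SoftFusion()(torch.randn(1, 256), torch.randn(1, 64))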

Selective Sensor Fusion for Neural Visual-Inertial Odometry

TLDR
A novel end-to-end selective sensor fusion framework for monocular VIO is proposed, which fuses monocular images and inertial measurements in order to estimate the trajectory whilst improving robustness to real-life issues, such as missing and corrupted data or bad sensor synchronization.

Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles

TLDR
A new hybrid multi-sensor fusion pipeline that performs environment-perception tasks for autonomous vehicles, such as road segmentation, obstacle detection, and tracking, using a proposed encoder-decoder based Fully Convolutional Neural Network together with a traditional Extended Kalman Filter nonlinear state estimator.
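
The state-estimation half of such a hybrid pipeline can be illustrated with a constant-velocity Kalman filter that tracks a CNN detection; the motion model, noise levels, and measurements below are assumptions (the paper's EKF additionally handles nonlinear models):

# Minimal constant-velocity Kalman filter tracking a detected obstacle.
import numpy as np

dt = 0.1                                   # frame period in seconds
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # the detector measures position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                       # process noise
R = np.eye(2) * 0.5                        # detector measurement noise

x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 2.0]), np.array([1.2, 2.1])]:   # detections from the CNN
    x, P = F @ x, F @ P @ F.T + Q                         # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                        # Kalman gain
    x = x + K @ (z - H @ x)                               # update with the detection
    P = (np.eye(4) - K @ H) @ P
print(x)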

A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection

TLDR
The proposed CameraRadarFusionNet (CRF-Net) automatically learns at which network level the fusion of the sensor data is most beneficial for the detection result, and is able to outperform a state-of-the-art image-only network on two different datasets.
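
The "learn where to fuse" idea can be sketched as trainable per-level weights on radar features injected at several network depths; the layer shapes and softmax weighting below are assumptions, not CRF-Net's exact design:

# Sketch of learning at which depth radar fusion helps most, assuming PyTorch.
import torch
import torch.nn as nn

class LevelWeightedFusion(nn.Module):
    def __init__(self, num_levels=3, channels=16):
        super().__init__()
        self.image_blocks = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_levels)]
        )
        self.radar_proj = nn.ModuleList(
            [nn.Conv2d(1, channels, 1) for _ in range(num_levels)]
        )
        # One trainable scalar per level; training shifts weight toward the
        # depth where radar information helps detection the most.
        self.level_weights = nn.Parameter(torch.ones(num_levels))

    def forward(self, image_feat, radar_map):
        w = torch.softmax(self.level_weights, dim=0)
        x = image_feat
        for i, block in enumerate(self.image_blocks):
            x = torch.relu(block(x) + w[i] * self.radar_proj[i](radar_map))
        return x

out = LevelWeightedFusion()(torch.randn(1, 16, 64, 64), torch.randn(1, 1, 64, 64))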

FEEL: Fast, Energy-Efficient Localization for Autonomous Indoor Vehicles

TLDR
FEEL is proposed, an indoor localization system that fuses three low-energy sensors (IMU, UWB, and radar); it provides sub-7 cm localization accuracy with an ultra-low latency of around 3 ms and yields up to 20% energy savings with only a marginal trade-off in accuracy.

RADIATE: A Radar Dataset for Automotive Perception

TLDR
This paper presents the RAdar Dataset In Adverse weather (RADIATE), aiming to facilitate research on object detection, tracking and scene understanding using radar sensing for safe autonomous driving; it is the first public radar dataset which provides high-resolution radar images on public roads with a large number of road actors labelled.

A Review and Comparative Study on Probabilistic Object Detection in Autonomous Driving

TLDR
An overview of practical uncertainty estimation methods in deep learning is provided, and a strict comparative study for probabilistic object detection based on an image detector and three public autonomous driving datasets is presented.
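
One of the practical uncertainty-estimation methods such reviews cover, Monte Carlo dropout, can be sketched as follows; the tiny classifier head and sample count are assumptions:

# Toy sketch of Monte Carlo dropout for predictive uncertainty, assuming PyTorch.
import torch
import torch.nn as nn

head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 3))

def mc_dropout_predict(features, samples=20):
    head.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(head(features), dim=-1) for _ in range(samples)]
        )
    return probs.mean(0), probs.var(0)   # predictive mean and per-class variance

mean, var = mc_dropout_predict(torch.randn(1, 128))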

PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation

TLDR
This work evaluates PointFusion on two distinctive datasets: the KITTI dataset that features driving scenes captured with a lidar-camera setup, and the SUN-RGBD dataset that captures indoor environments with RGB-D cameras.
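
PointFusion's dense fusion can be sketched as concatenating each lidar point's feature with a global image feature and regressing box-corner offsets plus a confidence per point; the feature sizes and head layout below are assumptions:

# Sketch of a PointFusion-style dense fusion head, assuming PyTorch.
import torch
import torch.nn as nn

class DenseFusionHead(nn.Module):
    def __init__(self, point_dim=64, image_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + image_dim, 128), nn.ReLU(),
            nn.Linear(128, 8 * 3 + 1),   # 8 corner offsets (x, y, z) plus a confidence
        )

    def forward(self, point_feats, image_feat):
        n = point_feats.shape[0]
        fused = torch.cat([point_feats, image_feat.expand(n, -1)], dim=-1)
        out = self.mlp(fused)
        corners, score = out[:, :24].reshape(n, 8, 3), out[:, 24]
        return corners, score   # the highest-scoring point anchors the final box

corners, score = DenseFusionHead()(torch.randn(500, 64), torch.randn(1, 128))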

A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research

TLDR
The physical fundamentals, operating principles, and electromagnetic spectrum of the most common sensors used in perception systems (ultrasonic, RADAR, LiDAR, cameras, IMU, GNSS, RTK, etc.) are presented.

Accuracy–Power Controllable LiDAR Sensor System with 3D Object Recognition for Autonomous Vehicle

TLDR
This paper proposes algorithms to improve the inefficient power consumption of conventional LiDAR sensors, reducing it in two ways: controlling the HAR to vary the laser transmission period (TP) of the laser diode (LD) depending on the vehicle's speed, and reducing static power consumption with a sleep mode, depending on the surrounding environment.
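
The speed-dependent duty-cycling idea can be sketched as simple control logic; the thresholds, period values, and sleep condition below are assumptions, not the paper's measured settings:

# Illustrative sketch of speed-dependent LiDAR duty cycling with a sleep mode.
def lidar_transmission_period_ms(speed_kmh, scene_changed):
    if not scene_changed:
        return None          # sleep mode: skip firing entirely this cycle
    if speed_kmh > 60:
        return 10            # fast driving: densest scanning
    if speed_kmh > 20:
        return 20
    return 40                # slow or stationary: sparser scans save power

print(lidar_transmission_period_ms(speed_kmh=45, scene_changed=True))   # -> 20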