Corpus ID: 221340958

GhostBuster: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing

@article{Hau2020GhostBusterLI,
  title={GhostBuster: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing},
  author={Zhongyuan Hau and Soteris Demetriou and Luis Mu{\~n}oz-Gonz{\'a}lez and Emil C. Lupu},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.12008}
}
LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors to erroneously detect "ghost" objects. In this work, we introduce GhostBuster, a set of new techniques embodied in an end-to-end prototype to detect ghost object attacks on 3D detectors. GhostBuster is agnostic of the 3D detector targeted, and only uses LiDAR… 
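The shadow-based intuition can be illustrated with a minimal sketch (purely illustrative, not the paper's actual algorithm; the bird's-eye 2D simplification, function names, and thresholds are all assumptions): a genuine object blocks the laser and leaves a nearly empty "shadow" wedge behind it, whereas a spoofed ghost object lets background returns pass straight through.

```python
import numpy as np

def shadow_region_mask(points, box_center, box_radius, max_range=80.0):
    """Mark points that fall inside the expected LiDAR shadow behind an
    object at `box_center` (bird's-eye view, sensor at the origin).

    A real object blocks the laser, so the wedge behind it should be
    nearly empty; a spoofed "ghost" object casts no such empty wedge.
    """
    center_dist = np.linalg.norm(box_center)
    center_angle = np.arctan2(box_center[1], box_center[0])
    # Angular half-width subtended by the object as seen from the sensor.
    half_width = np.arcsin(min(1.0, box_radius / center_dist))

    dists = np.linalg.norm(points, axis=1)
    angles = np.arctan2(points[:, 1], points[:, 0])
    # Wrap angle differences into [-pi, pi] before taking magnitudes.
    angle_diff = np.abs(np.angle(np.exp(1j * (angles - center_angle))))

    behind = (dists > center_dist + box_radius) & (dists < max_range)
    return behind & (angle_diff < half_width)

def looks_like_ghost(points, box_center, box_radius, density_thresh=5):
    """Flag a detection whose shadow wedge contains too many returns."""
    return int(shadow_region_mask(points, box_center, box_radius).sum()) > density_thresh
```

A detector-agnostic check of this kind needs only the raw point cloud and the candidate 3D box, which matches the abstract's claim that GhostBuster is agnostic of the targeted 3D detector.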

Security Analysis of Camera-LiDAR Semantic-Level Fusion Against Black-Box Attacks on Autonomous Vehicles

This work performs the first analysis of camera-LiDAR fusion under spoofing attacks and the first security analysis of semantic fusion in any AV context, and is the first to analyze the longitudinal impact of perception attacks by demonstrating the effect of multi-frame attacks.

Real-time Detection of Practical Universal Adversarial Perturbations

HyperNeuron simultaneously detects both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses, while introducing a latency of only 0.86 milliseconds per image. This suggests that many realistic and practical universal attacks can be reliably mitigated in real time, showing promise for the robust deployment of machine learning systems.

Jacobian Regularization for Mitigating Universal Adversarial Perturbations

It is empirically verified that Jacobian regularization increases model robustness to UAPs by up to four times while maintaining clean performance. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, showing promise for the robustness of machine learning systems.

Robustness and Transferability of Universal Attacks on Compressed Models

It is observed that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities of these models differ. This finding has practical implications: using different compression techniques can blunt the effectiveness of black-box transfer attacks.

Effects of perturbed depth sensors in autonomous ground vehicles

Cybersecurity of autonomous vehicles is a pertinent concern for both defense and civilian systems. From self-driving cars to autonomous Navy vessels, malfunctions can have devastating…

References

Showing 1–10 of 54 references

PIXOR: Real-time 3D Object Detection from Point Clouds

PIXOR is proposed, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions. It surpasses other state-of-the-art methods, notably in terms of Average Precision (AP), while still running at 10 FPS.

Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures

This work discovers that ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks, constructs the first black-box spoofing attack based on this vulnerability, and proposes SVF, which embeds the neglected physical features into end-to-end learning.

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

This work performs the first security study of LiDAR-based perception in AV settings, and designs an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%.

Remote Attacks on Automated Vehicles Sensors: Experiments on Camera and LiDAR

This paper presents remote attacks on a camera-based system and a LiDAR using commodity hardware, and proposes software and hardware countermeasures that improve sensor resilience against these attacks.

CoDrive: Improving Automobile Positioning via Collaborative Driving

CoDrive, a system that provides a sensor-rich car's accuracy to a legacy car, is proposed; it achieves a 90% and a 30% reduction in cumulative GPS error for legacy and sensor-rich cars respectively, while preserving the shape of the traffic.

Detecting Ground Shadows in Outdoor Consumer Photographs

The key hypothesis is that the types of materials constituting the ground in outdoor scenes are relatively limited, most commonly including asphalt, brick, stone, mud, grass, and concrete, so the appearances of shadows on the ground are not as widely varying as general shadows and can thus be learned from a labelled set of images.

Illusion and Dazzle: Adversarial Optical Channel Exploits Against Lidars for Automotive Applications

A spoofing by relaying attack is presented, which can not only induce illusions in the lidar output but can also cause the illusions to appear closer than the location of a spoofing device.

Disocclusion of 3D LiDAR Point Clouds Using Range Images

This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade.

Vision meets robotics: The KITTI dataset

A novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research, using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras and a high-precision GPS/IMU inertial navigation system.

A survey of cast shadow detection algorithms

...