Adversarial Attacks on Monocular Depth Estimation
@article{Zhang2020AdversarialAO,
  title   = {Adversarial Attacks on Monocular Depth Estimation},
  author  = {Ziqi Zhang and Xinge Zhu and Yingwei Li and Xiangqun Chen and Yao Guo},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2003.10315}
}
Recent advances in deep learning have brought exceptional performance on many computer vision tasks such as semantic segmentation and depth estimation. However, the vulnerability of deep neural networks to adversarial examples has caused grave concerns for real-world deployment. In this paper, we present, to the best of our knowledge, the first systematic study of adversarial attacks on monocular depth estimation, an important task of 3D scene understanding in scenarios such as autonomous…
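The truncated abstract does not spell out the attack formulation, but a standard starting point for attacking a depth regressor is an iterative L∞ perturbation. Below is a minimal, hedged sketch in PyTorch, assuming a generic `model` that maps an RGB image in [0, 1] to a per-pixel depth map; the deviation-from-clean objective, step sizes, and projection are illustrative, not the paper's exact method.

```python
# Minimal PGD-style sketch against a generic monocular depth estimator.
# ASSUMPTIONS: `model` maps a (B, 3, H, W) image in [0, 1] to a depth map;
# the L1 deviation-from-clean objective is illustrative, not the paper's.
import torch
import torch.nn.functional as F

def pgd_depth_attack(model, image: torch.Tensor, eps: float = 8 / 255,
                     alpha: float = 2 / 255, steps: int = 10) -> torch.Tensor:
    model.eval()
    with torch.no_grad():
        clean_depth = model(image)                        # reference prediction
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Maximize deviation of the predicted depth from the clean output.
        loss = F.l1_loss(model(adv), clean_depth)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # ascent step
            adv = image + (adv - image).clamp(-eps, eps)  # L-inf projection
            adv = adv.clamp(0.0, 1.0)                     # valid pixel range
    return adv.detach()
```

A targeted variant would replace the deviation loss with a distance to a chosen target depth map and descend instead of ascend.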
7 Citations
Defending Against Localized Adversarial Attacks on Edge-Deployed Monocular Depth Estimators
- Computer Science · 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA)
- 2020
This work proposes the first defense mechanism against adversarial patches for a regression network, in the context of monocular depth estimation on an edge device; it maintains performance on clean images while restoring near-clean performance on adversarial inputs.
Monocular Depth Estimators: Vulnerabilities and Attacks
- Computer Science · ArXiv
- 2020
The robustness of state-of-the-art monocular depth estimation networks against adversarial attacks is investigated, and a novel deep feature annihilation loss is introduced that corrupts the hidden feature-space representation, forcing the network's decoder to output poor depth maps.
Adversarial Patch Attacks on Monocular Depth Estimation Networks
- Computer Science · IEEE Access
- 2020
This work generates artificial patterns that can fool the target methods into estimating an incorrect depth for the regions where the patterns are placed, and analyzes the behavior of monocular depth estimation under attacks by visualizing the activation levels of the intermediate layers and the regions potentially affected by the adversarial attack.
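As a rough illustration of the patch setting, here is a hedged sketch that optimizes a square pattern pasted at a fixed location so the predicted depth of that region moves toward a chosen value; the placement, size, and target objective are assumptions, not this paper's configuration.

```python
# Hedged sketch of an adversarial patch for depth estimation.
# ASSUMPTIONS: `model` returns a (B, 1, H, W) depth map at input resolution;
# the fixed placement and squared-error target objective are illustrative.
import torch

def optimize_depth_patch(model, image, size=64, top=80, left=80,
                         target_depth=80.0, steps=200, lr=1e-2):
    patch = torch.rand(1, 3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = image.clone()
        x[:, :, top:top + size, left:left + size] = patch   # paste the patch
        region = model(x)[:, :, top:top + size, left:left + size]
        loss = (region - target_depth).pow(2).mean()  # pull region toward target
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                    # keep a printable pattern
    return patch.detach()
```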
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization
- Computer Science · 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
- 2021
An adversarial attack method against deep neural networks (DNNs) for monocular depth estimation, i.e., estimating depth from a single image, is proposed that succeeds in attacking two DNN-based methods trained on indoor and outdoor scenes.
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles
- Computer Science · IEEE Internet of Things Journal
- 2022
This article investigates the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than on the detection precision of deep learning models, and proposes an end-to-end evaluation framework with a set of driving-safety performance metrics.
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses
- Computer Science · ECCV
- 2020
It is shown that a simple universal perturbation can fool a series of state-of-the-art defenses, and it is verified that regionally homogeneous perturbations transfer well across different vision tasks.
Enhancing Transferability of Black-Box Adversarial Attacks via Lifelong Learning for Speech Emotion Recognition Models
- Computer Science · INTERSPEECH
- 2020
This work proposes a method to improve the transferability of black-box adversarial attacks using lifelong learning, employing an atrous convolutional neural network that enables multi-task sequential learning and saves more memory than conventional multi-task learning.
References
Showing 1–10 of 65 references
Analysis of Deep Networks for Monocular Depth Estimation Through Adversarial Attacks with Proposal of a Defense Method
- Computer Science · ArXiv
- 2019
It is shown that the attacks can be defended against by using a saliency map predicted by a CNN trained to be robust to the attacks, providing an effective defense method as well as a clue to understanding the computational mechanism of CNNs for MDE.
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This paper presents what is, to the authors' knowledge, the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets, and shows how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses.
Improving Transferability of Adversarial Examples With Input Diversity
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This work proposes to improve the transferability of adversarial examples by creating diverse input patterns, applying random transformations to the input images at each iteration, and shows that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines.
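The input-diversity idea is easy to sketch: before each gradient computation, randomly resize and zero-pad the image with some probability so the attack does not overfit a single input geometry. A minimal version, with illustrative size parameters rather than the paper's exact configuration:

```python
# Hedged sketch of an input-diversity transform.
# ASSUMPTION: x is a (B, C, high, high) image batch; sizes are illustrative.
import random
import torch
import torch.nn.functional as F

def diverse_input(x: torch.Tensor, low: int = 200, high: int = 224,
                  p: float = 0.5) -> torch.Tensor:
    """With probability p, randomly rescale x and zero-pad back to high x high."""
    if random.random() > p:
        return x
    size = random.randint(low, high - 1)
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = high - size
    top, left = random.randint(0, pad), random.randint(0, pad)
    # F.pad takes (left, right, top, bottom) for the last two dimensions.
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)
```

At each attack iteration one would differentiate loss(model(diverse_input(adv)), y) rather than the loss on the untransformed input.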
Universal Adversarial Perturbations Against Semantic Image Segmentation
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This work presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.
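One plausible reading of the universal targeted setup is a single bounded perturbation optimized over many images toward one fixed target labeling. A sketch under assumptions (a fixed integer label map `target` of shape (1, H, W) and a standard (image, label) `loader`; the authors' optimizer may differ):

```python
# Hedged sketch of a universal targeted perturbation for segmentation.
# ASSUMPTIONS: `model` emits (B, C, H, W) logits; `target` is a (1, H, W)
# long tensor; Adam and the hyperparameters are illustrative choices.
import torch
import torch.nn as nn

def universal_perturbation(model, loader, target, eps=10 / 255,
                           lr=1e-3, epochs=5):
    """Learn one L-inf-bounded delta, shared across all images in `loader`,
    that pushes `model` toward the fixed `target` segmentation."""
    delta = torch.zeros_like(next(iter(loader))[0][:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, _ in loader:
            logits = model((images + delta).clamp(0.0, 1.0))
            loss = ce(logits, target.expand(images.size(0), *target.shape[1:]))
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)   # keep the perturbation barely perceptible
    return delta.detach()
```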
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
- Computer Science · AISec@CCS
- 2017
An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
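The core zeroth-order trick is a symmetric finite difference evaluated coordinate-by-coordinate, so only loss queries are needed. A minimal sketch, assuming `loss_fn` is a scalar black-box built from the target model's confidence scores:

```python
# Hedged sketch of ZOO-style zeroth-order gradient estimation.
# ASSUMPTION: `loss_fn` returns a Python float (queries only, no gradients).
import torch

def zoo_gradient_estimate(loss_fn, x: torch.Tensor, n_coords: int = 128,
                          h: float = 1e-4) -> torch.Tensor:
    """Estimate d loss / d x on `n_coords` random coordinates of `x`."""
    grad = torch.zeros_like(x)
    flat = grad.view(-1)                          # shares storage with grad
    idx = torch.randperm(x.numel())[:n_coords]
    for i in idx:
        e = torch.zeros_like(x).view(-1)
        e[i] = h
        e = e.view_as(x)
        # Symmetric difference on coordinate i: (f(x+h) - f(x-h)) / 2h.
        flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad
```

The full method layers coordinate-wise solver updates and dimensionality-reduction tricks on top of this estimator to keep the query count practical.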
Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms
- Computer Science · ECCV
- 2018
Novel gradient-estimation black-box attacks are proposed for adversaries with query access to the target model’s class probabilities; these attacks do not rely on transferability and decouple the number of queries required to generate each adversarial sample from the dimensionality of the input.
Transferable Adversarial Perturbations
- Computer Science · ECCV
- 2018
It is shown that maximizing the distance between natural images and their adversarial examples in the intermediate feature maps can improve both white-box attacks (with knowledge of the model parameters) and black-box attacks, and that smooth regularization on adversarial perturbations enables transfer across models.
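The stated recipe can be sketched as a single ascent step that pushes intermediate activations apart while penalizing non-smooth perturbations; `feature_extractor` (any truncated network) and the total-variation penalty form are assumptions here, not necessarily the paper's exact loss:

```python
# Hedged sketch of one feature-space attack step with smoothness regularization.
# ASSUMPTIONS: `feature_extractor` is a truncated network; lam is illustrative.
import torch

def feature_space_step(feature_extractor, x, adv, alpha=2 / 255, lam=0.1):
    adv = adv.clone().detach().requires_grad_(True)
    with torch.no_grad():
        clean_feat = feature_extractor(x)         # reference activations
    # Push the adversarial image's intermediate features away from clean ones.
    feat_dist = (feature_extractor(adv) - clean_feat).pow(2).mean()
    delta = adv - x
    # Total-variation penalty keeps the perturbation spatially smooth.
    tv = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean() \
       + (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()
    grad, = torch.autograd.grad(feat_dist - lam * tv, adv)
    return (adv + alpha * grad.sign()).clamp(0.0, 1.0).detach()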
Fast Feature Fool: A data independent approach to universal adversarial perturbations
- Computer Science · BMVC
- 2017
This paper proposes a novel data-independent approach to generate image-agnostic perturbations for a range of CNNs trained for object recognition and shows that these perturbations are transferable across multiple network architectures trained either on the same or different data.
Learning to Attack: Adversarial Transformation Networks
- Computer Science · AAAI
- 2018
It is demonstrated that a separate network can be trained to efficiently attack another fully trained network, and that the generated attacks yield startling insights into the weaknesses of the target network.
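In sketch form, the idea is a small generator trained against a frozen target model; the toy architecture and hyperparameters below are assumptions, not the paper's networks:

```python
# Hedged sketch of an attack network trained to perturb a frozen target model.
# ASSUMPTIONS: toy generator architecture; eps bound and loss are illustrative.
import torch
import torch.nn as nn

class AttackNet(nn.Module):
    """Toy perturbation generator; the original ATN architectures differ."""
    def __init__(self, channels=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh())

    def forward(self, x):
        # Tanh bounds the residual; eps bounds the perturbation magnitude.
        return (x + self.eps * self.body(x)).clamp(0.0, 1.0)

def train_step(attacker, target_model, x, y, opt):
    # The target model is assumed frozen; only the attacker's weights update.
    ce = nn.CrossEntropyLoss()
    loss = -ce(target_model(attacker(x)), y)   # maximize the target's loss
    opt.zero_grad(); loss.backward(); opt.step()
    return -loss.item()
```

Once trained, a single forward pass of `AttackNet` produces an adversarial example, which is what makes this family of attacks efficient at inference time.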
Boosting Adversarial Attacks with Momentum
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A broad class of momentum-based iterative algorithms is proposed to boost adversarial attacks by integrating a momentum term into the iterative attack process, which stabilizes update directions and helps escape poor local maxima during the iterations, resulting in more transferable adversarial examples.
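The momentum update itself is compact: accumulate an L1-normalized gradient and step along its sign. A minimal MI-FGSM-style sketch, assuming 4-D NCHW inputs and a task loss `loss_fn` (e.g., cross-entropy for classification):

```python
# Hedged MI-FGSM-style sketch; hyperparameters are illustrative defaults.
# ASSUMPTION: x is (B, C, H, W) with pixel values in [0, 1].
import torch

def mi_fgsm(model, x, y, loss_fn, eps=16 / 255, steps=10, mu=1.0):
    alpha = eps / steps                   # per-step budget
    g = torch.zeros_like(x)               # accumulated momentum
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(adv), y), adv)
        with torch.no_grad():
            # Normalize each sample's gradient by its L1 norm, then accumulate.
            g = mu * g + grad / grad.abs().sum(
                dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
            adv = x + (adv + alpha * g.sign() - x).clamp(-eps, eps)  # project
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```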