Corpus ID: 214612137

Adversarial Attacks on Monocular Depth Estimation

@article{Zhang2020AdversarialAO,
  title={Adversarial Attacks on Monocular Depth Estimation},
  author={Ziqi Zhang and Xinge Zhu and Yingwei Li and Xiangqun Chen and Yao Guo},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.10315}
}
Recent advances in deep learning have brought exceptional performance on many computer vision tasks such as semantic segmentation and depth estimation. However, the vulnerability of deep neural networks to adversarial examples has caused grave concerns for real-world deployment. In this paper, we present, to the best of our knowledge, the first systematic study of adversarial attacks on monocular depth estimation, an important task of 3D scene understanding in scenarios such as autonomous… 
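The paper's own attack pipeline is not reproduced on this page. As a rough orientation, the following is a minimal sketch of a one-step gradient attack on a depth network, assuming a PyTorch model `depth_model` that maps an image in [0, 1] to a dense depth map; the function name, the L1 objective, and the use of the model's own clean prediction as a reference (so no ground-truth depth is needed) are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def fgsm_depth_attack(depth_model, image, epsilon=8 / 255):
    """One-step L_inf attack that pushes the predicted depth map away
    from the model's own clean prediction (no ground truth required).

    `depth_model` is assumed to map a (1, 3, H, W) image in [0, 1]
    to a (1, 1, H, W) depth map; it is a placeholder, not the
    architecture studied in the paper.
    """
    depth_model.eval()
    with torch.no_grad():
        clean_depth = depth_model(image)          # reference prediction

    adv = image.clone().detach().requires_grad_(True)
    loss = F.l1_loss(depth_model(adv), clean_depth)
    loss.backward()

    # Ascend the loss: maximize deviation from the clean depth map.
    adv = adv + epsilon * adv.grad.sign()
    return adv.clamp(0, 1).detach()
```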

Citations

Defending Against Localized Adversarial Attacks on Edge-Deployed Monocular Depth Estimators
TLDR
This work proposes the first defense mechanism against adversarial patches for a regression network, in the context of monocular depth estimation on an edge device, maintaining performance on clean images while also achieving near-clean-image performance on adversarial inputs.
Monocular Depth Estimators: Vulnerabilities and Attacks
TLDR
The robustness of state-of-the-art monocular depth estimation networks against adversarial attacks is investigated, and a novel deep feature annihilation loss is introduced that corrupts the hidden feature-space representation, forcing the decoder of the network to output poor depth maps.
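The exact form of the feature annihilation loss is not given in the summary above. The sketch below shows one plausible reading, suppressing an intermediate feature map via a forward hook; `model`, `layer`, and the squared-activation loss are placeholders, not that paper's definitions.

```python
import torch

def feature_annihilation_step(model, layer, image, alpha=1 / 255):
    """One attack step that suppresses an intermediate feature map.

    `model` and `layer` (an encoder module to hook) are placeholders;
    the loss here, mean squared activation of the hooked layer, is an
    illustrative stand-in for the paper's feature annihilation loss.
    """
    feats = {}
    handle = layer.register_forward_hook(
        lambda module, inp, out: feats.update(out=out)
    )
    adv = image.clone().detach().requires_grad_(True)
    model(adv)
    handle.remove()

    loss = feats["out"].pow(2).mean()     # "annihilate" = push toward zero
    loss.backward()
    # Descend the activation energy with a small L_inf step.
    adv = (adv - alpha * adv.grad.sign()).clamp(0, 1)
    return adv.detach()
```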
Adversarial Patch Attacks on Monocular Depth Estimation Networks
TLDR
This work generates artificial patterns that can fool the target methods into estimating an incorrect depth for the regions where the patterns are placed, and analyzes the behavior of monocular depth estimation under attacks by visualizing the activation levels of the intermediate layers and the regions potentially affected by the adversarial attack.
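As a rough illustration of a patch attack on depth estimation, the sketch below optimizes a square patch pasted at a fixed location so that the predicted depth inside that region drifts toward a chosen target value; `depth_model`, the `(top, left, size)` region format, and the target-depth loss are assumptions, not that paper's setup.

```python
import torch

def optimize_depth_patch(depth_model, images, region, target_depth,
                         steps=200, lr=0.05):
    """Optimize a square patch so that depth predicted inside `region`
    moves toward `target_depth` (e.g. "far away").

    `images` is an iterable of (1, 3, H, W) tensors in [0, 1];
    `region` is a hypothetical (top, left, size) tuple.
    """
    top, left, size = region
    patch = torch.rand(1, 3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        for img in images:                        # iterate training images
            x = img.clone()
            # Paste the current patch into the chosen region.
            x[:, :, top:top + size, left:left + size] = patch.clamp(0, 1)
            pred = depth_model(x)
            roi = pred[:, :, top:top + size, left:left + size]
            loss = (roi - target_depth).abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.clamp(0, 1).detach()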
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization
TLDR
An adversarial attack method against deep neural networks (DNNs) for monocular depth estimation, i.e., estimating depth from a single image, that succeeds in attacking two DNN-based methods trained on indoor and outdoor scenes.
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles
TLDR
This article investigates the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than on the detection precision of deep learning models, and proposes an end-to-end evaluation framework with a set of driving-safety performance metrics.
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses
TLDR
It is shown that a simple universal perturbation can fool a series of state-of-the-art defenses, and it is verified that regionally homogeneous perturbations can well transfer across different vision tasks.
Enhancing Transferability of Black-Box Adversarial Attacks via Lifelong Learning for Speech Emotion Recognition Models
TLDR
This work proposes a method to improve the transferability of black-box adversarial attacks using lifelong learning, and uses an atrous convolutional neural network model that enables multi-task sequential learning, saving more memory than conventional multi-task learning.

References

SHOWING 1-10 OF 65 REFERENCES
Analysis of Deep Networks for Monocular Depth Estimation Through Adversarial Attacks with Proposal of a Defense Method
TLDR
It is shown that the attacks can be defended by using a saliency map predicted by a CNN trained to be robust to the attacks, providing an effective defense method as well as a clue to understanding the computational mechanism of CNNs for MDE.
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
TLDR
This paper presents what to their knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets and shows how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses.
Improving Transferability of Adversarial Examples With Input Diversity
TLDR
This work proposes to improve the transferability of adversarial examples by creating diverse input patterns, applying random transformations to the input images at each iteration, and shows that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines.
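The core of the input-diversity idea can be sketched as a random resize-and-pad transform applied to the attack input at each iteration; the code below is a minimal PyTorch version in which the 299/330 sizes and the probability `p` follow common ImageNet settings and are assumptions.

```python
import random
import torch
import torch.nn.functional as F

def diverse_input(x, low=299, high=330, p=0.5):
    """Randomly resize `x` and zero-pad it back to a fixed spatial size.

    With probability 1 - p the input is returned unchanged, so the
    attack still sees the original image part of the time.
    """
    if random.random() > p:
        return x                                   # keep original input
    size = random.randint(low, high - 1)
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = high - size
    top, left = random.randint(0, pad), random.randint(0, pad)
    # Pad back to (high, high) with zeros at a random offset.
    return F.pad(resized, (left, pad - left, top, pad - top))
```

In an iterative attack, the gradient is taken through `diverse_input(adv)` rather than `adv` itself, which is what discourages overfitting to the white-box model.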
Universal Adversarial Perturbations Against Semantic Image Segmentation
TLDR
This work presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
TLDR
An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
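The zeroth-order idea can be sketched as a symmetric finite-difference gradient estimate evaluated only on a sampled subset of coordinates; `loss_fn` below is a placeholder for an attack loss computed from the target DNN's confidence scores.

```python
import numpy as np

def zoo_gradient_estimate(loss_fn, x, coords, h=1e-4):
    """Estimate the gradient of a black-box loss at selected coordinates
    via symmetric finite differences (zeroth-order optimization).

    `x` is a flat numpy array (e.g. a flattened image); `coords` is a
    small sampled set of indices, since estimating every coordinate
    would require two queries per pixel.
    """
    grad = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = h
        # Two model queries per coordinate: f(x + h e_i) and f(x - h e_i).
        grad[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad
```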
Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms
TLDR
Novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability and decouple the number of queries required to generate each adversarial sample from the dimensionality of the input are proposed.
Transferable Adversarial Perturbations
TLDR
It is shown that maximizing the distance between natural images and their adversarial examples in the intermediate feature maps can improve both white-box attacks (with knowledge of the model parameters) and black-box attacks, and that smooth regularization on adversarial perturbations enables transfer across models.
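A minimal sketch of the feature-distance objective described above, using a forward hook on one intermediate layer; `layer` is a placeholder module, and the plain L2 norm stands in for whatever distance that paper actually uses.

```python
import torch

def feature_distance_loss(model, layer, clean, adv):
    """Distance between clean and adversarial activations at one
    intermediate layer; an attack ascends this loss w.r.t. `adv`.
    """
    feats = []
    handle = layer.register_forward_hook(
        lambda m, i, o: feats.append(o)
    )
    with torch.no_grad():
        model(clean)                      # feats[0]: clean features
    model(adv)                            # feats[1]: adversarial features
    handle.remove()
    return (feats[1] - feats[0]).norm()
```

A smoothness term on the perturbation (e.g. a total-variation penalty) can be added to this loss, in the spirit of the regularization mentioned in the summary.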
Fast Feature Fool: A data independent approach to universal adversarial perturbations
TLDR
This paper proposes a novel data-independent approach to generate image-agnostic perturbations for a range of CNNs trained for object recognition, and shows that these perturbations are transferable across multiple network architectures trained either on the same or different data.
Learning to Attack: Adversarial Transformation Networks
TLDR
It is demonstrated that a separate network can be trained to efficiently attack another fully trained network, and that the generated attacks yield startling insights into the weaknesses of the target network.
Boosting Adversarial Attacks with Momentum
TLDR
A broad class of momentum-based iterative algorithms to boost adversarial attacks by integrating the momentum term into the iterative process for attacks, which can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples.
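The momentum update can be sketched as follows, assuming a white-box PyTorch classifier; the step size, iteration count, and decay factor `mu` are common defaults rather than values from the paper.

```python
import torch

def mi_fgsm(model, loss_fn, x, y, eps=16 / 255, steps=10, mu=1.0):
    """Momentum iterative attack: accumulate the L1-normalized gradient
    into a momentum buffer and take sign steps inside an L_inf ball.
    """
    alpha = eps / steps
    g = torch.zeros_like(x)
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), y)
        grad, = torch.autograd.grad(loss, adv)
        # Momentum update with L1-normalized gradient (stabilizes direction).
        g = mu * g + grad / grad.abs().sum().clamp_min(1e-12)
        adv = adv.detach() + alpha * g.sign()
        adv = x + (adv - x).clamp(-eps, eps)       # project to L_inf ball
        adv = adv.clamp(0, 1)
    return adv.detach()
```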