From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation

@article{Kapoor2021FromAF,
  title={From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation},
  author={Nikhil Kapoor and Andreas B{\"a}r and Serin Varghese and Jan David Schneider and Fabian H{\"u}ger and Peter Schlicht and Tim Fingscheidt},
  journal={2021 International Joint Conference on Neural Networks (IJCNN)},
  year={2021},
  pages={1-8}
}
Despite recent advancements, deep neural networks are not robust against adversarial perturbations. Many of the proposed adversarial defense approaches use computationally expensive training mechanisms that do not scale to complex real-world tasks such as semantic segmentation, and offer only marginal improvements. In addition, fundamental questions on the nature of adversarial perturbations and their relation to the network architecture are largely understudied. In this work, we study the… 
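As a rough illustration of the frequency-domain defense idea (not the authors' exact filter design or parameter estimation), a minimal Wiener-filter preprocessing sketch might look as follows; the flat noise-power assumption and the per-bin signal-power estimate are simplifications made here for illustration:

```python
# Minimal sketch of a frequency-domain Wiener filter used as an input-preprocessing
# defense. Assumption: the adversarial perturbation behaves like additive noise with
# a roughly flat power spectrum `noise_power`, estimated offline; images are
# single-channel floats in [0, 1].
import numpy as np

def wiener_filter_defense(image: np.ndarray, noise_power: float = 1e-3) -> np.ndarray:
    """Suppress (adversarial) noise by Wiener filtering in the 2D Fourier domain."""
    spectrum = np.fft.fft2(image)
    signal_power = np.abs(spectrum) ** 2 / image.size      # crude per-bin power estimate
    # Wiener gain: attenuate bins where the assumed noise power dominates the signal.
    gain = signal_power / (signal_power + noise_power)
    filtered = np.fft.ifft2(gain * spectrum).real
    return np.clip(filtered, 0.0, 1.0)

# Usage: x_defended = wiener_filter_defense(x_adv)  # then feed x_defended to the model
```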

SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness

A convergence analysis shows that the proposed SegPGD creates more effective adversarial examples than PGD under the same number of attack iterations; SegPGD is then applied as the underlying attack for segmentation adversarial training.
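A hedged sketch of such a segmentation-aware PGD step is given below; the per-pixel loss re-weighting and its schedule are assumptions rather than the paper's exact formulation:

```python
# Hedged sketch of a SegPGD-style attack on a segmentation model (PyTorch).
# Key idea as summarized above: re-weight the per-pixel cross-entropy between
# still-correct and already-misclassified pixels at each PGD iteration.
import torch
import torch.nn.functional as F

def segpgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for t in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                               # (N, C, H, W) class scores
        ce = F.cross_entropy(logits, y, reduction="none")   # per-pixel loss (N, H, W)
        correct = (logits.argmax(dim=1) == y).float()
        lam = t / (2.0 * steps)                             # weighting schedule (assumed)
        loss = ((1.0 - lam) * correct * ce + lam * (1.0 - correct) * ce).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()        # ascend, then project to the eps-ball
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv.detach()
```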

Performance Prediction for Semantic Segmentation by a Self-Supervised Image Reconstruction Decoder

This paper proposes a novel per-image performance prediction for semantic segmentation with no need for additional sensors or additional training data, and demonstrates its effectiveness with new state-of-the-art results on both KITTI and Cityscapes for image-only input methods.

How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?

This work proposes Jacobian frequency regularization, which encourages models' Jacobians to have a larger ratio of low-frequency components, and shows that biasing classifiers towards low (high)-frequency components can bring performance gains against high (low)-frequency corruption and adversarial perturbation, albeit with a tradeoff in performance for low (high)-frequency corruption.
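A hedged sketch of a regularizer in this spirit, penalizing the high-frequency energy of the input gradient (one row of the Jacobian); the mask radius and weighting are assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a frequency-bias regularizer: penalize the high-frequency energy
# of the model's input gradient, nudging the classifier toward low-frequency features.
import torch
import torch.nn.functional as F

def high_freq_jacobian_penalty(model, x, y, radius_frac=0.25):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]    # d loss / d input
    spec = torch.fft.fftshift(torch.fft.fft2(grad), dim=(-2, -1))
    H, W = spec.shape[-2:]
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    dist = ((yy - H // 2) ** 2 + (xx - W // 2) ** 2).float().sqrt().to(spec.device)
    high = (dist > radius_frac * min(H, W)).float()              # 1 outside the low-pass disc
    energy = spec.abs() ** 2
    return (energy * high).sum() / (energy.sum() + 1e-12)        # high-frequency energy ratio

# Training (assumed usage): total_loss = task_loss + lambda_reg * high_freq_jacobian_penalty(model, x, y)
```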

Detecting Backdoored Neural Networks with Structured Adversarial Attacks

This work proposes detecting backdoored neural networks by means of structured adversarial attacks.

References


Universal Adversarial Perturbations Against Semantic Image Segmentation

This work presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.
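A hedged sketch of how such a targeted universal perturbation could be accumulated over a dataset; the optimization schedule, projection, and data-loader interface are assumptions for illustration:

```python
# Hedged sketch: craft one shared perturbation `delta` that pushes a segmentation
# network toward a fixed target segmentation `y_target` (shape (1, H, W), long)
# on arbitrary inputs drawn from `loader`.
import torch
import torch.nn.functional as F

def universal_target_perturbation(model, loader, y_target, eps=10/255, alpha=1/255, epochs=1):
    delta = torch.zeros_like(next(iter(loader))[0][:1])          # one shared (1, C, H, W) perturbation
    for _ in range(epochs):
        for x, _ in loader:
            delta = delta.detach().requires_grad_(True)
            target = y_target.expand(x.size(0), -1, -1)          # broadcast target over the batch
            loss = F.cross_entropy(model(x + delta), target)
            grad = torch.autograd.grad(loss, delta)[0]
            delta = (delta - alpha * grad.sign()).clamp(-eps, eps)   # descend toward the target, project
    return delta.detach()
```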

BlurNet: Defense by Filtering the Feature Maps

  • Ravi Raju, M. Lipasti
  • Computer Science
    2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
  • 2020
This paper proposes BlurNet, a defense against the RP2 attack, and motivates the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that high-frequency noise is introduced into the input image by the RP2 algorithm.
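A hedged sketch of a BlurNet-style low-pass filter applied to the first-layer feature maps; the box-blur kernel and its placement are assumptions:

```python
# Hedged sketch of filtering first-layer feature maps with a fixed depthwise low-pass
# kernel to suppress high-frequency perturbation content.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurredFeatures(nn.Module):
    def __init__(self, first_conv: nn.Conv2d, kernel_size: int = 3):
        super().__init__()
        self.first_conv = first_conv
        k = torch.ones(kernel_size, kernel_size) / (kernel_size ** 2)   # box blur (assumed)
        c = first_conv.out_channels
        self.register_buffer("kernel", k.expand(c, 1, kernel_size, kernel_size).clone())
        self.pad = kernel_size // 2
        self.groups = c

    def forward(self, x):
        feats = self.first_conv(x)
        # Depthwise low-pass filtering of each feature map.
        return F.conv2d(feats, self.kernel, padding=self.pad, groups=self.groups)
```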

A study of the effect of JPG compression on adversarial images

It is found that JPG compression often reverses the drop in classification accuracy to a large extent, but not always, and as the magnitude of the perturbations increases, JPG recompression alone is insufficient to reverse the effect.
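A minimal sketch of this re-compression defense; the JPEG quality factor is an assumption:

```python
# Minimal sketch of the JPG re-compression defense: re-encode a (possibly adversarial)
# image as JPEG before classification. As noted above, this fails for larger perturbations.
import io
import numpy as np
from PIL import Image

def jpeg_defense(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """image: HxWx3 uint8 array; returns the JPEG-recompressed image."""
    buf = io.BytesIO()
    Image.fromarray(image).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))
```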

Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks

There is strong evidence that pixel discretization is unlikely to work on all but the simplest of the datasets, and arguments present insights why some other preprocessing defenses may be insecure.
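A minimal sketch of pixel discretization as preprocessing; the number of quantization levels is an assumption:

```python
# Minimal sketch of pixel discretization: snap each pixel to a small set of levels so
# that small adversarial changes are quantized away. The paper argues this is
# insufficient on all but the simplest datasets.
import numpy as np

def discretize_pixels(image: np.ndarray, levels: int = 8) -> np.ndarray:
    """image in [0, 1]; returns the image quantized to `levels` values per channel."""
    return np.round(image * (levels - 1)) / (levels - 1)
```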

An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense

This paper presents an adaptive view of the issue via evaluating various test-time smoothing defense against white-box untargeted adversarial examples, and illustrates the non-monotonic relation between adversarial attacks and smoothing defenses.
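A minimal sketch of a test-time smoothing defense (Gaussian blur on the input); kernel size and sigma are assumptions:

```python
# Minimal sketch of test-time smoothing: blur the input before prediction. As noted
# above, the benefit is non-monotonic in attack and smoothing strength.
import torchvision.transforms.functional as TF

def smooth_then_predict(model, x, kernel_size=5, sigma=1.0):
    return model(TF.gaussian_blur(x, kernel_size=[kernel_size, kernel_size], sigma=sigma))
```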

Deflecting Adversarial Attacks with Pixel Deflection

This paper presents an algorithm to process an image so that classification accuracy is significantly preserved in the presence of adversarial manipulations, and demonstrates experimentally that the combination of these techniques enables the effective recovery of the true class, against a variety of robust attacks.
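A hedged sketch of the pixel-deflection step alone (the full method additionally applies a wavelet-based denoising step, omitted here); window size and deflection count are assumptions:

```python
# Hedged sketch of pixel deflection: replace a random subset of pixels with a randomly
# chosen neighbor, breaking up locally structured adversarial noise.
import numpy as np

def pixel_deflection(image: np.ndarray, num_deflections: int = 200, window: int = 10):
    out = image.copy()
    h, w = image.shape[:2]
    rng = np.random.default_rng()
    for _ in range(num_deflections):
        r, c = rng.integers(0, h), rng.integers(0, w)
        dr = rng.integers(-window, window + 1)
        dc = rng.integers(-window, window + 1)
        out[r, c] = image[np.clip(r + dr, 0, h - 1), np.clip(c + dc, 0, w - 1)]
    return out
```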

Mitigating adversarial effects through randomization

This paper proposes to utilize randomization at inference time to mitigate adversarial effects, and uses two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input image in a random manner.
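A minimal sketch of the two randomization operations at inference time; the size range is an assumption taken for illustration:

```python
# Minimal sketch of the randomization defense: randomly resize the input and then
# zero-pad it back to a fixed size before feeding it to the classifier (PyTorch).
import torch
import torch.nn.functional as F

def random_resize_pad(x: torch.Tensor, out_size: int = 331, low: int = 299) -> torch.Tensor:
    """x: (N, C, H, W) image batch; returns a randomly resized and padded batch."""
    new_size = int(torch.randint(low, out_size + 1, (1,)).item())
    x = F.interpolate(x, size=(new_size, new_size), mode="bilinear", align_corners=False)
    pad_total = out_size - new_size
    left = int(torch.randint(0, pad_total + 1, (1,)).item())
    top = int(torch.randint(0, pad_total + 1, (1,)).item())
    return F.pad(x, (left, pad_total - left, top, pad_total - top))  # zero padding
```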

Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser

High-level representation guided denoiser (HGD) is proposed as a defense for image classification by using a loss function defined as the difference between the target model's outputs activated by the clean image and denoised image.
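A hedged sketch of such a guided-denoiser training loss; the choice of high-level feature (e.g., the logits of the frozen target model) and the L1 distance are assumptions:

```python
# Hedged sketch of an HGD-style objective: train a denoiser so that the target model's
# high-level response to the denoised image matches its response to the clean image.
import torch

def hgd_loss(denoiser, feature_extractor, x_clean, x_adv):
    x_denoised = denoiser(x_adv)
    with torch.no_grad():
        target_feats = feature_extractor(x_clean)        # reference activations (frozen model)
    return (feature_extractor(x_denoised) - target_feats).abs().mean()
```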

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
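A minimal sketch of the PGD attack and an adversarial-training step in this robust-optimization view; step sizes and iteration counts are assumptions:

```python
# Minimal sketch of PGD (projected gradient descent) and adversarial training (PyTorch).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start in the eps-ball
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Minimize the loss on the (approximate) worst-case examples found by PGD."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```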

Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations

This paper presents a novel, generalizable and data-free approach for crafting universal adversarial perturbations, and shows that current deep learning models are at increased risk, since the objective generalizes across multiple tasks without requiring training data to craft the perturbation.
...