• Corpus ID: 195317007

A Fourier Perspective on Model Robustness in Computer Vision

@inproceedings{Yin2019AFP,
  title={A Fourier Perspective on Model Robustness in Computer Vision},
  author={Dong Yin and Raphael Gontijo Lopes and Jonathon Shlens and Ekin Dogus Cubuk and Justin Gilmer},
  booktitle={NeurIPS},
  year={2019}
}
Achieving robustness to distributional shift is a longstanding and challenging goal of computer vision. […] Key Result: Towards this end, we observe that AutoAugment, a recently proposed data augmentation policy optimized for clean accuracy, achieves state-of-the-art robustness on the CIFAR-10-C and ImageNet-C benchmarks.
On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness
TLDR
A feature space for image transforms is developed, and a new measure in this space between augmentations and corruptions, called the Minimal Sample Distance, is used to demonstrate a strong correlation between similarity and performance.
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation
TLDR
This work introduces Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image, leading to reduced sensitivity to high-frequency noise (similar to Gaussian) while retaining the ability to exploit relevant high-frequency information in the image.
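As a rough illustration of the augmentation described above, the following NumPy sketch adds Gaussian noise to a single randomly placed square patch; the patch size, noise scale, and sampling details are illustrative assumptions rather than the paper's exact settings.

```python
# Hedged sketch of a Patch Gaussian-style augmentation (NumPy only).
import numpy as np

def patch_gaussian(image, patch_size=16, sigma=0.1, rng=None):
    """Add Gaussian noise to one randomly located square patch of `image`.

    image: float array in [0, 1] with shape (H, W, C).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = image.shape
    # Sample the patch centre uniformly, then clip the patch to the image.
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y0, y1 = max(cy - patch_size // 2, 0), min(cy + patch_size // 2, h)
    x0, x1 = max(cx - patch_size // 2, 0), min(cx + patch_size // 2, w)
    out = image.copy()
    noise = rng.normal(0.0, sigma, size=out[y0:y1, x0:x1].shape)
    out[y0:y1, x0:x1] = np.clip(out[y0:y1, x0:x1] + noise, 0.0, 1.0)
    return out

augmented = patch_gaussian(np.random.rand(32, 32, 3).astype(np.float32))
```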
Fourier-Based Augmentations for Improved Robustness and Uncertainty Calibration
TLDR
This work successfully hardens a model against Fourier-based attacks, while producing superior-to-AugMix accuracy and calibration results on both the CIFAR-10-C and CIFAR-100-C datasets.
How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?
TLDR
This work proposes a Jacobian frequency regularization that encourages models' Jacobians to have a larger ratio of low-frequency components, and shows that biasing classifiers towards low (high)-frequency components can bring performance gains against high (low)-frequency corruptions and adversarial perturbations, albeit with a tradeoff in performance against low (high)-frequency corruptions.
Improving robustness against common corruptions with frequency biased models
TLDR
A mixture of two expert models specializing in high- and low-frequency robustness, respectively, is introduced, and a new regularization scheme that minimizes the total variation (TV) of convolutional feature maps is proposed to increase high-frequency robustness.
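The regularizer described above penalizes high-frequency content in feature maps; a minimal PyTorch sketch of one plausible anisotropic TV penalty follows. Where the penalty is applied in the network and the loss weight are hypothetical choices here, not the cited paper's settings.

```python
# Minimal sketch of a total-variation (TV) penalty on convolutional feature maps.
import torch

def feature_map_tv(features: torch.Tensor) -> torch.Tensor:
    """Anisotropic TV of a feature-map batch with shape (N, C, H, W)."""
    dh = (features[:, :, 1:, :] - features[:, :, :-1, :]).abs().mean()
    dw = (features[:, :, :, 1:] - features[:, :, :, :-1]).abs().mean()
    return dh + dw

# Example: add the penalty to a task loss with an illustrative weight of 0.1.
feats = torch.randn(8, 64, 32, 32, requires_grad=True)
task_loss = feats.mean()                       # placeholder for the real task loss
total_loss = task_loss + 0.1 * feature_map_tv(feats)
total_loss.backward()
```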
Adversarial Robustness Across Representation Spaces
TLDR
This work designs a theoretically sound algorithm with formal guarantees for training deep neural networks that are simultaneously robust to perturbations applied in multiple natural representation spaces, and demonstrates the effectiveness of this approach on standard image classification datasets.
Revisiting Batch Normalization for Improving Corruption Robustness
TLDR
This work interprets corruption robustness as a domain shift and proposes to rectify batch normalization (BN) statistics to improve model robustness, perceiving the shift from the clean domain to the corruption domain as a style shift represented by the BN statistics.
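A minimal sketch of one way to rectify BN statistics at test time is shown below, assuming PyTorch semantics: running statistics are reset and re-estimated from a few unlabeled corrupted batches. The model, data, and number of adaptation batches are placeholders, and the cited paper's exact procedure may differ in detail.

```python
# Hedged sketch of re-estimating BatchNorm statistics on corrupted data.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)

# Reset running statistics and let them be re-estimated from corrupted inputs.
for m in model.modules():
    if isinstance(m, torch.nn.BatchNorm2d):
        m.reset_running_stats()
        m.momentum = None  # use the cumulative moving average

model.train()  # BN updates running stats only in train mode
with torch.no_grad():
    for _ in range(10):                                 # a few corrupted batches
        corrupted_batch = torch.randn(32, 3, 224, 224)  # stand-in for real data
        model(corrupted_batch)

model.eval()  # evaluate with the rectified BN statistics
```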
Understanding robustness and generalization of artificial neural networks through Fourier masks
TLDR
An algorithm is developed to learn modulatory masks highlighting the essential input frequencies needed to preserve a trained network's performance, and results indicate that these essential frequencies are effectively the ones used to achieve generalization in the first place.
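As a hedged illustration of masking input frequencies, the sketch below applies a fixed binary low-pass mask in the Fourier domain; the cited work learns the mask rather than hand-crafting it, so the radius and mask shape here are purely assumptions.

```python
# Illustrative sketch: keep only low frequencies of a grayscale image via FFT.
import numpy as np

def apply_frequency_mask(image, radius=8):
    """Keep only frequencies within `radius` of the spectrum centre.

    image: 2-D grayscale array (H, W).
    """
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.mgrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    mask = (dist <= radius).astype(float)               # binary low-pass mask
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

low_passed = apply_frequency_mask(np.random.rand(32, 32))
```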
NoisyMix: Boosting Model Robustness to Common Corruptions
TLDR
NoisyMix is a novel training scheme that promotes stability and leverages noisy augmentations in input and feature space to improve both model robustness and in-domain accuracy; theory is provided to understand the implicit regularization and robustness of NoisyMix.
Deeper Insights into ViTs Robustness towards Common Corruptions
TLDR
This paper investigates how CNN-like architectural designs and CNN-based data augmentation strategies impact ViTs' robustness to common corruptions through extensive and rigorous benchmarking, and introduces a novel conditional method enabling input-varied augmentations from two angles.
...
...

References

SHOWING 1-10 OF 30 REFERENCES
Adversarial Examples Are a Natural Consequence of Test Error in Noise
TLDR
It is suggested that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions, and that future adversarial defenses should consider evaluating the robustness of their methods to distributional shift with benchmarks such as ImageNet-C.
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
TLDR
This paper standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.
Measuring the tendency of CNNs to Learn Surface Statistical Regularities
Deep CNNs are known to exhibit the following peculiarity: on the one hand they generalize extremely well to a test set, while on the other hand they are extremely sensitive to so-called adversarial perturbations.
Excessive Invariance Causes Adversarial Vulnerability
TLDR
This work identifies an insufficiency of the standard cross-entropy loss as a reason for deep networks' striking failures on out-of-distribution inputs and provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities.
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions
  Yusuke Tsuzuku, Issei Sato. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
TLDR
The principal finding is that convolutional networks are sensitive to the directions of Fourier basis functions, and an algorithm is proposed to create shift-invariant universal adversarial perturbations available in black-box settings.
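The probe referred to above perturbs images along individual 2-D Fourier basis directions; a minimal NumPy sketch is given below, with the frequency indices and perturbation norm chosen arbitrarily for illustration.

```python
# Sketch of perturbing an image along a single 2-D Fourier basis direction.
import numpy as np

def fourier_basis_perturbation(h, w, i, j, eps=4.0 / 255.0):
    """Real-valued plane wave at frequency (i, j), scaled to L2 norm `eps`."""
    spectrum = np.zeros((h, w), dtype=complex)
    spectrum[i % h, j % w] = 1.0
    spectrum[(-i) % h, (-j) % w] = 1.0       # conjugate bin keeps the signal real
    basis = np.real(np.fft.ifft2(spectrum))
    return eps * basis / np.linalg.norm(basis)

image = np.random.rand(32, 32)               # stand-in grayscale image
perturbed = np.clip(image + fourier_basis_perturbation(32, 32, i=3, j=5), 0, 1)
```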
AutoAugment: Learning Augmentation Policies from Data
TLDR
This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
Generalisation in humans and deep neural networks
TLDR
The robustness of humans and current convolutional deep neural networks on object recognition under twelve different types of image degradations is compared and it is shown that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on.
Why do deep convolutional networks generalize so poorly to small image transformations?
TLDR
The results indicate that the problem of ensuring invariance to small image transformations in neural networks while preserving high accuracy remains unsolved.
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples
TLDR
The experimental results show that the proposed “feature distillation” can significantly surpass the latest input-transformation-based mitigations such as Quilting and TV Minimization in three aspects: defense efficiency, accuracy of benign images after defense, and processing time per image.
...
...