Corpus ID: 195317007

A Fourier Perspective on Model Robustness in Computer Vision

@article{Yin2019AFP,
  title={A Fourier Perspective on Model Robustness in Computer Vision},
  author={Dong Yin and Raphael Gontijo Lopes and Jonathon Shlens and Ekin D. Cubuk and Justin Gilmer},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.08988}
}
Achieving robustness to distributional shift is a longstanding and challenging goal of computer vision. [...] Towards this end, we observe that AutoAugment, a recently proposed data augmentation policy optimized for clean accuracy, achieves state-of-the-art robustness on the CIFAR-10-C and ImageNet-C benchmarks.
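
The Fourier perspective in the title is typically probed by perturbing test images with single Fourier-basis vectors of a fixed L2 norm and recording the model's error at each frequency. Below is a minimal numpy sketch of that idea, assuming square grayscale images in [0, 1]; the names fourier_basis_image, fourier_error_rate, and model_predict, and the norm eps, are illustrative choices, not the authors' released code.

import numpy as np

def fourier_basis_image(n, i, j):
    # n x n real-valued image whose spectrum is non-zero only at
    # frequency (i, j) and its conjugate-symmetric counterpart.
    freq = np.zeros((n, n), dtype=complex)
    freq[i % n, j % n] = 1.0
    freq[-i % n, -j % n] = 1.0  # Hermitian symmetry keeps the image real
    basis = np.fft.ifft2(freq).real
    return basis / np.linalg.norm(basis)  # normalize to unit L2 norm

def fourier_error_rate(model_predict, images, labels, i, j, eps=4.0):
    # Test error when every image is perturbed by the (i, j) Fourier
    # mode scaled to L2 norm eps. model_predict is a placeholder for
    # any classifier returning predicted labels.
    n = images.shape[-1]
    noise = eps * fourier_basis_image(n, i, j)
    preds = model_predict(np.clip(images + noise, 0.0, 1.0))
    return float(np.mean(preds != labels))

Sweeping (i, j) over all frequencies and plotting the resulting error rates yields the kind of per-frequency sensitivity heat map this line of work analyzes.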
Citations

Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation
Improving robustness against common corruptions with frequency biased models
Adversarial Robustness Across Representation Spaces
Natural Perturbed Training for General Robustness of Neural Network Classifiers
An Effective Anti-Aliasing Approach for Residual Networks

References

Showing 1-10 of 30 references
Adversarial Examples Are a Natural Consequence of Test Error in Noise
Measuring the tendency of CNNs to Learn Surface Statistical Regularities
Excessive Invariance Causes Adversarial Vulnerability
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions (Yusuke Tsuzuku and Issei Sato, CVPR 2019)
AutoAugment: Learning Augmentation Policies from Data
Why do deep convolutional networks generalize so poorly to small image transformations?
Towards Deep Learning Models Resistant to Adversarial Attacks
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples