Corpus ID: 56657912

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

@article{Hendrycks2019BenchmarkingNN,
  title={Benchmarking Neural Network Robustness to Common Corruptions and Perturbations},
  author={Dan Hendrycks and Thomas G. Dietterich},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.12261}
}
In this paper we establish rigorous benchmarks for image classifier robustness. [...] Together our benchmarks may aid future work toward networks that robustly generalize.
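The paper's benchmark corrupts clean test images with common perturbations (Gaussian noise, blur, weather, etc.) at five severity levels and measures how classifier accuracy degrades. A minimal sketch of one such corruption is below; the severity constants and the function name `gaussian_noise` are illustrative assumptions, not the paper's official implementation.

```python
import numpy as np

def gaussian_noise(image, severity=1):
    """Add zero-mean Gaussian noise to an image in [0, 1].

    Five severity levels in the spirit of the common-corruptions
    benchmark; the sigma values here are illustrative only.
    """
    scales = [0.08, 0.12, 0.18, 0.26, 0.38]
    sigma = scales[severity - 1]
    noisy = image + np.random.normal(0.0, sigma, size=image.shape)
    # Keep pixel values in the valid [0, 1] range after corruption.
    return np.clip(noisy, 0.0, 1.0)

# Corrupt a dummy 32x32 RGB image at each severity level.
clean = np.full((32, 32, 3), 0.5)
for s in range(1, 6):
    corrupted = gaussian_noise(clean, severity=s)
    print(s, round(float(np.abs(corrupted - clean).mean()), 3))
```

A full evaluation would average the classifier's error over all corruption types and severities, normalized against a baseline model, to produce the paper's aggregate robustness score.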
552 Citations
