Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers

@article{Siedel2022UtilizingCS,
  title={Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers},
  author={George J. Siedel and Silvia Vock and Andrey Morozov and Stefan Voss},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.13405}
}
Robustness is a fundamental pillar of Machine Learning (ML) classifiers, substantially determining their reliability. Methods for assessing classifier robustness are therefore essential. In this work, we address the challenge of evaluating corruption robustness in a way that allows comparability and interpretability on a given dataset. We propose a test data augmentation method that uses a robustness distance ε derived from the dataset's minimal class separation distance. The resulting MSCR…
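
As a rough illustration of the idea (not the authors' exact procedure), the sketch below computes a dataset's minimal class separation distance and builds a corrupted copy of a test set with random perturbations whose L2 norm stays within a distance ε derived from it. The function names and the choice of uniform-in-ball noise are illustrative assumptions.

import numpy as np

def min_class_separation(X, y):
    """Smallest pairwise L2 distance between samples of different classes.
    O(n^2) for illustration; use nearest-neighbor structures for large datasets."""
    X = X.reshape(len(X), -1).astype(np.float64)
    best = np.inf
    for i in range(len(X)):
        diff = X[y != y[i]] - X[i]
        best = min(best, np.sqrt((diff ** 2).sum(axis=1)).min())
    return best

def augment_within_ball(X, eps, n_copies=5, rng=None):
    """Create corrupted copies of each test point with random noise of L2 norm <= eps."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_copies):
        noise = rng.normal(size=X.shape).reshape(len(X), -1)
        noise /= np.linalg.norm(noise, axis=1, keepdims=True)                   # unit directions
        radii = eps * rng.uniform(size=(len(X), 1)) ** (1.0 / noise.shape[1])   # uniform in the L2 ball
        out.append(X + (noise * radii).reshape(X.shape))
    return np.concatenate(out)

# Usage sketch: eps could be set to half the minimal separation so corrupted points
# stay closer to their original sample than to any differently labeled sample.
# eps = 0.5 * min_class_separation(X_train, y_train)
# X_test_noisy = augment_within_ball(X_test, eps)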


References

Showing 1–10 of 24 references

Analysis of classifiers’ robustness to adversarial perturbations

TLDR
A general upper bound on the robustness of classifiers to adversarial perturbations is established, and the phenomenon of adversarial instability is suggested to be due to the low flexibility of classifiers compared to the difficulty of the classification task (captured mathematically by the distinguishability measure).

Statistically Robust Neural Network Classification

TLDR
A statistically robust risk (SRR) framework is introduced which measures robustness in expectation over both network inputs and a corruption distribution, and it is shown both theoretically and empirically that the SRR scales to higher-dimensional networks while providing superior generalization performance compared with comparable adversarial risks.
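
A minimal sketch of how a risk of this kind could be estimated in practice, assuming a generic prediction function and an additive-Gaussian corruption distribution; the function names are hypothetical and the paper's formulation is more general.

import numpy as np

def statistically_robust_risk(predict_fn, X, y, corrupt_fn, n_samples=20, rng=None):
    """Monte Carlo estimate of the expected 0-1 loss over inputs and a corruption distribution.
    predict_fn: maps an array of inputs to predicted labels (assumed interface).
    corrupt_fn: draws one random corruption of a batch."""
    rng = np.random.default_rng(rng)
    errors = 0.0
    for _ in range(n_samples):
        preds = predict_fn(corrupt_fn(X, rng))
        errors += np.mean(preds != y)
    return errors / n_samples

# Example corruption distribution: isotropic Gaussian noise with standard deviation sigma.
def gaussian_corruption(X, rng, sigma=0.1):
    return X + sigma * rng.normal(size=X.shape)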

Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation

TLDR
This work introduces Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image, leading to reduced sensitivity to high-frequency noise (similar to Gaussian augmentation) while retaining the ability to exploit relevant high-frequency information in the image.
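
A small sketch of the augmentation as described in the summary, assuming float images in [0, 1]; the patch size and noise scale are illustrative defaults, not the paper's settings.

import numpy as np

def patch_gaussian(image, patch_size=16, sigma=0.3, rng=None):
    """Add Gaussian noise to one randomly placed square patch of an HxWxC image in [0, 1].
    Outside the patch the image is left untouched; values are clipped back to [0, 1]."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)   # patch center, clipped at the borders
    half = patch_size // 2
    y0, y1 = max(0, cy - half), min(h, cy + half)
    x0, x1 = max(0, cx - half), min(w, cx + half)
    noisy = image.copy()
    noisy[y0:y1, x0:x1] += sigma * rng.normal(size=noisy[y0:y1, x0:x1].shape)
    return np.clip(noisy, 0.0, 1.0)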

A Closer Look at Accuracy vs. Robustness

TLDR
It is proved that real image datasets are actually separated, and it is argued that robustness and accuracy should both be achievable for benchmark datasets through locally Lipschitz functions; hence, there should be no inherent trade-off between robustness and accuracy.

Unlabeled Data Improves Adversarial Robustness

TLDR
It is proved that unlabeled data bridges the complexity gap between standard and robust classification: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy.
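
A compact sketch of one self-training round in the spirit of the summary, assuming a scikit-learn-style classifier interface; the paper's procedure additionally trains the final model with a robust objective, which is omitted here.

import numpy as np

def self_training(make_model, X_labeled, y_labeled, X_unlabeled, confidence=0.9):
    """One round of pseudo-labeling: fit on labeled data, label the unlabeled pool,
    keep confident predictions, and refit on the enlarged training set.
    make_model() is assumed to return a fresh classifier with fit / predict_proba."""
    model = make_model().fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_unlabeled)
    keep = proba.max(axis=1) >= confidence
    pseudo_y = model.classes_[proba.argmax(axis=1)[keep]]
    X_all = np.vstack([X_labeled, X_unlabeled[keep]])
    y_all = np.concatenate([y_labeled, pseudo_y])
    return make_model().fit(X_all, y_all)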

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

TLDR
This paper standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.
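
For reference, a corruption-error metric in the ImageNet-C style can be computed along these lines; the commented numbers are made up and the dictionary layout is an assumption.

import numpy as np

def mean_corruption_error(err, baseline_err):
    """For each corruption type, sum the classifier's error rate over severities 1-5 and
    normalize by a baseline model's summed error; mCE is the average over corruption types.
    err, baseline_err: dicts mapping corruption name -> list of 5 error rates."""
    ces = [sum(err[c]) / sum(baseline_err[c]) for c in err]
    return float(np.mean(ces))

# Example (hypothetical numbers):
# err = {"gaussian_noise": [0.3, 0.4, 0.5, 0.6, 0.7], "fog": [0.2, 0.3, 0.4, 0.5, 0.6]}
# baseline = {"gaussian_noise": [0.4, 0.5, 0.6, 0.7, 0.8], "fog": [0.3, 0.4, 0.5, 0.6, 0.7]}
# print(mean_corruption_error(err, baseline))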

Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study

TLDR
The minimum distance of data points to the decision boundary, and how this margin evolves over the training of a deep neural network, is studied; it is observed that the decision boundary moves closer to natural images over training.
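
One simple way to approximate such a margin empirically, assuming a batched hard-label prediction function, is a binary search for the label flip along a chosen direction, as sketched below; taking the minimum over many directions gives a rough upper bound on the distance to the boundary.

import numpy as np

def boundary_distance(predict_fn, x, direction, max_dist=10.0, tol=1e-3):
    """Estimate the distance from x to the decision boundary along a given direction
    by binary search on the step size (assumes the label flips somewhere before max_dist).
    predict_fn maps a batch of inputs to hard labels (assumed interface)."""
    direction = direction / np.linalg.norm(direction)
    base = predict_fn(x[None])[0]
    lo, hi = 0.0, max_dist
    if predict_fn((x + hi * direction)[None])[0] == base:
        return np.inf  # no label change found within max_dist along this direction
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if predict_fn((x + mid * direction)[None])[0] == base:
            lo = mid
        else:
            hi = mid
    return hi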

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

TLDR
This paper provides a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and proposes to use extreme value theory for efficient evaluation, which yields a novel robustness metric called CLEVER, short for Cross Lipschitz Extreme Value for nEtwork Robustness.
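
A drastically simplified, CLEVER-flavored sketch: it estimates a local Lipschitz constant from sampled gradient norms and turns it into a robustness lower bound. The actual method fits a reverse Weibull distribution to the per-batch maxima; here the plain maximum is used, and margin_fn / grad_fn are assumed callables returning the margin f_true(x) - f_runnerup(x) and its gradient.

import numpy as np

def clever_style_bound(margin_fn, grad_fn, x0, radius=0.5, n_batches=50, batch_size=128, rng=None):
    """Sample points uniformly in an L2 ball around x0, record the largest gradient norm
    of the margin per batch, take the overall maximum as a local Lipschitz estimate, and
    bound the perturbation needed to flip the prediction by margin(x0) / Lipschitz."""
    rng = np.random.default_rng(rng)
    d = x0.size
    maxima = []
    for _ in range(n_batches):
        dirs = rng.normal(size=(batch_size, d))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        radii = radius * rng.uniform(size=(batch_size, 1)) ** (1.0 / d)
        pts = x0.ravel() + dirs * radii
        maxima.append(max(np.linalg.norm(grad_fn(p.reshape(x0.shape))) for p in pts))
    lipschitz_est = max(maxima)
    return margin_fn(x0) / lipschitz_est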

Adversarial Examples Are a Natural Consequence of Test Error in Noise

TLDR
It is suggested that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions, and that future adversarial defenses should evaluate the robustness of their methods to distributional shift with benchmarks such as ImageNet-C.

Certified Adversarial Robustness via Randomized Smoothing

TLDR
Strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification; on smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies.
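
A condensed sketch of an $\ell_2$ certificate in the spirit of randomized smoothing, assuming a hard-label prediction function that returns integer class labels; the original procedure uses a separate sample to select the top class, and the constants here are illustrative.

import numpy as np
from scipy.stats import beta, norm

def certify_l2_radius(predict_fn, x, sigma=0.25, n=1000, alpha=0.001, rng=None):
    """Classify n Gaussian-noised copies of x, lower-bound the top-class probability p_A
    with a one-sided Clopper-Pearson bound, and return the certified L2 radius
    sigma * Phi^{-1}(p_A); abstain (radius 0) if p_A cannot be shown to exceed 1/2."""
    rng = np.random.default_rng(rng)
    noisy = x[None] + sigma * rng.normal(size=(n,) + x.shape)
    labels = predict_fn(noisy)
    top = np.bincount(labels).argmax()
    k = int((labels == top).sum())
    p_lower = beta.ppf(alpha, k, n - k + 1)   # Clopper-Pearson lower confidence bound
    if p_lower <= 0.5:
        return top, 0.0
    return top, sigma * norm.ppf(p_lower)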