# Understanding Intrinsic Robustness Using Label Uncertainty

```bibtex
@inproceedings{Zhang2021UnderstandingIR,
  title  = {Understanding Intrinsic Robustness Using Label Uncertainty},
  author = {Xiao Zhang and David Evans},
  year   = {2021}
}
```

A fundamental question in adversarial machine learning is whether a robust classifier exists for a given task. A line of research has made some progress towards this goal by studying concentration of measure, but we argue that standard concentration fails to fully characterize the intrinsic robustness of a classification problem, since it ignores data labels, which are essential to any classification task. Building on a novel definition of label uncertainty, we empirically demonstrate that error…

## References

*Showing 1–10 of 49 references.*

Human Uncertainty Makes Classification More Robust

- Computer Science, IEEE/CVF International Conference on Computer Vision (ICCV), 2019

It is shown that, while contemporary classifiers fail to exhibit human-like uncertainty on their own, explicit training on this dataset closes this gap, supports improved generalization to increasingly out-of-training-distribution test datasets, and confers robustness to adversarial attacks.
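The core training change described above — fitting against a full human label distribution rather than a one-hot target — amounts to cross-entropy with soft labels. A minimal NumPy sketch (the example label distributions are hypothetical, not taken from the dataset):

```python
import numpy as np

def soft_cross_entropy(logits, soft_labels):
    """Cross-entropy against a full label distribution rather than a
    one-hot target. `logits`: (n, k); `soft_labels`: (n, k) rows summing to 1."""
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(soft_labels * log_probs).sum(axis=1).mean()

# A confident prediction that disagrees with the argmax label is penalized
# less when humans themselves spread probability over several classes.
logits  = np.array([[4.0, 1.0, 0.0]])
one_hot = np.array([[0.0, 1.0, 0.0]])
human   = np.array([[0.4, 0.5, 0.1]])  # hypothetical human label distribution
assert soft_cross_entropy(logits, human) < soft_cross_entropy(logits, one_hot)
```

The soft target acts as a per-example regularizer: it discourages the model from becoming over-confident on ambiguous inputs, which is the mechanism the paper links to robustness.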

RobustBench: a standardized adversarial robustness benchmark

- Computer Science, NeurIPS Datasets and Benchmarks, 2021

This work evaluates the robustness of models in its benchmark with AutoAttack, an ensemble of white-box and black-box attacks that a recent large-scale study showed improves almost all robustness evaluations relative to the original publications.

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

- Computer Science, ICLR, 2020

CROWN-IBP is computationally efficient, consistently outperforms IBP baselines on training verifiably robust neural networks, and outperforms all previous linear-relaxation and bound-propagation-based certified defenses in $\ell_\infty$ robustness.
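The bound-propagation component that CROWN-IBP builds on (plain IBP) can be sketched as interval arithmetic pushed through each layer; the tiny two-layer network and radius below are illustrative, not from the paper:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.
    Splitting W into positive/negative parts picks the worst-case
    corner of the input box for each output coordinate."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to interval endpoints.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Bound the logits of a small random 2-layer net over an ell_inf ball.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)
x, eps = np.array([0.5, -0.2, 0.1]), 0.05
lo, hi = x - eps, x + eps
lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
# If the lower bound of the true-class logit exceeds every other logit's
# upper bound, the prediction is certified on the whole eps-ball.
assert np.all(lo <= hi)
```

IBP bounds are cheap but loose; CROWN-IBP's contribution is combining them with tighter linear-relaxation bounds during training.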

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

- Computer Science & Mathematics, AAAI, 2019

This work investigates the adversarial risk and robustness of classifiers and draws a connection to the well-known phenomenon of concentration of measure in metric measure spaces: if the metric probability space of test instances is concentrated, then any classifier with some initial constant error is inherently vulnerable to adversarial perturbations.
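The concentration argument can be written in symbols (notation mine, following the standard setup): let $\mathcal{E}$ be the error region of a classifier $f$ against the ground truth $c^*$, and $\mathcal{E}_\epsilon$ its $\epsilon$-expansion under the metric $d$:

```latex
\mathcal{E} = \{x : f(x) \neq c^*(x)\}, \qquad
\mathcal{E}_\epsilon = \{x \in \mathcal{X} : d(x, \mathcal{E}) \le \epsilon\}
```

Adversarial risk is exactly the measure of the expansion, so if the space is concentrated — sets of measure $\alpha$ have expansions of measure close to $1$ — any classifier with constant initial error $\mu(\mathcal{E}) = \alpha$ is forced to near-certain adversarial error:

```latex
\mathrm{AdvRisk}_\epsilon(f) = \mu(\mathcal{E}_\epsilon) \;\ge\; h(\alpha, \epsilon),
```

where $h$ is the concentration function of the metric probability space. The main paper's critique is that this bound ignores labels entirely, which motivates the label-uncertainty refinement.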

Unlabeled Data Improves Adversarial Robustness

- Computer Science, NeurIPS, 2019

It is proved that unlabeled data bridges the complexity gap between standard and robust classification: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for high standard accuracy.

Towards Deep Learning Models Resistant to Adversarial Attacks

- Computer Science, ICLR, 2018

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
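The robust-optimization view is the paper's saddle-point formulation, with $\mathcal{S}$ the set of allowed perturbations:

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim \mathcal{D}}
\Bigl[ \max_{\delta \in \mathcal{S}} L(\theta,\, x + \delta,\, y) \Bigr],
\qquad
\mathcal{S} = \{\delta : \|\delta\|_\infty \le \epsilon\}
```

The inner maximization is approximated by a first-order attack (projected gradient descent), which is what makes "security against a first-order adversary" the natural guarantee.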

Extremal properties of half-spaces for spherically invariant measures

- Mathematics, 1978

Below we shall establish certain extremal properties of half-spaces for spherically symmetric and, in particular, Gaussian (including infinite-dimensional) measures; we also prove inequalities for…

Adversarial Weight Perturbation Helps Robust Generalization

- Computer Science, NeurIPS, 2020

This paper proposes a simple yet effective Adversarial Weight Perturbation (AWP) to explicitly regularize the flatness of weight loss landscape, forming a double-perturbation mechanism in the adversarial training framework that adversarially perturbs both inputs and weights.

Concentration of measure and isoperimetric inequalities in product spaces

- Mathematics, 1994

The concentration of measure phenomenon in product spaces roughly states that, if a set $A$ in a product $\Omega^N$ of probability spaces has measure at least one half, "most" of the points of $\Omega^N$ are "close"…