Corpus ID: 168170150

Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness

@inproceedings{Mahloujifar2019EmpiricallyMC,
  title={Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness},
  author={Saeed Mahloujifar and X. Zhang and Mohammad Mahmoody and David Evans},
  booktitle={NeurIPS},
  year={2019}
}
  • Abstract: Many recent works have shown that adversarial examples that fool classifiers can be found by minimally perturbing a normal input. Recent theoretical results, starting with Gilmer et al. (2018b), show that if the inputs are drawn from a concentrated metric probability space, then adversarial examples with small perturbation are inevitable. A concentrated space has the property that any subset with $\Omega(1)$ (e.g., 1/100) measure, according to the imposed distribution, has small distance to…
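The concentration property sketched in the abstract can be illustrated with a small Monte Carlo experiment. The snippet below is a hypothetical sketch, not the paper's algorithm: for a standard Gaussian in $\mathbb{R}^d$ it takes a half-space $A$ of small measure $\alpha$ and estimates the measure of its $\epsilon$-expansion $A_\epsilon$ (points within distance $\epsilon$ of $A$), which concentration of measure forces to be much larger than $\alpha$. The half-space choice, threshold, and sample sizes are all illustrative assumptions.

```python
# Hypothetical illustration of concentration of measure (not the paper's
# empirical method). Distribution: standard Gaussian in R^d. Set A is the
# half-space {x : x[0] <= threshold}; its eps-expansion A_eps is the set of
# points within Euclidean distance eps of A, which here depends only on the
# first coordinate.
import numpy as np

def expansion_measure(samples, threshold, eps):
    """Estimate Pr[dist(x, A) <= eps] for A = {x : x[0] <= threshold}."""
    # Distance from x to the half-space A is max(x[0] - threshold, 0).
    dist_to_A = np.maximum(samples[:, 0] - threshold, 0.0)
    return float(np.mean(dist_to_A <= eps))

rng = np.random.default_rng(0)
d = 100                                    # ambient dimension (illustrative)
X = rng.standard_normal((100_000, d))      # Monte Carlo samples
threshold = -2.326                         # z-score giving measure ~1/100

alpha = expansion_measure(X, threshold, 0.0)  # measure of A itself, ~0.01
risk = expansion_measure(X, threshold, 1.0)   # measure of the 1-expansion

# Even though A has only ~1% measure, its 1-expansion already captures a
# several-fold larger fraction of the distribution.
```

Under this Gaussian model the expansion measure is exactly $\Phi(t+\epsilon)$ for threshold $t$, so the blow-up from $\alpha$ to the expanded measure can be checked in closed form; the paper's contribution is estimating such quantities empirically for data distributions where no closed form exists.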
    7 Citations


    Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models
    Provable tradeoffs in adversarially robust classification
    Lower Bounds for Adversarially Robust PAC Learning
    One Neuron to Fool Them All
    Local intrinsic dimensionality estimators based on concentration of measure
    • J. Bac and A. Zinovyev, 2020 International Joint Conference on Neural Networks (IJCNN), 2020

    References

    Showing 1–10 of 53 references
    Lower Bounds on Adversarial Robustness from Optimal Transport
    Scalable Verified Training for Provably Robust Image Classification
    Adversarial vulnerability for any classifier
    Scaling provable adversarial defenses
    Rademacher Complexity for Adversarially Robust Generalization
    Provable defenses against adversarial examples via the convex outer adversarial polytope
    Adversarial examples from computational constraints