Corpus ID: 204925471

Certified Adversarial Robustness with Additive Noise

@inproceedings{li2019certified,
  title={Certified Adversarial Robustness with Additive Noise},
  author={Bai Li and Changyou Chen and W. Wang and L. Carin},
  booktitle={NeurIPS},
  year={2019}
}

  • Published in NeurIPS 2019
  • Computer Science, Mathematics
  • The existence of adversarial examples has drawn significant attention in the deep-learning community; such examples are seemingly minimally perturbed relative to the original data, yet lead to very different outputs from a deep-learning algorithm. Although a significant body of work has been devoted to developing defensive models, most such models are heuristic and often vulnerable to adaptive attacks. Defensive methods that provide theoretical robustness guarantees have been studied…
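The paper's approach builds a certifiably robust classifier by adding random noise to the input at prediction time. As a rough illustration only (not the authors' exact algorithm), the core idea of additive-noise smoothing can be sketched as a majority vote over noisy copies of the input; the `base_classifier` interface and all parameter values below are hypothetical:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, seed=None):
    """Majority-vote prediction of a classifier smoothed with Gaussian noise.

    base_classifier: any function mapping a 1-D input array to an integer
    class label (a hypothetical interface, for illustration only).
    sigma: standard deviation of the additive Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        # Perturb the input with i.i.d. Gaussian noise and classify the copy.
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    # Return the most frequent label across the noisy samples.
    return max(votes, key=votes.get)

# Toy base classifier: predicts 1 when the mean of the input is positive.
clf = lambda v: int(v.mean() > 0)
x = np.ones(4)
print(smoothed_predict(clf, x, sigma=0.1, n_samples=50, seed=0))  # prints 1
```

The smoothed classifier's prediction is stable under small input perturbations, which is what makes certified robustness radii (as derived in the paper) possible; the certificate itself requires bounding the vote probabilities, which this sketch omits.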
    79 Citations

    • Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation (14 citations)
    • A Distributional Robustness Certificate by Randomized Smoothing (Highly Influenced)
    • Adversarial Training and Robustness for Multiple Perturbations (95 citations)
    • A Stochastic Neural Network for Attack-Agnostic Adversarial Robustness
    • Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks (54 citations)
    • Adversarial robustness guarantees for random deep neural networks (1 citation)
    • Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing (22 citations)
    • How to compare adversarial robustness of classifiers from a global perspective
    • On Pruning Adversarially Robust Neural Networks (5 citations)


    References

    • Certified Robustness to Adversarial Examples with Differential Privacy (252 citations, Highly Influential)
    • On the Connection between Differential Privacy and Adversarial Robustness in Machine Learning (12 citations, Highly Influential)
    • Adversarial Examples Are a Natural Consequence of Test Error in Noise (131 citations)
    • Towards Deep Learning Models Resistant to Adversarial Attacks (2,776 citations, Highly Influential)
    • Scaling provable adversarial defenses (225 citations, Highly Influential)
    • Provable defenses against adversarial examples via the convex outer adversarial polytope (661 citations)
    • Ensemble Adversarial Training: Attacks and Defenses (1,101 citations)
    • Certified Defenses against Adversarial Examples (468 citations)
    • On Norm-Agnostic Robustness of Adversarial Training (9 citations)
    • Adversarial vulnerability for any classifier (142 citations)