Corpus ID: 211171876

Randomized Smoothing of All Shapes and Sizes

@inproceedings{Yang2020RandomizedSO,
  title={Randomized Smoothing of All Shapes and Sizes},
  author={Greg Yang and Tony Duan and Edward J. Hu and Hadi Salman and Ilya P. Razenshteyn and Jungshian Li},
  booktitle={ICML},
  year={2020}
}
Randomized smoothing is the current state-of-the-art defense with provable robustness against $\ell_2$ adversarial attacks. Many works have devised new randomized smoothing schemes for other metrics, such as $\ell_1$ or $\ell_\infty$; however, substantial effort was needed to derive such new guarantees. This begs the question: can we find a general theory for randomized smoothing? We propose a novel framework for devising and analyzing randomized smoothing schemes, and validate its…
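The abstract above refers to the standard randomized smoothing procedure. As a point of reference, here is a minimal toy sketch of the Monte Carlo $\ell_2$ certificate in the style of Cohen et al. (listed in the references below), not the generalized framework of this paper; the base classifier, sample count, and the clamping of the empirical top-class probability are all illustrative assumptions, and a rigorous certificate would use a confidence lower bound on $p_A$ rather than the raw estimate.

```python
import random
from collections import Counter
from statistics import NormalDist

def certify_l2(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Toy Monte Carlo sketch of the l2 randomized-smoothing certificate.

    base_classifier maps a feature vector (list of floats) to a class
    label; x is the input point. Returns (top_class, certified_radius).
    """
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n):
        # Smooth the classifier with isotropic Gaussian noise N(0, sigma^2 I).
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[base_classifier(noisy)] += 1
    top_class, top_count = counts.most_common(1)[0]
    # Crude clamp so the empirical estimate stays inside (0, 1); a real
    # certificate would use a one-sided confidence bound instead.
    p_a = min(top_count / n, 1.0 - 1.0 / (2 * n))
    if p_a <= 0.5:
        return top_class, 0.0  # abstain: no certificate
    # Certified l2 radius: sigma * Phi^{-1}(p_A).
    radius = sigma * NormalDist().inv_cdf(p_a)
    return top_class, radius

# Hypothetical base classifier: sign of the mean feature.
clf = lambda v: int(sum(v) / len(v) > 0)
label, r = certify_l2(clf, [0.8, 1.2, 0.9], sigma=0.25)
```

On this toy input the smoothed classifier predicts class 1 with a strictly positive certified radius; the paper's framework generalizes the choice of noise distribution beyond the Gaussian used here.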
Improved, Deterministic Smoothing for L1 Certified Robustness
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model while allowing an arbitrary classifier to be used as the base classifier and without requiring an exponential number of smoothing samples.
ANCER: Anisotropic Certification via Sample-wise Volume Maximization
ANCER, a practical framework for obtaining anisotropic certificates for a given test-set sample via volume maximization, is introduced; it achieves state-of-the-art $\ell_1$ and $\ell_2$ certified accuracy on both CIFAR-10 and ImageNet at multiple radii while certifying substantially larger regions in terms of volume, highlighting the benefits of moving away from isotropic analysis.
On the Certified Robustness for Ensemble Models and Beyond
It is proven that diversified gradients and a large confidence margin are sufficient and necessary conditions for certifiably robust ensemble models under a model-smoothness assumption, and that an ensemble model can always achieve higher certified robustness than a single base model under mild conditions.
Efficient Randomized Smoothing by Denoising with Learned Score Function
  • 2020
Randomized smoothing with various noise distributions is a promising approach to protect classifiers from $\ell_p$ adversarial attacks. However, it requires an ensemble of classifiers trained with…
SoK: Certified Robustness for Deep Neural Networks
This paper provides a taxonomy of robustness verification and training approaches, and an open-sourced unified platform to evaluate 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.
Boosting the Certified Robustness of L-infinity Distance Nets
This paper significantly boosts the certified robustness of $\ell_\infty$-distance nets through a careful analysis of their training process, and proposes a simple approach that addresses the identified issues via a novel objective function combining a scaled cross-entropy loss with a clipped hinge loss.
Provable Defense by Denoised Smoothing with Learned Score Function
While randomized smoothing is an efficient method that provides certified robustness, it requires multiple classifiers for each noise type and scale. On the other hand, the denoised smoothing…
Supplementary Material for Progressive-Scale Blackbox Attack via Projective Gradient Estimation
In Appendix A, we summarize our theoretical results, compare them with related work, and present the complete proofs. In Appendix B, we visualize the key characteristics for improving the…
TSS: Transformation-Specific Smoothing for Robustness Certification
  • Linyi Li, Maurice Weber, +5 authors Bo Li
  • Computer Science, Mathematics
  • CCS
  • 2021
The TSS framework leverages these certification strategies and combines them with consistency-enhanced training to provide rigorous robustness certification; it is the first approach to achieve nontrivial certified robustness on the large-scale ImageNet dataset.
Random Smoothing Might be Unable to Certify 𝓁∞ Robustness for High-Dimensional Images
Any noise distribution $\mathcal{D}$ over $\mathbb{R}^d$ that provides $\ell_p$ robustness for all base classifiers with $p > 2$ must satisfy $\mathbb{E}[\eta_i^2] = \Omega(d^{1-2/p}\,\epsilon^2\,(1-\delta)/\delta)$ for 99% of the features of the vector $\eta \sim \mathcal{D}$, where $\epsilon$ is the robust radius and $\delta$ is the score gap between the highest-scored class and the runner-up.

References

Showing 1–10 of 104 references
A Framework for Robustness Certification of Smoothed Classifiers Using F-Divergences
This paper extends randomized smoothing procedures to handle arbitrary smoothing measures and proves robustness of the smoothed classifier using $f$-divergences; it achieves state-of-the-art certified robustness on MNIST, CIFAR-10, ImageNet, and an audio classification task, Librispeech, with respect to several classes of adversarial perturbations.
Certified Adversarial Robustness via Randomized Smoothing
Strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification; on smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies.
ImageNet: A large-scale hierarchical image database
A new database called "ImageNet" is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Learning Multiple Layers of Features from Tiny Images
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Certified Robustness to Adversarial Examples with Differential Privacy
This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism.
Banach Spaces for Analysts, volume 25
  • 1996
Zur Frage der Geschwindigkeit des Wachstums und der Auflösung der Krystallflächen [On the question of the rate of growth and dissolution of crystal faces]
  • Zeitschrift für Krystallographie und Mineralogie
  • 1901
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
This paper unifies all existing LP-relaxed verifiers, to the best of the authors' knowledge, under a general convex relaxation framework, which works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
It is demonstrated through extensive experimentation that this method consistently outperforms all existing provably $\ell_2$-robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state of the art for provable $\ell_2$-defenses.