Corpus ID: 221761375

Certifying Confidence via Randomized Smoothing

@article{Kumar2020CertifyingCV,
  title={Certifying Confidence via Randomized Smoothing},
  author={Aounon Kumar and Alexander Levine and Soheil Feizi and Tom Goldstein},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.08061}
}
Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems. It uses the probabilities of predicting the top two most-likely classes around an input point under a smoothing distribution to generate a certified radius for a classifier's prediction. However, most smoothing methods do not give us any information about the confidence with which the underlying classifier (e.g., deep neural network) makes a prediction. In…
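For orientation, the certificate described above can be sketched concretely. The following is a minimal sketch of the standard two-class $\ell_2$ certificate of Cohen et al. (not the confidence certificate this paper derives, whose details are truncated above); it assumes a Gaussian smoothing distribution and takes the top-two class probability estimates p_a and p_b as given inputs.

    from statistics import NormalDist

    def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
        """Standard randomized-smoothing l2 certificate:
        R = (sigma / 2) * (Phi^{-1}(p_a) - Phi^{-1}(p_b)),
        where p_a and p_b are (bounds on) the probabilities of the two
        most-likely classes under noise N(0, sigma^2 I) at the input."""
        phi_inv = NormalDist().inv_cdf
        return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

    # e.g. p_a = 0.90, p_b = 0.05, sigma = 0.5  ->  R ≈ 0.73
    print(certified_radius(0.90, 0.05, 0.5))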

Citations

Center Smoothing for Certifiably Robust Vector-Valued Functions

This work designs a smoothing procedure that leverages the local, potentially low-dimensional behaviour of the function around an input to obtain probabilistic robustness certificates, and demonstrates the effectiveness of the method on multiple learning tasks involving vector-valued functions with a wide range of input and output dimensionalities.

Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?

This work presents the first study of certifiable robustness for Dirichlet-based uncertainty (DBU) models, and proposes novel uncertainty attacks that fool models into assigning high confidence to out-of-distribution data and low confidence to in-distribution data.

Relating Adversarially Robust Generalization to Flat Minima

This paper proposes average- and worst-case metrics to measure flatness in the robust loss landscape and shows a correlation between good robust generalization and flatness, i.e., how much the robust loss changes when the weights are perturbed.

Detection as Regression: Certified Object Detection by Median Smoothing

This work obtains the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
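Median smoothing reduces certification to bounding quantiles: under Gaussian input noise with scale $\sigma$, the smoothed median at any $\ell_2$-perturbed input of radius $R$ lies between the $\Phi(-R/\sigma)$ and $\Phi(R/\sigma)$ quantiles of the output distribution at the clean input. Below is a rough sketch of that idea using empirical quantiles; it ignores the Monte Carlo estimation error that a rigorous certificate must also bound, and the function name and example values are illustrative only.

    import numpy as np
    from statistics import NormalDist

    def median_smoothing_bounds(samples, sigma: float, radius: float):
        """Given samples of f(x + eps), eps ~ N(0, sigma^2 I), bound the
        smoothed median under any ||delta||_2 <= radius by the empirical
        Phi(-radius/sigma) and Phi(radius/sigma) quantiles (estimation
        error ignored here; order-statistic confidence intervals would
        make this a high-probability certificate)."""
        phi = NormalDist().cdf
        s = np.sort(np.asarray(samples))
        n = len(s)
        lower = s[int(np.floor(phi(-radius / sigma) * (n - 1)))]
        upper = s[int(np.ceil(phi(radius / sigma) * (n - 1)))]
        return lower, upper

    # e.g. samples of one predicted box coordinate under input noise:
    rng = np.random.default_rng(0)
    samples = 100 + 3 * rng.standard_normal(10_000)
    print(median_smoothing_bounds(samples, sigma=0.25, radius=0.1))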

Policy Smoothing for Provably Robust Reinforcement Learning

An adaptive version of the Neyman-Pearson Lemma – a key lemma for smoothing-based certificates – is proved, in which the adversarial perturbation at a particular time can be a stochastic function of current and previous observations and states, as well as previous actions.

Double Sampling Randomized Smoothing

Theoretically, under mild assumptions, it is proved that DSRS can certify a $\Theta(\sqrt{d})$ robust radius under the $\ell_2$ norm, where $d$ is the input dimension, implying that DSRS may be able to break the curse of dimensionality of randomized smoothing.

On the Certified Robustness for Ensemble Models and Beyond

The lightweight Diversity Regularized Training (DRT) is proposed to train certifiably robust ensemble ML models, and it is proved that an ensemble model can always achieve higher certified robustness than a single base model under mild conditions.

Confidence-aware Training of Smoothed Classifiers for Certified Robustness

A simple training method is proposed that leverages the fundamental trade-off between accuracy and (adversarial) robustness to obtain robust smoothed classifiers, in particular through a sample-wise control of robustness over the training samples.

Robust Perception through Equivariance

A framework that uses the dense intrinsic constraints in natural images to robustify inference, shifting the burden of robustness from training to the inference algorithm and thereby allowing the model to adjust dynamically to each individual image’s unique and potentially novel characteristics at inference time.

References

Showing 1-10 of 50 references

Certified Adversarial Robustness via Randomized Smoothing

Strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification; on smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies.
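In practice the top-class probability is only estimated by sampling, so the certificate holds with high probability. Below is a hedged sketch of the Monte Carlo certification step (the paper's CERTIFY procedure also draws a separate, smaller sample just to select the top class, which this sketch omits); `sample_noisy_predictions` is an assumed helper returning base-classifier labels on noisy copies of the input.

    import numpy as np
    from statistics import NormalDist
    from statsmodels.stats.proportion import proportion_confint

    def certify(sample_noisy_predictions, n: int, sigma: float, alpha: float = 0.001):
        """Estimate a one-sided (1 - alpha) lower confidence bound on p_A,
        the top class's probability under Gaussian noise, then certify the
        l2 radius sigma * Phi^{-1}(p_A_lower); abstain if p_A_lower <= 1/2."""
        labels = np.asarray(sample_noisy_predictions(n))
        top = np.bincount(labels).argmax()
        count = int((labels == top).sum())
        # Clopper-Pearson ("beta") interval; doubling alpha makes the lower
        # endpoint a one-sided bound at level alpha.
        p_a_lower, _ = proportion_confint(count, n, alpha=2 * alpha, method="beta")
        if p_a_lower <= 0.5:
            return top, 0.0  # abstain: cannot certify a positive radius
        return top, sigma * NormalDist().inv_cdf(p_a_lower)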

Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images

Any noise distribution $\mathcal{D}$ over $\mathbb{R}^d$ that provides $\ell_p$ robustness for all base classifiers with $p > 2$ must satisfy $\mathbb{E}[\eta_i^2] = \Omega(d^{1-2/p}\,\epsilon^2\,(1-\delta)/\delta)$ for 99% of the features of the vector $\eta \sim \mathcal{D}$, where $\epsilon$ is the robust radius and $\delta$ is the score gap between the highest-scored class and the runner-up.
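Reading this bound at $p = \infty$ (so $d^{1-2/p} = d$) makes the obstruction concrete: \[ \mathbb{E}[\eta_i^2] = \Omega\!\left(d\,\epsilon^2\,\frac{1-\delta}{\delta}\right), \] i.e., the per-feature noise standard deviation must grow like $\sqrt{d}\,\epsilon$, which drowns out the signal of a high-dimensional image.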

Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness

It is shown that extending the smoothing technique to defend against other attack models can be challenging, especially in the high-dimensional regime, and it is established that Gaussian smoothing provides the best possible results, up to a constant factor, when $p \geq 2$.

Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers

This work offers adversarial robustness guarantees and associated algorithms for the discrete case where the adversary is $\ell_0$ bounded and exemplifies how the guarantees can be tightened with specific assumptions about the function class of the classifier such as a decision tree.

Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation

This paper proposes an efficient and certifiably robust defense against sparse adversarial attacks by randomly ablating input features, rather than using additive noise, and empirically demonstrates that the classifier is highly robust to modern sparse adversarial attacks on MNIST.

Robustness Certificates Against Adversarial Examples for ReLU Networks

This paper proposes attack-agnostic robustness certificates for a multi-label classification problem using a deep ReLU network; the certificates have a closed form, are differentiable, and are an order of magnitude faster to compute than existing methods, even for deep networks.

On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models

This work shows how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large, provably robust neural networks that beat the state of the art in verified accuracy, and allows the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet.
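For intuition, the bound propagation itself is just interval arithmetic pushed through each layer. The following minimal sketch covers an affine layer followed by a ReLU; the paper's actual contribution, the training scheme built on such bounds, is omitted here.

    import numpy as np

    def ibp_affine(lo, hi, W, b):
        """Propagate interval bounds through y = W x + b: split W into
        positive and negative parts so each output extreme pairs with
        the correct endpoint of each input interval."""
        W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
        return (W_pos @ lo + W_neg @ hi + b,
                W_pos @ hi + W_neg @ lo + b)

    def ibp_relu(lo, hi):
        """ReLU is monotone, so it maps interval endpoints directly."""
        return np.maximum(lo, 0), np.maximum(hi, 0)

    # e.g. bound the outputs for all inputs in an l_inf ball of radius 0.1:
    x = np.array([0.5, -0.2]); eps = 0.1
    lo, hi = x - eps, x + eps
    W = np.array([[1.0, -2.0], [0.5, 0.3]]); b = np.zeros(2)
    lo, hi = ibp_relu(*ibp_affine(lo, hi, W, b))
    print(lo, hi)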

Randomized Smoothing of All Shapes and Sizes

It is shown that with only label statistics under random input perturbations, randomized smoothing cannot achieve nontrivial certified accuracy against perturbations of $\ell_p$-norm $\Omega(\min(1, d^{\frac{1}{p} - \frac{1}{2}}))$, when the input dimension $d$ is large.

Second-Order Provable Defenses against Adversarial Attacks

This paper shows that if the eigenvalues of the Hessian of the network are bounded, a robustness certificate in the $\ell_2$ norm can be computed efficiently using convex optimization, and it derives a computationally efficient, differentiable upper bound on the curvature of a deep network.
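A hedged sketch of why a curvature bound yields a certificate (the paper's convex-optimization certificate may be tighter): if $f$ is the margin between the predicted class's logit and the runner-up's, and the Hessian eigenvalues of $f$ are bounded in magnitude by $K$ everywhere, then a second-order bound gives \[ f(x+\delta) \ge f(x) - \|\nabla f(x)\|\,\|\delta\| - \tfrac{K}{2}\|\delta\|^2, \] so the prediction cannot change for any $\|\delta\|_2 \le r$ with \[ r = \frac{-\|\nabla f(x)\| + \sqrt{\|\nabla f(x)\|^2 + 2K\,f(x)}}{K}. \]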

Training verified learners with learned verifiers

Experiments show that the predictor-verifier architecture is able to train networks that achieve state-of-the-art verified robustness to adversarial examples with much shorter training times, and that it can be scaled to produce the first known verifiably robust networks for CIFAR-10.