Corpus ID: 236318431

On the Certified Robustness for Ensemble Models and Beyond

@article{Yang2021OnTC,
  title={On the Certified Robustness for Ensemble Models and Beyond},
  author={Zhuolin Yang and Linyi Li and Xiaojun Xu and Bhavya Kailkhura and Tao Xie and Bo Li},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.10873}
}
Recent studies show that deep neural networks (DNNs) are vulnerable to adversarial examples, which aim to mislead DNNs by adding perturbations of small magnitude. To defend against such attacks, both empirical and theoretical defense approaches have been extensively studied for a single ML model. In this work, we aim to analyze and provide certified robustness for ensemble ML models, together with necessary and sufficient conditions of robustness for different ensemble protocols…
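
A minimal sketch of the kind of certificate the abstract alludes to, assuming a Cohen-et-al.-style randomized-smoothing certificate applied to a soft-averaged ensemble; the truncated abstract does not spell out the paper's actual protocol, so the ensemble rule, noise level sigma, and sample count below are illustrative placeholders only:

```python
import numpy as np
from scipy.stats import norm

def smoothed_certify(ensemble_logits, x, sigma=0.25, n=1000, num_classes=10, rng=None):
    """Monte-Carlo vote under Gaussian noise: the smoothed classifier predicts
    the class chosen most often on noisy copies of x, with L2 radius
    sigma * Phi^{-1}(pA). pA is a plug-in estimate here; a real certificate
    would use a confidence lower bound on pA instead."""
    rng = np.random.default_rng(0) if rng is None else rng
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        noisy = x + sigma * rng.standard_normal(x.shape)
        counts[np.argmax(ensemble_logits(noisy))] += 1
    top = int(np.argmax(counts))
    p_a = min(counts[top] / n, 1.0 - 1e-6)  # clip to keep Phi^{-1} finite
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top, radius

# Toy soft-averaged ensemble of two hypothetical linear classifiers.
W1, W2 = np.random.randn(10, 784), np.random.randn(10, 784)
ensemble = lambda z: 0.5 * (W1 @ z) + 0.5 * (W2 @ z)
print(smoothed_certify(ensemble, np.random.randn(784)))
```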

References

SHOWING 1-10 OF 64 REFERENCES
SoK: Certified Robustness for Deep Neural Networks
TLDR
This paper provides a taxonomy of robustness verification and training approaches, along with an open-sourced unified platform to evaluate 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.
Improving Adversarial Robustness via Promoting Ensemble Diversity
TLDR
A new notion of ensemble diversity in the adversarial setting is defined as the diversity among the non-maximal predictions of individual members, and an adaptive diversity promoting (ADP) regularizer is presented to encourage this diversity, which leads to globally better robustness for the ensemble by making adversarial examples difficult to transfer among the members.
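
A sketch of the ADP term described above, assuming PyTorch and the (alpha * entropy + beta * log-det) form from the paper; the default coefficients and batch shapes are placeholders:

```python
import torch
import torch.nn.functional as F

def adp_regularizer(member_probs, y, alpha=2.0, beta=0.5, eps=1e-12):
    """ADP term: alpha * entropy of the averaged prediction plus beta *
    log-det of the Gram matrix of the members' normalized non-maximal
    predictions (true-label entry removed, columns L2-normalized).
    member_probs: list of (B, C) softmax outputs; y: (B,) true labels."""
    probs = torch.stack(member_probs, dim=-1)              # (B, C, K)
    mean_p = probs.mean(dim=-1)                            # (B, C)
    entropy = -(mean_p * (mean_p + eps).log()).sum(dim=1)  # (B,)

    B, C, K = probs.shape
    keep = torch.ones(B, C, dtype=torch.bool, device=probs.device)
    keep[torch.arange(B), y] = False                       # drop true-label row
    nonmax = probs[keep].view(B, C - 1, K)
    nonmax = F.normalize(nonmax, p=2, dim=1)               # unit-norm columns
    gram = nonmax.transpose(1, 2) @ nonmax                 # (B, K, K)
    eye = eps * torch.eye(K, device=probs.device)          # numerical stability
    log_ed = torch.logdet(gram + eye)                      # batched log-det
    return (alpha * entropy + beta * log_ed).mean()
```

In training, this term is typically subtracted from the sum of the members' cross-entropy losses, rewarding diverse non-maximal predictions.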
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
TLDR
DVERGE is proposed to isolate the adversarial vulnerability in each sub-model by distilling non-robust features, and to diversify the adversarial vulnerability so as to induce diverse outputs against a transfer attack, enabling improved robustness as more sub-models are added to the ensemble.
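
A rough sketch of the feature-distillation step as summarized above, assuming PyTorch; the choice of feature layer, step sizes, and box radius are placeholders, and the round-robin training of each sub-model on the other members' distilled examples is omitted:

```python
import torch

def distill_feature_input(feat_fn, x_src, x_tgt, eps=8/255, steps=10, alpha=0.1):
    """Find x' near x_tgt whose intermediate features under one sub-model's
    feature extractor feat_fn match those of x_src; x' then carries that
    sub-model's non-robust features of x_src and is used (with x_tgt's label)
    to train the other sub-models."""
    with torch.no_grad():
        src_feats = feat_fn(x_src)
    x = x_tgt.clone().requires_grad_(True)
    for _ in range(steps):
        loss = (feat_fn(x) - src_feats).pow(2).mean()
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= alpha * grad.sign()                       # descend on feature gap
            x.copy_(x_tgt + (x - x_tgt).clamp(-eps, eps))  # stay near x_tgt
            x.clamp_(0, 1)
    return x.detach()
```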
Enhancing Certifiable Robustness via a Deep Model Ensemble
TLDR
The proposed ensemble framework with certified robustness, RobBoost, formulates the optimal model selection and weighting task as an optimization problem on a lower bound of the classification margin, which can be efficiently solved using coordinate descent.
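
A toy stand-in for the weighting step, assuming per-model certified margin lower bounds are already available from some verifier and, as a simplification, that the ensemble's bound combines linearly in the weights; RobBoost's actual objective and update rules differ in detail:

```python
import numpy as np

def pick_weights(margin_lb, iters=200, step=0.1, seed=0):
    """Greedy coordinate descent on the simplex: move weight mass toward or
    away from one model at a time to maximize the fraction of examples whose
    (linearized) ensemble margin lower bound stays positive.
    margin_lb[n, k] = certified margin lower bound of model k on example n."""
    rng = np.random.default_rng(seed)
    N, K = margin_lb.shape
    w = np.full(K, 1.0 / K)
    certified = lambda w: (margin_lb @ w > 0).mean()
    for _ in range(iters):
        k = rng.integers(K)
        for delta in (step, -step):
            w_try = np.clip(w + delta * (np.eye(K)[k] - w), 0.0, None)
            w_try /= w_try.sum()                  # stay on the simplex
            if certified(w_try) > certified(w):
                w = w_try
    return w
```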
Enhancing Certified Robustness of Smoothed Classifiers via Weighted Model Ensembling
TLDR
A Smoothed WEighted ENsembling (SWEEN) scheme is proposed to improve the performance of smoothed classifiers obtained via randomized smoothing; the expressive power of the SWEEN function class is analyzed theoretically, showing that SWEEN can be trained to achieve near-optimal risk in the randomized smoothing regime.
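
A minimal sketch of weighted ensembling before smoothing, assuming PyTorch: softmax-parameterized convex weights are fit on Gaussian-noise-augmented inputs so that the weighted soft prediction suits the smoothed classifier built on top of it. The optimizer and hyperparameters are placeholders, not the paper's recipe:

```python
import torch
import torch.nn.functional as F

def fit_ensemble_weights(models, loader, sigma=0.25, lr=0.1, epochs=1):
    """Learn convex combination weights over the members' soft predictions
    on noisy inputs; only the weights are trained, the members stay fixed."""
    theta = torch.zeros(len(models), requires_grad=True)
    opt = torch.optim.SGD([theta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x = x + sigma * torch.randn_like(x)    # match the smoothing noise
            w = torch.softmax(theta, dim=0)
            probs = sum(wk * F.softmax(m(x), dim=1) for wk, m in zip(w, models))
            loss = F.nll_loss(torch.log(probs + 1e-12), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return torch.softmax(theta.detach(), dim=0)
```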
A Framework for Robustness Certification of Smoothed Classifiers Using f-Divergences
TLDR
This paper extends randomized smoothing procedures to handle arbitrary smoothing measures and proves robustness of the smoothed classifier using f-divergences, achieving state-of-the-art certified robustness on MNIST, CIFAR-10, and ImageNet, as well as on the audio classification task LibriSpeech, with respect to several classes of adversarial perturbations.
Scalable Verified Training for Provably Robust Image Classification
TLDR
This work shows how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state of the art in verified accuracy, and allows the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet.
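
The bounding technique itself is simple enough to sketch directly (a minimal NumPy version, with a toy network as a stand-in): an affine layer moves the box center by W and scales the radius by |W|, and monotone activations map the box through elementwise. A certificate follows when the lower bound of the true logit exceeds every other logit's upper bound.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate the box [lower, upper] through x -> W @ x + b using
    interval arithmetic: center moves by W, radius by |W|."""
    center, radius = (upper + lower) / 2.0, (upper - lower) / 2.0
    c, r = W @ center + b, np.abs(W) @ radius
    return c - r, c + r

def ibp_relu(lower, upper):
    """ReLU is monotone, so the box maps through it elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Certify a toy 2-layer net on an L-inf ball of radius 0.1 around x.
W1, b1 = np.random.randn(32, 8), np.zeros(32)
W2, b2 = np.random.randn(3, 32), np.zeros(3)
x, true_class = np.random.randn(8), 0
l, u = ibp_relu(*ibp_affine(x - 0.1, x + 0.1, W1, b1))
l, u = ibp_affine(l, u, W2, b2)
certified = all(l[true_class] > u[j] for j in range(3) if j != true_class)
```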
Improving Adversarial Robustness of Ensembles with Diversity Training
TLDR
Diversity Training, a novel method to train an ensemble of models with uncorrelated loss functions, significantly improves the adversarial robustness of ensembles and can be combined with existing methods to create a stronger defense against transfer-based attacks.
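
A sketch of one way to decorrelate the members' losses, assuming (as I recall from this line of work) a gradient-alignment penalty on pairwise cosine similarities of the members' input gradients; the log-sum-exp aggregation and PyTorch scaffolding here are assumptions, not the paper's exact formulation:

```python
import itertools
import torch
import torch.nn.functional as F

def gradient_alignment_loss(models, x, y):
    """Penalize pairwise cosine similarity between the members' input
    gradients so their adversarial directions overlap less; differentiable
    w.r.t. model parameters via create_graph=True."""
    grads = []
    for m in models:
        xg = x.clone().requires_grad_(True)
        loss = F.cross_entropy(m(xg), y)
        g, = torch.autograd.grad(loss, xg, create_graph=True)
        grads.append(g.flatten(1))                          # (B, D) per member
    sims = [F.cosine_similarity(a, b, dim=1)
            for a, b in itertools.combinations(grads, 2)]
    return torch.logsumexp(torch.stack(sims, dim=0), dim=0).mean()
```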
Mixture of Robust Experts (MoRE): A Flexible Defense Against Multiple Perturbations
TLDR
This work assembles a set of expert networks to achieve superior accuracy under various perturbation types through a well-designed gating mechanism, and shows that the Mixture of Robust Experts (MoRE) approach enables a flexible and expandable integration of a broad range of robust experts with superior performance.
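
A minimal sketch of the gating idea, assuming PyTorch: a small gating network produces softmax weights over the pre-trained experts' logits. The gate architecture and how each expert was hardened are placeholders:

```python
import torch
import torch.nn as nn

class MixtureOfRobustExperts(nn.Module):
    """Weight each expert's logits by a learned, input-dependent gate;
    each expert is assumed pre-trained against one perturbation type."""
    def __init__(self, experts, in_dim, hidden=64):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        self.gate = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(experts)))

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=1)                     # (B, K)
        logits = torch.stack([e(x) for e in self.experts], dim=1)  # (B, K, C)
        return (w.unsqueeze(-1) * logits).sum(dim=1)               # (B, C)
```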
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
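
The inner maximization of this robust-optimization view is the well-known PGD attack; below is a minimal PyTorch sketch (eps, step size, and step count are conventional CIFAR-10 values, not prescribed by the summary above). Adversarial training then minimizes the classification loss on these perturbed inputs:

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient ascent on the loss within an L-infinity ball of
    radius eps around x, starting from a random point in the ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()        # ascent step
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project
    return x_adv.detach()
```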