• Corpus ID: 209415135

# n-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

@article{Sharif2019nMLMA,
title={n-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers},
author={Mahmood Sharif and Lujo Bauer and Michael K. Reiter},
journal={ArXiv},
year={2019},
volume={abs/1912.09059}
}
• Published 19 December 2019
• Computer Science
• ArXiv
This paper proposes a new defense called $n$-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers. Inspired by $n$-version programming, $n$-ML trains an ensemble of $n$ classifiers, and inputs are classified by a vote of the classifiers in the ensemble. Unlike prior such approaches, however, the classifiers in the ensemble are trained specifically to classify adversarial examples differently, rendering it…
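The voting scheme described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: the function name, the `min_agreement` threshold, and the stub classifiers are assumptions introduced for the example; n-ML's actual classifiers are deep networks trained to disagree on adversarial inputs.

```python
import numpy as np

def n_ml_predict(models, x, min_agreement):
    """Classify x by a vote of n classifiers; return None (abstain)
    when too few classifiers agree, treating x as likely adversarial.

    models: list of callables mapping an input to a class label.
    min_agreement: minimum number of matching votes to accept.
    """
    votes = [m(x) for m in models]
    labels, counts = np.unique(votes, return_counts=True)
    top = counts.argmax()
    if counts[top] >= min_agreement:
        return labels[top]
    return None  # insufficient agreement -> flag as adversarial

# Toy usage with stub "classifiers" (hypothetical, for illustration):
clfs = [lambda x: 1, lambda x: 1, lambda x: 0]
print(n_ml_predict(clfs, x=None, min_agreement=2))  # -> 1
print(n_ml_predict(clfs, x=None, min_agreement=3))  # -> None
```

Raising `min_agreement` trades benign accuracy for a lower attack success rate, since more classifiers must be fooled simultaneously.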
## 4 Citations

Certifying Joint Adversarial Robustness for Model Ensembles
• Computer Science
ArXiv
• 2020
The robustness of various model ensembles, including models trained to be diverse using cost-sensitive robustness, is evaluated to improve understanding of the potential effectiveness of ensemble models as a defense against adversarial examples.
Ensemble-based Adversarial Defense Using Diversified Distance Mapping
• Computer Science
• 2020
It is demonstrated that the ensembles based on DMLs can achieve high benign accuracy while exhibiting robustness against adversarial attacks using multiple white-box techniques along with AutoAttack.
Adaptive Noise Injection for Training Stochastic Student Networks from Deterministic Teachers
• Computer Science
2020 25th International Conference on Pattern Recognition (ICPR)
• 2021
This work presents a conceptually clear adaptive noise injection mechanism in combination with teacher-initialisation, which adjusts its degree of randomness dynamically through the computation of mini-batch statistics, embedded within a simple framework to obtain stochastic networks from existing deterministic networks.
Defense Through Diverse Directions
• Computer Science
ICML
• 2020
By encouraging the network to distribute evenly across inputs, the network becomes less susceptible to localized, brittle features which imparts a natural robustness to targeted perturbations.

## References

Showing 1–10 of 86 references
The Limitations of Deep Learning in Adversarial Settings
• Computer Science
2016 IEEE European Symposium on Security and Privacy (EuroS&P)
• 2016
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
• Computer Science
NeurIPS
• 2019
It is demonstrated through extensive experimentation that this method consistently outperforms all existing provably $\ell_2$-robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state of the art for provable $\ell_2$-defenses.
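For context, the certified $\ell_2$ radius in the randomized-smoothing framework that this work builds on (Cohen et al., 2019) is

$$ R = \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), $$

where $\sigma$ is the Gaussian noise level, $\Phi^{-1}$ is the inverse standard-normal CDF, and $\underline{p_A}$, $\overline{p_B}$ are a lower bound on the top class probability and an upper bound on the runner-up class probability under noise.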
On Detecting Adversarial Perturbations
• Computer Science
ICLR
• 2017
It is shown empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans.
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
• Computer Science
ICLR
• 2018
The proposed Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against adversarial perturbations, is empirically shown to be consistently effective against different attack methods and improves on existing defense strategies.
On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples
• Computer Science
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
• 2018
It is demonstrated that nearness of inputs as measured by Lp-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples.
Towards Robust Detection of Adversarial Examples
• Computer Science
NeurIPS
• 2018
This paper presents a novel training procedure and a thresholding test strategy, towards robust detection of adversarial examples, and proposes to minimize the reverse cross-entropy (RCE), which encourages a deep network to learn latent representations that better distinguish adversarial examples from normal ones.
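The reverse cross-entropy mentioned above can be sketched as follows. This is a hedged, per-example illustration under the assumption that the "reverse" target puts zero mass on the true class and uniform mass on the others; the function name and the small epsilon are choices made for this example.

```python
import numpy as np

def reverse_cross_entropy(probs, true_label):
    """Reverse cross-entropy (RCE) for one example.

    The 'reverse' target assigns 0 to the true class and 1/(K-1) to each
    of the K-1 other classes; minimizing RCE pushes the non-true-class
    probabilities toward uniformity, making adversarial examples easier
    to separate in the learned representation.
    """
    k = len(probs)
    reverse_target = np.full(k, 1.0 / (k - 1))
    reverse_target[true_label] = 0.0
    # Small epsilon avoids log(0) for zero-probability classes.
    return -np.sum(reverse_target * np.log(probs + 1e-12))

# A prediction that is uniform over the non-true classes has low RCE:
p = np.array([0.02, 0.94, 0.02, 0.02])  # true class is 1
print(round(reverse_cross_entropy(p, 1), 3))  # -> 3.912
```

In training, this term replaces (or augments) the usual cross-entropy loss; at test time, a threshold on a confidence statistic rejects inputs that look adversarial.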
Improving Transferability of Adversarial Examples With Input Diversity
• Computer Science
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2019
This work proposes to improve the transferability of adversarial examples by creating diverse input patterns by applying random transformations to the input images at each iteration, and shows that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines.
On the (Statistical) Detection of Adversarial Examples
• Computer Science
ArXiv
• 2017
It is shown that statistical properties of adversarial examples are essential to their detection: adversarial examples are not drawn from the same distribution as the original data, and can thus be detected using statistical tests.
Explaining and Harnessing Adversarial Examples
• Computer Science
ICLR
• 2015
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
Detecting Adversarial Samples from Artifacts
• Computer Science
ArXiv
• 2017
This paper investigates model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model; the results yield a method for implicit adversarial detection that is oblivious to the attack algorithm.