# Towards Robust Neural Networks via Random Self-ensemble

@inproceedings{Liu2018TowardsRN,
  title={Towards Robust Neural Networks via Random Self-ensemble},
  author={Xuanqing Liu and Minhao Cheng and Huan Zhang and Cho-Jui Hsieh},
  booktitle={ECCV},
  year={2018}
}
Recent studies have revealed the vulnerability of deep neural networks: a small adversarial perturbation that is imperceptible to humans can easily make a well-trained deep neural network misclassify. [...] Key Method: We show that our algorithm is equivalent to ensembling an infinite number of noisy models $f_\epsilon$ without any additional memory overhead, and that the proposed training procedure based on noisy stochastic gradient descent ensures the ensemble model has good predictive capability.
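The self-ensemble idea above can be sketched numerically: inject fresh Gaussian noise on every forward pass (at both training and test time) and average the predictive distribution over many passes. A minimal NumPy sketch, assuming a hypothetical toy one-layer softmax model and an illustrative noise scale `SIGMA` (not the paper's reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: one noisy linear layer followed by softmax.
W = rng.normal(size=(4, 3))   # weights: 4 input features -> 3 classes
SIGMA = 0.1                   # illustrative std of the injected noise

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def noisy_forward(x):
    # Perturb the layer input with fresh Gaussian noise on every pass,
    # so each call evaluates a different noisy model f_eps.
    eps = rng.normal(scale=SIGMA, size=x.shape)
    return softmax((x + eps) @ W)

def ensemble_predict(x, n_samples=50):
    # Average the predictive distribution over many noisy passes --
    # an ensemble of noisy models with no extra memory for parameters.
    return np.mean([noisy_forward(x) for _ in range(n_samples)], axis=0)

x = np.array([1.0, -0.5, 0.3, 0.8])
p = ensemble_predict(x)
```

Averaging probabilities (rather than taking a single noisy pass) is what makes the prediction stable despite the injected randomness.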
#### 224 Citations
• Computer Science, Mathematics
• ICLR
• 2019
This work models randomness under the framework of Bayesian neural networks to formally learn the posterior distribution of models in a scalable way, and formulates the mini-max problem in BNNs to learn the best model distribution under adversarial attacks, leading to an adversarially trained Bayesian neural net.
A Stochastic Neural Network for Attack-Agnostic Adversarial Robustness
• Computer Science
• ArXiv
• 2020
While existing SNNs inject learned or hand-tuned isotropic noise, this SNN learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness.
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
This paper proposes hierarchical random switching (HRS), which protects neural networks through a novel randomization scheme, and proposes the Defense Efficiency Score (DES), a comprehensive metric that measures the gain in unsuccessful attack attempts at the cost of a drop in test accuracy for any defense.
Bayes without Bayesian Learning for Resisting Adversarial Attacks
• Computer Science
• 2020 Eighth International Symposium on Computing and Networking (CANDAR)
• 2020
This paper proposes a new defense algorithm called Bayes without Bayesian Learning, which requires no additional training phase for resisting adversarial attacks and significantly improves the accuracy of pretrained CNN models under high levels of attack.
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
This paper provides a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and proposes to use Extreme Value Theory for efficient evaluation, which yields a novel robustness metric called CLEVER, short for Cross Lipschitz Extreme Value for nEtwork Robustness.
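The estimation step described above can be illustrated on a toy function: sample points in a ball around the input, record the maximum gradient norm per batch, and treat those maxima as samples for the extreme value fit. A hedged NumPy sketch, assuming a hypothetical analytic function $f(x) = \sin(x_0) + \tfrac{1}{2}x_1^2$ and omitting the reverse Weibull fitting stage that CLEVER performs on the batch maxima:

```python
import numpy as np

rng = np.random.default_rng(4)

def grad(x):
    # Gradient of the toy function f(x) = sin(x0) + 0.5 * x1**2.
    return np.array([np.cos(x[0]), x[1]])

def local_lipschitz_samples(x0, radius=0.5, n_batches=20, batch=50):
    maxima = []
    for _ in range(n_batches):
        # Draw points uniformly in the L2 ball of given radius around x0.
        d = rng.normal(size=(batch, 2))
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        r = radius * rng.random((batch, 1)) ** 0.5   # correct 2-D radial law
        pts = x0 + r * d
        # The per-batch maximum gradient norm lower-bounds the local
        # Lipschitz constant; CLEVER fits a reverse Weibull to these.
        maxima.append(max(np.linalg.norm(grad(p)) for p in pts))
    return maxima

maxima = local_lipschitz_samples(np.array([0.0, 1.0]))
```

For a real network the analytic `grad` would be replaced by backpropagated input gradients.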
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack
• Computer Science, Mathematics
• 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2019
Parametric Noise Injection (PNI) is proposed, which involves trainable Gaussian noise injection at each layer, on either activations or weights, by solving a min-max optimization problem embedded with adversarial training, and effectively improves DNN robustness against adversarial attack.
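The weight-noise variant of the idea above can be sketched in a few lines: the noise magnitude is a coefficient that would be learned jointly with the weights. A minimal sketch, assuming a hypothetical weight matrix and a fixed stand-in value for the trainable coefficient `alpha`:

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(4, 3))   # hypothetical layer weights
alpha = 0.25                  # stand-in; learned via the min-max objective

def pni_weights(W, alpha):
    # Inject Gaussian noise whose scale tracks the weight statistics,
    # modulated by the trainable coefficient alpha.
    eta = rng.normal(scale=W.std(), size=W.shape)
    return W + alpha * eta

W_noisy = pni_weights(W, alpha)
```

Tying the noise scale to `W.std()` keeps the perturbation proportionate to the layer's own weight magnitudes.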
Convergence of Adversarial Training in Overparametrized Neural Networks
• Computer Science, Mathematics
• NeurIPS
• 2019
This paper provides a partial answer to the success of adversarial training, by showing that it converges to a network where the surrogate loss with respect to the attack algorithm is within $\epsilon$ of the optimal robust loss.
Detecting Adversarial Examples with Bayesian Neural Network
• Computer Science, Mathematics
• ArXiv
• 2021
A novel Bayesian adversarial example detector, BATector for short, is proposed that uses the randomness of a Bayesian neural network to simulate the hidden-layer output distribution and leverages the distribution's dispersion to detect adversarial examples.
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
• Computer Science
• ArXiv
• 2019
It is shown that shrinking the model size through proper weight pruning can even help improve DNN robustness under adversarial attack.
• Computer Science
• ArXiv
• 2021
The proposed Gradient Diversity (GradDiv) regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods and efficiently reduce the transferability among sample models of randomized neural networks.

#### References

Showing 1-10 of 46 references
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
This paper provides a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and proposes to use Extreme Value Theory for efficient evaluation, which yields a novel robustness metric called CLEVER, short for Cross Lipschitz Extreme Value for nEtwork Robustness.
Towards Evaluating the Robustness of Neural Networks
• Computer Science
• 2017 IEEE Symposium on Security and Privacy (SP)
• 2017
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks
• Mathematics, Computer Science
• ArXiv
• 2017
It is empirically shown that ensemble methods not only improve the accuracy of neural networks on test data but also increase their robustness against adversarial perturbations.
Towards Deep Learning Models Resistant to Adversarial Attacks
• Computer Science, Mathematics
• ICLR
• 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
• Computer Science, Mathematics
• ArXiv
• 2017
This paper investigates model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model; the results show a method for implicit adversarial detection that is oblivious to the attack algorithm.
• Computer Science
• ICLR
• 2018
This paper proposes to utilize randomization at inference time to mitigate adversarial effects, using two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input image in a random manner.
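The padding half of that scheme is straightforward to sketch: place the image at a random offset inside a zero canvas so the padding pattern differs on every call. A minimal single-channel sketch, assuming a hypothetical `28x28` input padded to `32x32` (the paper's actual sizes may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_pad(img, out_size):
    # Pad the image with zeros at a random offset; the attacker cannot
    # know where the image will land at inference time.
    h, w = img.shape
    top = rng.integers(0, out_size - h + 1)
    left = rng.integers(0, out_size - w + 1)
    out = np.zeros((out_size, out_size), dtype=img.dtype)
    out[top:top + h, left:left + w] = img
    return out

img = np.ones((28, 28))
padded = random_pad(img, 32)
```

Random resizing works the same way in spirit: the input geometry seen by the network changes on every forward pass.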
Stochastic Activation Pruning for Robust Adversarial Defense
Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
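The sampling-and-rescaling step can be sketched directly: draw indices with probability proportional to activation magnitude, zero out everything not drawn, and rescale survivors by the inverse of their keep probability so the layer output stays unbiased. A simplified NumPy sketch under those assumptions (the hypothetical `n_samples` controls how aggressively activations are pruned):

```python
import numpy as np

rng = np.random.default_rng(3)

def sap(activations, n_samples=8):
    a = np.asarray(activations, dtype=float)
    # Sampling distribution: larger-magnitude activations are more
    # likely to survive, so small ones are preferentially pruned.
    p = np.abs(a) / np.abs(a).sum()
    drawn = rng.choice(a.size, size=n_samples, p=p)   # with replacement
    keep = np.zeros(a.size, dtype=bool)
    keep[drawn] = True
    # Probability each index survives at least one of n_samples draws;
    # dividing by it keeps the output unbiased in expectation.
    keep_prob = 1.0 - (1.0 - p) ** n_samples
    return np.where(keep, a / keep_prob, 0.0)

x = np.array([0.1, -2.0, 0.05, 1.5, -0.3])
y = sap(x)
```

The randomness of which activations survive is what an attacker's gradient estimate must average over.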
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
• Computer Science, Mathematics
• 2016 IEEE Symposium on Security and Privacy (SP)
• 2016
The study shows that defensive distillation can reduce the effectiveness of sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
Elastic-net attacks to DNNs (EAD) feature $L_1$-oriented adversarial examples and include the state-of-the-art $L_2$ attack as a special case, suggesting novel insights on leveraging $L_1$ distortion in adversarial machine learning and the security implications of DNNs.