Dropping Pixels for Adversarial Robustness
@article{Hosseini2019DroppingPF, title={Dropping Pixels for Adversarial Robustness}, author={Hossein Hosseini and Sreeram Kannan and Radha Poovendran}, journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)}, year={2019}, pages={91-97} }
Deep neural networks are vulnerable to adversarial examples. In this paper, we propose to train and test the networks on randomly subsampled images with high drop rates. We show that this approach significantly improves robustness against adversarial examples under bounded L0, L2 and L∞ perturbations, while reducing standard accuracy only slightly. We argue that subsampling pixels can be thought of as providing a set of robust features for the input image and, thus, improves…
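As a rough illustration of the pipeline described in the abstract, the sketch below applies a fresh Bernoulli keep mask with a high drop rate to each image; the specific drop rate, the choice of zero-filling dropped pixels, and the shared mask across channels are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def drop_pixels(image, drop_rate=0.9, rng=None):
    """Randomly zero out a large fraction of pixel locations.

    image: array of shape (H, W, C); the same mask is applied to all channels.
    drop_rate: fraction of pixels to drop (a high rate, e.g. 0.9, per the abstract).
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    keep_mask = rng.random((h, w)) >= drop_rate        # True where a pixel survives
    return image * keep_mask[..., None]                # broadcast the mask over channels

# A fresh mask is drawn for every training batch and again at inference,
# so the network only ever sees sparse, randomly subsampled views of the input.
```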
8 Citations
Cognitive data augmentation for adversarial defense via pixel masking
- Computer Science · Pattern Recognit. Lett.
- 2021
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
- Computer Science · AAAI
- 2020
This paper proposes an efficient and certifiably robust defense against sparse adversarial attacks by randomly ablating input features, rather than using additive noise, and empirically demonstrates that the classifier is highly robust to modern sparse adversarial attacks on MNIST.
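For intuition on the ablation idea summarized above (randomly retaining a small set of input features rather than adding noise), a minimal sketch follows; the retention count k and the extra "kept" channel used to mark ablated positions are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def randomly_ablate(image, k=45, rng=None):
    """Keep k randomly chosen pixels and ablate the rest.

    image: array of shape (H, W, C). Returns an array with one extra channel that
    flags which pixels were kept, so a classifier can tell 'ablated' from 'zero'.
    """
    rng = rng or np.random.default_rng()
    h, w, c = image.shape
    chosen = rng.choice(h * w, size=k, replace=False)   # positions to retain
    keep = np.zeros(h * w, dtype=bool)
    keep[chosen] = True
    keep = keep.reshape(h, w)
    return np.concatenate([image * keep[..., None],
                           keep[..., None].astype(image.dtype)], axis=-1)
```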
Defective Convolutional Networks
- Computer Science
- 2019
Robustness of convolutional neural networks (CNNs) has gained in importance on account of adversarial examples, i.e., inputs with well-designed perturbations that are imperceptible to humans but…
Defective Convolutional Layers Learn Robust CNNs
- Computer Science
- 2019
Experimental results demonstrate that the defective CNN has higher defense ability than the standard CNN against various types of attacks, and achieves state-of-the-art performance against transfer-based attacks without applying any adversarial training.
Bio-inspired Robustness: A Review
- Computer Science · ArXiv
- 2021
A set of criteria for proper evaluation of DCNNs is proposed, and different models are analyzed according to these criteria, to make DCNNs one step closer to a model of human vision.
Enhancing Certifiable Robustness via a Deep Model Ensemble
- Computer Science · ArXiv
- 2019
The proposed ensemble framework with certified robustness, RobBoost, formulates the optimal model selection and weighting task as an optimization problem on a lower bound of classification margin, which can be efficiently solved using coordinate descent.
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
- Computer Science · IEEE Design & Test
- 2020
Various challenges and probable solutions for security attacks on ML-inspired hardware and software techniques in smart cyber-physical systems (CPS) and the Internet of Things (IoT) are discussed.
Defective Convolutional Layers Learn Robust CNNs
- Computer Science · ArXiv
- 2019
Experimental results demonstrate that the defective CNN has higher defense ability than the standard CNN against various types of attacks, and achieves state-of-the-art performance against transfer-based attacks without applying any adversarial training.
References
Showing 1-10 of 31 references
Stochastic Activation Pruning for Robust Adversarial Defense
- Computer Science · ICLR
- 2018
Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
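A rough sketch of the pruning rule summarized above: activations are kept with probability proportional to their magnitude and survivors are rescaled so the layer's expected output is preserved. This simplified Bernoulli variant is an assumption for illustration; the paper's sampling procedure differs in detail.

```python
import numpy as np

def stochastic_activation_prune(acts, keep_frac=0.5, rng=None):
    """Keep activations with probability proportional to magnitude, rescale survivors.

    Each activation i is kept with probability p_i proportional to |a_i| (clipped to 1),
    and kept values are divided by p_i so the layer's expected output is unchanged.
    """
    rng = rng or np.random.default_rng()
    mag = np.abs(acts)
    total = mag.sum()
    if total == 0:
        return acts
    p = np.clip(keep_frac * acts.size * mag / total, 1e-6, 1.0)   # keep probabilities
    mask = rng.random(acts.shape) < p
    return np.where(mask, acts / p, 0.0)
```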
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
- Computer Science · AISec@CCS
- 2017
It is concluded that adversarial examples are significantly harder to detect than previously appreciated, and that the properties believed to be intrinsic to adversarial examples are in fact not.
Mitigating adversarial effects through randomization
- Computer Science · ICLR
- 2018
This paper proposes to utilize randomization at inference time to mitigate adversarial effects, and uses two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input image in a random manner.
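A minimal PyTorch sketch of the two inference-time operations mentioned above, random resizing followed by random zero padding; the output size and the assumption of square inputs no larger than that size are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def random_resize_pad(x, out_size=331):
    """Randomly resize a batch of square images, then zero-pad to a fixed output size.

    x: tensor of shape (N, C, H, W) with H == W <= out_size. The resize target and
    the padding split are drawn fresh on every call, so each forward pass differs.
    """
    new_size = int(torch.randint(x.shape[-1], out_size + 1, (1,)))  # random edge length
    x = F.interpolate(x, size=(new_size, new_size), mode="nearest")
    pad_total = out_size - new_size
    left = int(torch.randint(0, pad_total + 1, (1,)))
    top = int(torch.randint(0, pad_total + 1, (1,)))
    # F.pad takes (left, right, top, bottom) for the last two dimensions
    return F.pad(x, (left, pad_total - left, top, pad_total - top), value=0.0)
```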
Certified Robustness to Adversarial Examples with Differential Privacy
- Computer Science · 2019 IEEE Symposium on Security and Privacy (SP)
- 2019
This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism.
Towards Deep Learning Models Resistant to Adversarial Attacks
- Computer Science · ICLR
- 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
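The robust-optimization view frames training as a min-max problem, min over model parameters of the expected max over perturbations with bounded L∞ norm of the loss; the inner maximization is commonly approximated with projected gradient descent (PGD). The sketch below is a generic L∞ PGD loop under that formulation, with step sizes chosen for illustration.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Approximately solve max_{||delta||_inf <= eps} loss(model(x + delta), y)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()                    # ascend the loss
            delta.clamp_(-eps, eps)                         # project back onto the L_inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)        # keep x + delta a valid image
    return (x + delta).detach()
```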
Explaining and Harnessing Adversarial Examples
- Computer Science · ICLR
- 2015
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
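The linearity argument motivates the fast gradient sign method: perturb each pixel by ε in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇_x L(x, y)). A minimal PyTorch sketch, with ε chosen only as a typical illustrative value:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.25):
    """x_adv = x + eps * sign(grad_x loss); eps here is an illustrative MNIST-scale value."""
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```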
Towards Evaluating the Robustness of Neural Networks
- Computer Science · 2017 IEEE Symposium on Security and Privacy (SP)
- 2017
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that are successful on both distilled and undistilled neural networks with 100% probability.
Thwarting Adversarial Examples: An L_0-Robust Sparse Fourier Transform
- Computer Science · NeurIPS
- 2018
We give a new algorithm for approximating the discrete Fourier transform of an approximately sparse signal that is robust to worst-case L0 corruptions, namely that some coordinates of the signal…
Robust Physical-World Attacks on Deep Learning Models
- Computer Science
- 2017
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
Adversarial Machine Learning at Scale
- Computer Science · ICLR
- 2017
This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and resolves a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.