Corpus ID: 125918953

Robust Pre-Processing: A Robust Defense Method Against Adversary Attack

@article{Rakin2018RobustPA,
  title={Robust Pre-Processing: A Robust Defense Method Against Adversary Attack},
  author={Adnan Siraj Rakin and Zhezhi He and Boqing Gong and Deliang Fan},
  journal={arXiv: Learning},
  year={2018}
}
Deep learning algorithms and networks are vulnerable to perturbed inputs, known as adversarial attacks. Many defense methodologies have been investigated to defend against such adversarial attacks. In this work, we propose a novel methodology to defend against the existing powerful attack models. Such attack models have achieved record success against the MNIST dataset, forcing classifiers to misclassify all of their inputs. In contrast, our proposed defense method, robust pre-processing, achieves the best accuracy…
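The abstract does not spell out the pre-processing transform itself. As a purely illustrative sketch of the general pattern such defenses follow (not the method proposed in this paper), a classifier can be wrapped so that every input passes through a fixed transform before inference; the quantization transform and dummy classifier below are hypothetical stand-ins.

import numpy as np

def preprocess(x, bits=4):
    """Hypothetical stand-in transform: quantize pixel values in [0, 1]."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

class PreprocessedClassifier:
    """Wraps any classifier so inputs are transformed before prediction."""
    def __init__(self, base_predict, transform=preprocess):
        self.base_predict = base_predict
        self.transform = transform

    def predict(self, x):
        return self.base_predict(self.transform(x))

def dummy_predict(x):
    # Toy decision rule standing in for a real trained classifier.
    return x.reshape(len(x), -1).sum(axis=1) > 392

# Usage with random 28x28 "images" in [0, 1].
model = PreprocessedClassifier(dummy_predict)
x = np.random.rand(8, 28, 28)
print(model.predict(x))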

Citations

Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples
TLDR
This work introduces a new attacking scheme for the attacker, sets a practical constraint for white-box attacks, and presents the best defense reported to date against some of the recent strong attacks.
Adversarial Examples in Deep Learning: Characterization and Divergence
TLDR
This paper provides a general formulation of adversarial examples, elaborates on the basic principles of adversarial attack algorithm design, and conducts an extensive experimental study of adversarial behavior in easy and hard attacks under deep learning models with different hyperparameters and different deep learning frameworks.
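As a compact statement of the standard formulation such characterizations build on (generic notation, not necessarily the paper's): an adversarial example is a minimally perturbed input that changes the model's decision,

\min_{\delta}\ \|\delta\|_p \quad \text{s.t.} \quad f(x+\delta) \neq f(x), \qquad x+\delta \in [0,1]^n .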
FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning
TLDR
This work generates an adversarial attack image against a "VGGNet" DNN trained on the "German Traffic Sign Recognition Benchmarks" dataset; despite containing no visible noise, the image causes the classifier to misclassify even in the presence of preprocessing noise filters.
Generative Adversarial Perturbations
TLDR
Novel generative models are proposed for creating adversarial examples: slightly perturbed images that resemble natural images but are maliciously crafted to fool pre-trained models, obviating the need to hand-craft attack methods for each task.

References

SHOWING 1-10 OF 22 REFERENCES
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
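Defensive distillation trains on soft labels produced by a softmax with a raised temperature T; the temperature-scaled softmax at the heart of the method is

F_i(x) = \frac{e^{z_i(x)/T}}{\sum_{j} e^{z_j(x)/T}} .

The first network is trained at temperature T, its soft class probabilities are then used as labels to train the distilled network at the same T, and the distilled network is deployed at T = 1.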
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
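The robust-optimization lens amounts to the saddle-point problem below; the inner maximization is approximated with projected gradient descent (PGD), the "first-order adversary" referenced in the summary:

\min_{\theta}\ \mathbb{E}_{(x,y)\sim\mathcal{D}} \left[ \max_{\delta \in \mathcal{S}} L(\theta, x+\delta, y) \right], \qquad \mathcal{S} = \{\delta : \|\delta\|_\infty \le \epsilon\},

x^{t+1} = \Pi_{x+\mathcal{S}} \left( x^{t} + \alpha\, \mathrm{sign}\!\left( \nabla_{x} L(\theta, x^{t}, y) \right) \right).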
Ensemble Adversarial Training: Attacks and Defenses
TLDR
This work finds that adversarial training remains vulnerable to black-box attacks, where perturbations computed on undefended models are transferred to the defended model, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
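The single-step attack with a random step can be written roughly as follows; this rendering is a common paraphrase rather than a quotation of the paper:

x' = x + \alpha\,\mathrm{sign}\big(\mathcal{N}(0, I)\big), \qquad x^{adv} = x' + (\epsilon - \alpha)\,\mathrm{sign}\big(\nabla_{x'} L(\theta, x', y)\big).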
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
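The crafting algorithm scores input features with an adversarial saliency map built from the network's forward derivative; for a target class t it takes roughly the following form (a paraphrase, so treat the exact expression as an approximation):

S(x, t)[i] =
\begin{cases}
0 & \text{if } \frac{\partial F_t(x)}{\partial x_i} < 0 \ \text{ or } \ \sum_{j \neq t} \frac{\partial F_j(x)}{\partial x_i} > 0, \\
\frac{\partial F_t(x)}{\partial x_i}\,\Big|\sum_{j \neq t} \frac{\partial F_j(x)}{\partial x_i}\Big| & \text{otherwise.}
\end{cases}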
Towards Deep Neural Network Architectures Robust to Adversarial Examples
TLDR
The Deep Contractive Network is proposed, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE), to increase the network's robustness to adversarial examples without a significant performance penalty.
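The CAE-style smoothness penalty that inspires the training procedure adds the Frobenius norm of the Jacobian of a hidden representation h with respect to the input to the task loss; the paper applies a layer-wise variant of this idea, so the single-term form below is an approximation:

L(\theta) = L_{\text{task}}(\theta) + \lambda \left\| \frac{\partial h(x)}{\partial x} \right\|_F^2 .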
Adversarial examples in the physical world
TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that even in physical world scenarios, machine learning systems are vulnerable to adversarial examples.
Evasion Attacks against Machine Learning at Test Time
TLDR
This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
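Stripped of the paper's additional mimicry term, the gradient-based evasion strategy is essentially projected gradient descent on the classifier's continuous discriminant function g, constrained to stay within a distance budget d_max of the original sample x_0:

x^{(k+1)} = \Pi_{\{x \,:\, d(x, x_0) \le d_{\max}\}} \Big( x^{(k)} - \eta\, \nabla g\big(x^{(k)}\big) \Big).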
Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers
TLDR
A strategy for incorporating dimensionality reduction via Principal Component Analysis to enhance the resilience of machine learning, targeting both the classification and the training phases, is presented and investigated.
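A minimal sketch of this defense pattern, assuming scikit-learn and flattened image vectors (the dataset, classifier, and number of components are placeholders, not the paper's exact setup):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; in practice this would be e.g. flattened MNIST images.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 784))
y_train = rng.integers(0, 10, size=1000)

# Classifier trained on a k-dimensional PCA projection of the inputs.
k = 40  # number of retained principal components (placeholder)
defended_model = make_pipeline(PCA(n_components=k), LogisticRegression(max_iter=1000))
defended_model.fit(X_train, y_train)

# At test time every input (adversarial or not) is projected the same way.
X_test = rng.random((5, 784))
print(defended_model.predict(X_test))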
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
TLDR
Two feature squeezing methods are explored: reducing the color bit depth of each pixel and spatial smoothing, which are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
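A minimal sketch of the two squeezers and the joint detection rule, assuming NumPy/SciPy and any model exposing a predict_proba-style score function (the dummy model and the threshold are placeholders):

import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    """Squeeze 1: quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def spatial_smooth(x, size=2):
    """Squeeze 2: local median smoothing over each image's spatial dims."""
    return np.stack([median_filter(img, size=size) for img in x])

def is_adversarial(predict_proba, x, threshold=1.0):
    """Flag inputs whose predictions move too much under squeezing."""
    p_orig = predict_proba(x)
    d1 = np.abs(predict_proba(reduce_bit_depth(x)) - p_orig).sum(axis=1)
    d2 = np.abs(predict_proba(spatial_smooth(x)) - p_orig).sum(axis=1)
    return np.maximum(d1, d2) > threshold  # L1 distance between score vectors

# Usage with a dummy softmax model on random 28x28 "images".
def dummy_predict_proba(x):
    logits = x.reshape(len(x), -1) @ np.ones((784, 10)) * 0.01
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = np.random.rand(4, 28, 28)
print(is_adversarial(dummy_predict_proba, x))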
Thermometer Encoding: One Hot Way To Resist Adversarial Examples
TLDR
A simple modification to standard neural network architectures, thermometer encoding, is proposed, which significantly increases the robustness of the network to adversarial examples, and the properties of these networks are explored, providing evidence that thermometer encodings help neural networks to find more non-linear decision boundaries.
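A minimal sketch of thermometer encoding for pixel values in [0, 1], assuming NumPy; the number of levels is a placeholder. Each pixel is discretized and replaced by a cumulative one-hot vector, so small pixel perturbations cannot produce correspondingly small changes in the encoding.

import numpy as np

def thermometer_encode(x, levels=16):
    """Encode pixels in [0, 1] as cumulative (thermometer) one-hot vectors.

    Output shape: x.shape + (levels,), where channel j is 1 iff x >= j / levels.
    """
    thresholds = np.arange(levels) / levels  # 0, 1/levels, 2/levels, ...
    return (x[..., None] >= thresholds).astype(np.float32)

# Example: a 2x2 "image".
img = np.array([[0.00, 0.10],
                [0.55, 0.95]])
enc = thermometer_encode(img, levels=4)
print(enc.shape)   # (2, 2, 4)
print(enc[1, 1])   # [1. 1. 1. 1.]  -> 0.95 clears all four thresholds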