Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks

@article{Yuan2020EnsembleGC,
  title={Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks},
  author={Jianhe Yuan and Zhihai He},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={578-587}
}
  • Jianhe Yuan, Zhihai He
  • Published 23 April 2020
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under powerful white-box attacks. In this paper, we develop a new method called ensemble generative cleaning with feedback loops (EGC-FL) for effective defense of deep neural networks. The proposed EGC-FL method is based on two central ideas. First, we introduce a transformed deadzone layer into the defense network, which consists of an orthonormal transform and a deadzone-based… 
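The transformed deadzone idea can be pictured with a short PyTorch-style sketch (an illustration only, not the authors' implementation): features pass through an orthonormal transform, small coefficients are zeroed by a deadzone, and the result is mapped back. The random QR-based transform and the threshold value are assumptions made for this example.

```python
import torch

class TransformedDeadzoneLayer(torch.nn.Module):
    """Illustrative layer: orthonormal transform -> deadzone -> inverse transform.
    The random orthonormal basis and the fixed threshold are placeholder choices."""
    def __init__(self, dim, threshold=0.1):
        super().__init__()
        q, _ = torch.linalg.qr(torch.randn(dim, dim))  # random orthonormal basis
        self.register_buffer("Q", q)
        self.threshold = threshold

    def forward(self, x):
        # x: (batch, dim) flattened features
        coeffs = x @ self.Q                                   # forward transform
        cleaned = torch.where(coeffs.abs() < self.threshold,  # deadzone: zero out
                              torch.zeros_like(coeffs),       # small, noise-like
                              coeffs)                         # coefficients
        return cleaned @ self.Q.T                             # inverse transform
```

The intuition is that perturbation energy spread thinly over many transform coefficients gets squashed by the deadzone, while large signal coefficients pass through unchanged.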

Towards Robust Neural Networks via Orthogonal Diversity

A novel defense is proposed that augments the model to learn features adaptive to diverse inputs, including adversarial examples; extensive empirical results demonstrate the adversarial robustness of the proposed DIO.

An Eye for an Eye: Defending against Gradient-based Attacks with Gradients

A Two-stream Restoration Network (TRN) is proposed that can defend against a wide range of attack methods without significantly degrading the performance on benign inputs, and is shown to be generalizable, scalable, and hard to bypass.

AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning

This work proposes AGKD-BML, a novel adversarial-training-based model combining Attention Guided Knowledge Distillation and Bi-directional Metric Learning, which consistently outperforms state-of-the-art approaches.

Advances in adversarial attacks and defenses in computer vision: A survey

This review article thoroughly discusses the first-generation attacks and comprehensively covers the modern attacks and their defenses appearing in the prestigious sources of computer vision and machine learning research.

A novel approach to generating high-resolution adversarial examples

This work proposes a feasible approach that improves on the AdvGAN framework through data augmentation, combined with PCA and KPCA to map the input instance’s main features onto the latent variables, and can generate strongly semantic adversarial examples with better transferability against prevailing DNN classification models.

Toward Evaluating the Reliability of Deep-Neural-Network-Based IoT Devices

This article proposes a novel adversarial attack named non-gradient attack (NGA), whose search strategy is effective yet no longer depends on gradients, to enhance the threat of adversarial examples, and proposes a new evaluation metric, the composite criterion (CC), based on both ASR and accuracy, to better measure the effectiveness of adversarial training.

DISCO: Adversarial Defense with Local Implicit Functions

A novel adversarial defense for image classification, the local implicit module DISCO, is proposed to remove adversarial perturbations by localized manifold projections; it is shown to be data- and parameter-efficient and to mount defenses that transfer across datasets, classifiers, and attacks.

Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II

A literature review of the contributions made by the computer vision community to adversarial attacks on deep learning; whereas the first survey covered work up to 2018, this second installment focuses on the advances in the area since 2018.

References

Showing 1–10 of 41 references

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

The proposed Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against adversarial perturbations, is empirically shown to be consistently effective against different attack methods and improves on existing defense strategies.
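As a rough sketch of the Defense-GAN-style cleaning step (assuming a pretrained `generator` that maps latent vectors to images; the step, restart, and learning-rate values are illustrative), the input is projected onto the generator's range by gradient descent over the latent code, and the reconstruction is what gets classified:

```python
import torch

def defense_gan_project(x, generator, z_dim=128, steps=200, restarts=10, lr=0.05):
    """Approximate projection of x onto the range of a pretrained generator:
    minimize ||G(z) - x||^2 over z by gradient descent, keeping the best restart."""
    best_rec, best_err = None, float("inf")
    for _ in range(restarts):
        z = torch.randn(x.size(0), z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            loss = ((generator(z) - x) ** 2).flatten(1).sum(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            err = ((generator(z) - x) ** 2).sum().item()
            if err < best_err:
                best_err, best_rec = err, generator(z).detach()
    return best_rec  # classify this cleaned reconstruction instead of x
```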

Boosting Adversarial Attacks with Momentum

A broad class of momentum-based iterative algorithms to boost adversarial attacks by integrating the momentum term into the iterative process for attacks, which can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples.
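The momentum-accumulation step can be sketched as follows (a PyTorch-style illustration; `model` is assumed to return logits, and the perturbation budget and step count are placeholder values):

```python
import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, x, y, eps=8 / 255, steps=10, decay=1.0):
    """Sketch of a momentum-based iterative attack: accumulate L1-normalized
    gradients with a momentum term, take sign steps, and project back into
    the eps-ball around the clean input."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # momentum accumulation stabilizes the update direction across iterations
        norm = grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = decay * g + grad / norm
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```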

Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack

Parametric Noise Injection (PNI) is proposed, which injects trainable Gaussian noise at each layer on either activations or weights by solving a min-max optimization problem embedded with adversarial training, and effectively improves DNN robustness against adversarial attacks.
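A minimal PyTorch-style sketch of the weight-noise variant is below (the initial alpha value and applying noise at both training and inference time are illustrative assumptions, not the paper's exact settings):

```python
import torch
import torch.nn.functional as F

class PNIConv2d(torch.nn.Conv2d):
    """Sketch of parametric noise injection on weights: a learnable scalar alpha
    scales Gaussian noise whose std follows the weight statistics; alpha is
    learned jointly with the weights (in the paper, inside adversarial training)."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = torch.nn.Parameter(torch.tensor(0.25))  # illustrative init

    def forward(self, x):
        # inject noise at every forward pass in this sketch
        noise = torch.randn_like(self.weight) * self.weight.std().detach()
        w = self.weight + self.alpha * noise
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```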

Generating Adversarial Examples with Adversarial Networks

Adversarial examples generated by AdvGAN on different target models have a high attack success rate under state-of-the-art defenses compared to other attacks, and placed first with 92.76% accuracy on a public MNIST black-box attack challenge.

Feature Denoising for Improving Adversarial Robustness

It is suggested that adversarial perturbations on images lead to noise in the features constructed by these networks, and new network architectures are developed that increase adversarial robustness by performing feature denoising.
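One way to picture such a denoising block is the sketch below: a non-local, self-similarity-weighted smoothing of the feature map followed by a 1x1 convolution and a residual connection. The similarity function and wiring are assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

class NonLocalDenoiseBlock(torch.nn.Module):
    """Sketch of a feature-denoising block: smooth each spatial position by a
    similarity-weighted average over all positions, then add back to the input
    through a residual connection (O(n^2) memory over spatial positions)."""
    def __init__(self, channels):
        super().__init__()
        self.proj = torch.nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        feats = x.flatten(2)                                   # (b, c, h*w)
        sim = torch.einsum("bci,bcj->bij", feats, feats)       # pairwise similarity
        weights = F.softmax(sim, dim=-1)
        denoised = torch.einsum("bij,bcj->bci", weights, feats).view(b, c, h, w)
        return x + self.proj(denoised)  # residual keeps signal, suppresses noise
```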

Mitigating adversarial effects through randomization

This paper proposes to utilize randomization at inference time to mitigate adversarial effects, and uses two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input image in a random manner.
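The two operations are simple enough to sketch directly (the 299-to-331 size range follows an Inception-style input and is an illustrative assumption):

```python
import random
import torch
import torch.nn.functional as F

def random_resize_pad(x, min_size=299, max_size=331):
    """Inference-time randomization sketch: resize the image batch to a random
    size, then zero-pad back to max_size at a random offset before classifying."""
    new_size = random.randint(min_size, max_size - 1)
    resized = F.interpolate(x, size=(new_size, new_size),
                            mode="bilinear", align_corners=False)
    pad_total = max_size - new_size
    left = random.randint(0, pad_total)
    top = random.randint(0, pad_total)
    # pad order: (left, right, top, bottom), constant zeros
    return F.pad(resized, (left, pad_total - left, top, pad_total - top), value=0.0)
```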

Stochastic Activation Pruning for Robust Adversarial Defense

Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
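A compact sketch of the sampling-and-rescaling step follows (the fraction of draws per layer is an illustrative knob, not the paper's exact parameterization):

```python
import torch

def stochastic_activation_pruning(h, sample_frac=0.5):
    """SAP sketch: draw activations (with replacement) with probability
    proportional to their magnitude, zero everything not drawn, and rescale
    survivors by the inverse of their keep probability so the layer's
    expected output is preserved."""
    flat = h.flatten(1)
    mags = flat.abs() + 1e-12                       # avoid an all-zero distribution
    probs = mags / mags.sum(dim=1, keepdim=True)
    n_draws = max(1, int(sample_frac * flat.size(1)))
    idx = torch.multinomial(probs, n_draws, replacement=True)
    mask = torch.zeros_like(flat).scatter_(1, idx, 1.0)
    keep_prob = 1.0 - (1.0 - probs) ** n_draws      # P(drawn at least once)
    return (flat * mask / keep_prob.clamp_min(1e-12)).view_as(h)
```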

A Direct Approach to Robust Deep Learning Using Adversarial Networks

This paper models the adversarial noise using a generative network, trained jointly with a classification discriminative network as a minimax game, and shows empirically that this adversarial network approach works well against black-box attacks, with performance on par with state-of-the-art methods such as ensemble adversarial training and adversarial training with projected gradient descent.

Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
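The core training objective is compact enough to sketch (the temperature value T=20 is illustrative; the paper studies a range of temperatures):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=20.0):
    """Defensive distillation sketch: the distilled (student) network is trained
    on the teacher's softened class probabilities at a high temperature T; at
    test time it is used at T=1, which flattens its gradients w.r.t. the input."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```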

PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples

Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models; this work leverages generative models to understand what makes these perturbations special and to defend against them.