• Corpus ID: 167217608

ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation

@article{Yang2019MENetTE,
  title={ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation},
  author={Yuzhe Yang and Guo Zhang and Dina Katabi and Zhi Xu},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11971}
}
Deep neural networks are vulnerable to adversarial attacks. […] Key Method: In ME-Net, images are preprocessed using two steps: first, pixels are randomly dropped from the image; then, the image is reconstructed using ME. We show that this process destroys the adversarial structure of the noise while reinforcing the global structure of the original image. Since humans typically rely on such global structures when classifying images, the process makes the network more compatible with human perception. We conduct…
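A minimal sketch of the two preprocessing steps described above, assuming a single-channel image in [0, 1]; the drop probability, the rank, and the use of a truncated SVD as the matrix-estimation step are illustrative stand-ins, not the paper's exact ME solvers.

    import numpy as np

    def menet_preprocess(image, drop_prob=0.5, rank=8, seed=None):
        # Illustrative ME-Net-style preprocessing (assumed hyperparameters):
        # 1) randomly drop pixels from the image (set them to zero);
        # 2) reconstruct with a low-rank approximation (truncated SVD),
        #    standing in for the paper's matrix-estimation step.
        rng = np.random.default_rng(seed)
        mask = rng.random(image.shape) > drop_prob    # keep each pixel w.p. 1 - drop_prob
        observed = image * mask

        u, s, vt = np.linalg.svd(observed, full_matrices=False)
        s[rank:] = 0.0                                # keep only the top-`rank` singular values
        return u @ np.diag(s) @ vt

    # Usage: a real pipeline would feed the reconstruction to the classifier.
    x = np.random.rand(32, 32)
    x_hat = menet_preprocess(x, drop_prob=0.5, rank=8, seed=0)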

Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising

TLDR
The experimental results show that the proposed defense method outperforms state-of-the-art defense techniques by improving robustness against a set of attacks under black-box, gray-box, and white-box settings.

A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks

TLDR
This paper focuses on developing a lightweight defense method that can efficiently invalidate full white-box adversarial attacks while remaining compatible with real-life constraints, and demonstrates outstanding robustness and efficiency.

Threat Model-Agnostic Adversarial Defense using Diffusion Models

TLDR
The defense relies on the addition of i.i.d. Gaussian noise to the attacked image, followed by a pretrained diffusion process – an architecture that performs a stochastic iterative process over a denoising network, yielding a denoised outcome of high perceptual quality.
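A schematic of the noise-then-denoise idea in PyTorch; `denoiser` and `sigma` are placeholders, and a single denoising call stands in for the paper's iterative reverse-diffusion process.

    import torch

    def purify(x, denoiser, sigma=0.25):
        # Add i.i.d. Gaussian noise, then denoise with a pretrained model.
        # In the cited defense the denoiser is a full reverse diffusion process;
        # here one call stands in for that iterative procedure.
        noisy = x + sigma * torch.randn_like(x)
        with torch.no_grad():
            return denoiser(noisy)

    # Example with an identity-like "denoiser" just to show the call pattern.
    x = torch.rand(1, 3, 32, 32)
    x_purified = purify(x, denoiser=lambda t: t.clamp(0.0, 1.0))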

Structure-Preserving Progressive Low-rank Image Completion for Defending Adversarial Attacks

TLDR
This work proposes a structure-preserving progressive low-rank image completion (SPLIC) method that removes unneeded texture details from input images and shifts the bias of deep neural networks towards global object structures and semantic cues, significantly improving the adversarial robustness of the network.

Unsupervised Perturbation based Self-Supervised Adversarial Training

  • Zhuoyi Wang, Yu Lin, B. Thuraisingham
  • Computer Science
    2021 7th IEEE Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS)
  • 2021
TLDR
An instance-level unsupervised perturbation is proposed to replace the supervised class-level adversarial sample in robust training, together with contrastive-learning-based adversarial training (UPAT), which maximizes the agreement between the transformed instance and its corresponding unsupervised perturbed output and encourages the model to suppress vulnerability in the embedding space.
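A rough PyTorch sketch of this idea, assuming a feature extractor `encoder`; the perturbation budget, step size, and the cosine-similarity objective are illustrative choices rather than the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def instance_perturbation(encoder, x, eps=8/255, step=2/255, iters=5):
        # Label-free perturbation: push the embedding of x away from its clean
        # embedding, instead of using a class-level adversarial example.
        with torch.no_grad():
            target = F.normalize(encoder(x), dim=1)
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(iters):
            z = F.normalize(encoder(x + delta), dim=1)
            dissimilarity = 1.0 - (z * target).sum(dim=1).mean()  # cosine distance
            grad, = torch.autograd.grad(dissimilarity, delta)
            with torch.no_grad():
                delta += step * grad.sign()
                delta.clamp_(-eps, eps)
        return (x + delta).detach()

    def upat_agreement_loss(encoder, x):
        # Simplified agreement term: pull the embedding of the perturbed instance
        # back toward the clean embedding (a stand-in for the contrastive objective).
        x_adv = instance_perturbation(encoder, x)
        z = F.normalize(encoder(x), dim=1)
        z_adv = F.normalize(encoder(x_adv), dim=1)
        return 1.0 - (z * z_adv).sum(dim=1).mean()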

Diffusion Models for Adversarial Purification

TLDR
This work proposes DiffPure, which uses diffusion models for adversarial purification, and applies the adjoint method to compute full gradients of the reverse generative process so that the method can be evaluated against strong adaptive attacks.

A Neuro-Inspired Autoencoding Defense Against Adversarial Attacks

TLDR
This paper investigates a radically different, neuro-inspired defense mechanism that aims to reject adversarial perturbations before they reach a classifier DNN: an encoder with characteristics commonly observed in biological vision, followed by a decoder restoring the image dimensions, so that the pair can be cascaded with standard CNN architectures.

Defensive Tensorization

TLDR
Defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network, improves robustness in the face of adversarial attacks for both binary and real-valued networks.

Guided Diffusion Model for Adversarial Purification

TLDR
The core of the approach is to embed purification into the diffusion-denoising process of a Denoising Diffusion Probabilistic Model (DDPM), so that its diffusion process submerges the adversarial perturbations under gradually added Gaussian noise, and both kinds of noise can then be removed simultaneously by a guided denoising process.

Adversarial Adaptive Neighborhood With Feature Importance-Aware Convex Interpolation

TLDR
A new method is introduced that uses correctly predicted samples from disjoint classes to guide the generation of more explainable adversarial samples in the ambiguous region around the decision boundary, rather than in uncontrolled "blind spots", via a feature-component-wise convex combination that takes the individual importance of feature ingredients into account.
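A toy sketch of a feature-importance-aware convex combination; the function name and the way importance scales the mixing coefficient are assumptions for illustration only.

    import numpy as np

    def importance_aware_interpolation(x_anchor, x_guide, importance, lam=0.5):
        # Mix two samples feature-by-feature: each feature gets its own coefficient,
        # scaled by an importance score in [0, 1], so more important features move more.
        importance = np.clip(importance, 0.0, 1.0)
        alpha = lam * importance
        return (1.0 - alpha) * x_anchor + alpha * x_guide

    # Usage with random feature vectors and uniform importance scores.
    x_a, x_g = np.random.rand(128), np.random.rand(128)
    x_mix = importance_aware_interpolation(x_a, x_g, importance=np.random.rand(128))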
...

References

SHOWING 1-10 OF 37 REFERENCES

PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples

Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

TLDR
The proposed Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against adversarial perturbations, is empirically shown to be consistently effective against different attack methods and improves on existing defense strategies.
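A condensed PyTorch sketch of the Defense-GAN projection step, assuming a pretrained `generator`; the latent dimension, step counts, learning rate, and optimizer are placeholders.

    import torch

    def defense_gan_project(x, generator, z_dim=128, steps=200, lr=0.05, restarts=4):
        # Search latent codes so the generator's output matches the input, then
        # return the best reconstruction; the classifier sees this instead of x.
        best, best_err = None, float("inf")
        for _ in range(restarts):
            z = torch.randn(x.size(0), z_dim, requires_grad=True)
            opt = torch.optim.SGD([z], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                ((generator(z) - x) ** 2).mean().backward()
                opt.step()
            with torch.no_grad():
                err = ((generator(z) - x) ** 2).mean().item()
                if err < best_err:
                    best, best_err = generator(z), err
        return best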

Mitigating adversarial effects through randomization

TLDR
This paper proposes to utilize randomization at inference time to mitigate adversarial effects, and uses two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input image in a random manner.
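A small PyTorch sketch of the two randomization operations; the size range and output size are illustrative, not necessarily the values used in the paper.

    import torch
    import torch.nn.functional as F

    def random_resize_and_pad(x, out_size=331, min_size=299, max_size=330):
        # Resize the input to a random size, then zero-pad to a fixed output
        # size at a random position, so the computation differs across runs.
        size = int(torch.randint(min_size, max_size + 1, (1,)))
        x = F.interpolate(x, size=(size, size), mode="nearest")
        pad_total = out_size - size
        left = int(torch.randint(0, pad_total + 1, (1,)))
        top = int(torch.randint(0, pad_total + 1, (1,)))
        return F.pad(x, (left, pad_total - left, top, pad_total - top))

    x = torch.rand(1, 3, 299, 299)
    x_rand = random_resize_and_pad(x)   # shape (1, 3, 331, 331)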

Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks

TLDR
Two feature squeezing methods are explored: reducing the color bit depth of each pixel and spatial smoothing, which are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
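A minimal sketch of the two squeezers and the detection heuristic, assuming images in [0, 1] with channels last and a `model_predict` function returning class probabilities; the bit depth, filter size, and L1 score are typical choices, not prescriptions.

    import numpy as np
    from scipy.ndimage import median_filter

    def reduce_bit_depth(x, bits=4):
        # Quantize pixels in [0, 1] to 2**bits levels.
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    def spatial_smooth(x, size=2):
        # Median smoothing applied per channel for an H x W x C image.
        return median_filter(x, size=(size, size, 1))

    def squeezing_score(model_predict, x, bits=4, size=2):
        # Compare predictions on the original and squeezed inputs; a large L1 gap
        # flags a likely adversarial example (threshold chosen on clean data).
        p = model_predict(x)
        p_squeezed = model_predict(spatial_smooth(reduce_bit_depth(x, bits), size))
        return np.abs(p - p_squeezed).sum()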

Towards Deep Learning Models Resistant to Adversarial Attacks

TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
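A compact PyTorch sketch of the first-order adversary and the resulting robust training step; epsilon, step size, and iteration count are typical L-infinity values for CIFAR-10, used here only for illustration.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
        # Projected gradient descent inside an L-infinity ball of radius eps.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(iters):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta += step * grad.sign()
                delta.clamp_(-eps, eps)
                delta.copy_((x + delta).clamp(0, 1) - x)   # keep pixels valid
        return (x + delta).detach()

    def adversarial_training_step(model, optimizer, x, y):
        # One robust-optimization step: train on the worst case found by PGD.
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()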

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

TLDR
This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision, reviewing the works that design adversarial attack, analyze the existence of such attacks and propose defenses against them.

Countering Adversarial Images using Input Transformations

TLDR
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system, and shows that total variance minimization and image quilting are very effective defenses in practice, when the network is trained on transformed images.
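A one-function sketch of the total-variance-minimization transformation using scikit-image (a recent version providing `channel_axis`); the weight is illustrative, and the paper's randomized pixel selection and image quilting are omitted.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_minimize(x, weight=0.1):
        # Total-variation minimization smooths high-frequency (often adversarial)
        # detail while preserving large image structures.
        return denoise_tv_chambolle(x, weight=weight, channel_axis=-1)

    x = np.random.rand(32, 32, 3)          # H x W x C image in [0, 1]
    x_tv = tv_minimize(x)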

Improving Adversarial Robustness by Data-Specific Discretization

TLDR
Systematic evaluation demonstrates that the proposed gradient-masking preprocessing technique is effective in improving adversarial robustness on MNIST, CIFAR-10, and ImageNet, for either naturally or adversarially trained models.
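A toy sketch of data-specific discretization, assuming channels-last images in [0, 1]; the codebook construction here (random sampling of training pixels) is a simplification of whatever palette the paper actually builds.

    import numpy as np

    def build_codebook(images, n_codes=8, seed=0):
        # Data-specific palette: sample representative pixel colors from training data.
        rng = np.random.default_rng(seed)
        pixels = images.reshape(-1, images.shape[-1])
        idx = rng.choice(len(pixels), size=n_codes, replace=False)
        return pixels[idx]

    def discretize(x, codebook):
        # Snap every pixel to its nearest code, wiping out small perturbations.
        flat = x.reshape(-1, x.shape[-1])
        d = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        return codebook[d.argmin(axis=1)].reshape(x.shape)

    train = np.random.rand(100, 32, 32, 3)
    codebook = build_codebook(train)
    x_disc = discretize(np.random.rand(32, 32, 3), codebook)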

Towards Evaluating the Robustness of Neural Networks

TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.

Certified Robustness to Adversarial Examples with Differential Privacy

TLDR
This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism.