Defending Against Adversarial Attack Towards Deep Neural Networks Via Collaborative Multi-Task Training

@article{Wang2022DefendingAA,
  title={Defending Against Adversarial Attack Towards Deep Neural Networks Via Collaborative Multi-Task Training},
  author={Derui Wang and Chaoran Li and Sheng Wen and Surya Nepal and Yang Xiang},
  journal={IEEE Transactions on Dependable and Secure Computing},
  year={2022},
  volume={19},
  pages={953-965}
}
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples that contain human-imperceptible perturbations. A series of defence methods, either proactive or reactive, has been proposed in recent years. However, most of these methods can only handle specific attacks. For example, proactive defence methods are ineffective against grey-box or white-box attacks, while reactive defence methods are challenged by low-distortion adversarial examples or… 

Defending against sparse adversarial attacks using impulsive noise reduction filters

Experimental results obtained on the German Traffic Sign Recognition Benchmark demonstrate that the denoising filters provide high robustness against sparse adversarial attacks without significantly decreasing classification performance on non-altered data.
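
As a rough illustration of this idea, the sketch below applies a per-channel median filter before classification; the denoise_then_classify helper, the classifier callable, and the kernel size are illustrative assumptions rather than the exact filters evaluated in the paper.

import numpy as np
from scipy.ndimage import median_filter

def denoise_then_classify(image, classifier, kernel_size=3):
    # Sparse (few-pixel) adversarial perturbations behave like impulsive noise,
    # so a small median filter tends to remove them while leaving most of the
    # image content intact. image: H x W x C array in [0, 1].
    filtered = np.stack(
        [median_filter(image[..., c], size=kernel_size) for c in range(image.shape[-1])],
        axis=-1,
    )
    return classifier(filtered)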

Adversarial Machine Learning in Image Classification: A Survey Toward the Defender’s Perspective

Novel taxonomies for categorizing adversarial attacks and defenses are introduced, and possible reasons for the existence of adversarial examples are discussed, providing a defender's perspective on adversarial machine learning in image classification.

RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition

RobustSense aims to produce consistent predictions regardless of whether its input is under attack, alleviating the distribution shift caused by adversarial perturbations, and can enhance the robustness of existing deep models against such attacks.
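
A minimal sketch of such a prediction-consistency objective, assuming a PyTorch classifier and a KL-based consistency term; the exact RobustSense loss may differ.

import torch.nn.functional as F

def consistency_loss(model, x_clean, x_perturbed, labels, alpha=1.0):
    logits_clean = model(x_clean)
    logits_pert = model(x_perturbed)
    task_loss = F.cross_entropy(logits_clean, labels)
    # Encourage the same prediction whether or not the input is perturbed.
    consistency = F.kl_div(
        F.log_softmax(logits_pert, dim=1),
        F.softmax(logits_clean, dim=1),
        reduction="batchmean",
    )
    return task_loss + alpha * consistency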

Compressive Sensing Based Adaptive Defence Against Adversarial Images

Experimental results against five state-of-the-art white-box attacks on MNIST and CIFAR-10 show that the proposed CAD algorithm achieves excellent classification accuracy and generates good-quality reconstructed images at much lower computational cost.

Detecting Adversarial Perturbations in Multi-Task Perception

A novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks (i.e., depth estimation and semantic segmentation) is proposed, together with a novel edge consistency loss between all three modalities that improves their initial consistency and thereby supports the detection scheme.
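
A hedged sketch of an edge-consistency score between two modalities (e.g., a predicted depth map and a segmentation map) using Sobel gradients; the paper's actual loss over all three modalities may be formulated differently.

import torch
import torch.nn.functional as F

def sobel_edges(x):
    # x: B x 1 x H x W single-channel map
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx.to(x.device), padding=1)
    gy = F.conv2d(x, ky.to(x.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_consistency(depth, seg_probs):
    # Collapse segmentation probabilities to a label map before edge extraction.
    seg_map = seg_probs.argmax(dim=1, keepdim=True).float()
    # A large discrepancy between the two edge maps hints at an adversarial input.
    return F.l1_loss(sobel_edges(depth), sobel_edges(seg_map))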

Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions

A comprehensive review of the main approaches to adversarial attacks and defenses in the context of social media applications is provided, with a particular focus on key challenges and future research directions.

Fast Adversarial Training for Deep Neural Networks

The aim of this thesis is to review Python implementations of models robust to adversarial attacks and to apply fast training techniques to them in order to reduce computation time.
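
A minimal sketch of one FGSM-based fast adversarial training step in PyTorch; the epsilon value, random initialization, and single-step inner maximization are illustrative defaults rather than the exact recipe reviewed in the thesis.

import torch
import torch.nn.functional as F

def fast_adv_training_step(model, optimizer, x, y, epsilon=8 / 255):
    # Random start inside the epsilon ball, then a single FGSM step instead of
    # a multi-step PGD inner loop; this is what makes the training "fast".
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    F.cross_entropy(model(x + delta), y).backward()
    delta = (delta + epsilon * delta.grad.sign()).clamp(-epsilon, epsilon).detach()
    x_adv = (x + delta).clamp(0, 1)

    # Update the model on the adversarially perturbed batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()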

Algorithms for Detecting and Preventing Attacks on Machine Learning Models in Cyber-Security Problems

An overview of attacks on models and training datasets that aim at a destructive (poisoning) effect is presented, together with a comparative analysis of how resistant the models most frequently used in operational systems are to such destructive actions.

No Classifier Left Behind: An In-depth Study of the RBF SVM Classifier's Vulnerability to Image Extraction Attacks via Confidence Information Exploitation

This work uses the RBF SVM classifier to show that individual training images can be extracted from models trained on thousands of images, refuting the notion that such attacks can only recover an "average" of each class.

References

Showing 1-10 of 42 references

MagNet: A Two-Pronged Defense against Adversarial Examples

MagNet, a framework for defending neural network classifiers against adversarial examples, is proposed, and it is shown empirically that MagNet is effective against the most advanced state-of-the-art attacks in black-box and grey-box scenarios without sacrificing the false positive rate on normal examples.
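
A hedged sketch of the detector-plus-reformer idea using an autoencoder's reconstruction error; the autoencoder, classifier, and threshold are illustrative assumptions, not MagNet's exact components.

import torch

def magnet_style_filter(autoencoder, classifier, x, threshold=1e-2):
    with torch.no_grad():
        reformed = autoencoder(x)                      # reformer: pull inputs back toward the data manifold
        error = ((reformed - x) ** 2).flatten(1).mean(dim=1)
        accepted = error <= threshold                  # detector: reject far-off-manifold inputs
        preds = classifier(reformed).argmax(dim=1)
    return preds, accepted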

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
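
A common concrete instance of such a first-order adversary is projected gradient descent (PGD); below is a minimal L-infinity PGD sketch in PyTorch, with illustrative step sizes and step count.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon ball and valid pixel range.
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()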

Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
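
A minimal sketch of the temperature-softened loss at the heart of distillation-based training; the temperature value is an illustrative default.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=20.0):
    # Cross-entropy of the student against the teacher's softened probabilities.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()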

Stochastic Activation Pruning for Robust Adversarial Defense

Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
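
A hedged sketch of SAP applied to one layer's post-ReLU activations; the sampling count and flattening convention are assumptions for illustration.

import torch

def stochastic_activation_pruning(activations, n_samples):
    # Keep activations sampled with probability proportional to their magnitude,
    # then rescale the survivors so the layer output is unbiased in expectation.
    flat = activations.flatten(1)
    probs = flat.abs() / flat.abs().sum(dim=1, keepdim=True).clamp_min(1e-12)
    keep_prob = 1.0 - (1.0 - probs) ** n_samples   # chance of surviving any of the draws
    idx = torch.multinomial(probs, n_samples, replacement=True)
    mask = torch.zeros_like(flat).scatter_(1, idx, 1.0)
    return (flat * mask / keep_prob.clamp_min(1e-12)).view_as(activations)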

Towards Evaluating the Robustness of Neural Networks

It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.

On Detecting Adversarial Perturbations

It is shown empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans.
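
A hedged sketch of attaching a small binary detector head to intermediate features, in the spirit of this detection approach; the architecture and the choice of feature layer are assumptions.

import torch.nn as nn

class PerturbationDetector(nn.Module):
    def __init__(self, feature_channels):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(feature_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 2),   # logits for clean vs. adversarial
        )

    def forward(self, intermediate_features):
        return self.head(intermediate_features)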

Delving into Transferable Adversarial Examples and Black-box Attacks

This work is the first to conduct an extensive study of transferability over large models and a large-scale dataset, and also the first to study the transferability of targeted adversarial examples with their target labels.
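
A minimal sketch of measuring transferability: adversarial examples crafted on a white-box source (surrogate) model are evaluated on a separate target model; the one-step FGSM attack and epsilon are illustrative stand-ins for the attacks studied in the paper.

import torch
import torch.nn.functional as F

def transfer_success_rate(source_model, target_model, x, y, epsilon=8 / 255):
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(source_model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    x_adv = (x + epsilon * grad.sign()).clamp(0, 1)
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    # Fraction of crafted examples that also fool the unseen target model.
    return (preds != y).float().mean().item()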

Towards Deep Neural Network Architectures Robust to Adversarial Examples

Deep Contractive Network is proposed, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE) to increase the network robustness to adversarial examples, without a significant performance penalty.
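
A hedged sketch of a contractive-style smoothness penalty, here approximated by penalizing the input-gradient norm of the loss rather than the paper's layer-wise formulation.

import torch
import torch.nn.functional as F

def smoothness_penalized_loss(model, x, y, lam=1e-3):
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    # Keep the graph so the penalty itself can be backpropagated through.
    grad_x = torch.autograd.grad(task_loss, x, create_graph=True)[0]
    penalty = grad_x.pow(2).flatten(1).sum(dim=1).mean()
    return task_loss + lam * penalty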

Practical Black-Box Attacks against Machine Learning

This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
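
One ingredient of this style of black-box attack is Jacobian-based dataset augmentation for training a local substitute model; a hedged sketch follows, where the oracle-label interface and step size are assumptions.

import torch

def jacobian_augment(substitute, oracle_labels, x, lam=0.1):
    # Push synthetic points along the sign of the substitute's sensitivity to
    # the class assigned by the remote oracle, to better probe its decision boundary.
    x = x.clone().requires_grad_(True)
    selected = substitute(x).gather(1, oracle_labels.view(-1, 1)).sum()
    grad = torch.autograd.grad(selected, x)[0]
    return (x + lam * grad.sign()).clamp(0, 1).detach()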

PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples

Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of…