Backdooring and Poisoning Neural Networks with Image-Scaling Attacks

@article{Quiring2020BackdooringAP,
  title={Backdooring and Poisoning Neural Networks with Image-Scaling Attacks},
  author={Erwin Quiring and Konrad Rieck},
  journal={2020 IEEE Security and Privacy Workshops (SPW)},
  year={2020},
  pages={41-47}
}
  • Erwin Quiring, Konrad Rieck
  • Published 19 March 2020
  • Computer Science
  • 2020 IEEE Security and Privacy Workshops (SPW)
Backdoors and poisoning attacks are a major threat to the security of machine-learning and vision systems. Often, however, these attacks leave visible artifacts in the images that can be visually detected and weaken the efficacy of the attacks. In this paper, we propose a novel strategy for hiding backdoor and poisoning attacks. Our approach builds on a recent class of attacks against image scaling. These attacks enable manipulating images such that they change their content when scaled to a… 
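
The core mechanism sketched in the abstract can be illustrated in a few lines. The snippet below is a minimal, illustrative sketch assuming plain nearest-neighbor downsampling with center-point sampling, implemented directly in NumPy; the function names are made up for this example, and the paper's actual attack solves an optimization problem against the scaling routine of the target library so that the perturbation also stays imperceptible.

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    """Downscale by keeping one source pixel per output pixel
    (center-point nearest-neighbor sampling; a simplified stand-in
    for a library resize routine)."""
    in_h, in_w = img.shape[:2]
    rows = ((np.arange(out_h) + 0.5) * in_h / out_h).astype(int)
    cols = ((np.arange(out_w) + 0.5) * in_w / out_w).astype(int)
    return img[np.ix_(rows, cols)]

def craft_attack_image(source, target, out_h, out_w):
    """Overwrite only the pixels that nearest_downscale() will sample.
    At full resolution the result still looks like `source` (only a
    sparse grid of pixels changes), but it downscales to `target`."""
    in_h, in_w = source.shape[:2]
    rows = ((np.arange(out_h) + 0.5) * in_h / out_h).astype(int)
    cols = ((np.arange(out_w) + 0.5) * in_w / out_w).astype(int)
    attack = source.copy()
    attack[np.ix_(rows, cols)] = target
    return attack

# Toy demo: a 256x256 "source" image that hides a 32x32 "target".
rng = np.random.default_rng(0)
source = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
target = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)

attack = craft_attack_image(source, target, 32, 32)
assert np.array_equal(nearest_downscale(attack, 32, 32), target)
print((attack != source).any(axis=-1).mean())  # fraction of pixels touched, ~1.6%
```

Only the pixels on the sampled grid (about 1.6% of the image in this toy geometry) are modified, which is why the manipulation is hard to spot at full resolution yet fully determines the downscaled result that the learning pipeline actually sees.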
Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning
TLDR
This paper theoretically analyzes attacks against image scaling from a signal-processing perspective, identifies their root cause as the interplay of downsampling and convolution, and develops a novel defense against image-scaling attacks that prevents all possible attack variants.
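
The root cause named in this summary, the interplay of downsampling and convolution, also explains why some scaling routines resist the manipulation: if every source pixel contributes to the output (as in area/average scaling), a sparse grid of modified pixels is averaged away instead of being copied through. The sketch below is only a toy numerical illustration of that effect, not the defense developed in the paper; the names and the 256-to-32 geometry are assumptions carried over from the previous example.

```python
import numpy as np

def area_downscale(img, out_h, out_w):
    """Block-average downscaling: every source pixel contributes to the output."""
    in_h, in_w = img.shape[:2]
    bh, bw = in_h // out_h, in_w // out_w
    blocks = img[:out_h * bh, :out_w * bw].astype(float)
    blocks = blocks.reshape(out_h, bh, out_w, bw, -1)
    return blocks.mean(axis=(1, 3))

# Re-create the attack image from the previous sketch, hiding an all-white target.
rng = np.random.default_rng(0)
source = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
grid = ((np.arange(32) + 0.5) * 256 / 32).astype(int)  # pixels nearest-neighbor would sample
attack = source.copy()
attack[np.ix_(grid, grid)] = 255

# With area scaling, each 8x8 block averages 64 pixels, only one of which
# is manipulated, so the hidden target barely changes the downscaled image.
diff = np.abs(area_downscale(attack, 32, 32) - area_downscale(source, 32, 32))
print(diff.max())  # at most ~4 gray levels out of 255
```

Each 8x8 block averages 64 source pixels, only one of which the attacker controls, so the embedded content is attenuated by roughly a factor of 64 after downscaling instead of surviving intact.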
Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems
TLDR
This paper proposes a series of novel techniques to make a black-box attack exploit vulnerabilities in scaling algorithms, scaling defenses, and the final machine learning model in an end-to-end manner and reveals that most existing scaling defenses are ineffective under threat from downstream models.
Invisible Backdoor Attack with Sample-Specific Triggers
TLDR
Inspired by recent advances in DNN-based image steganography, sample-specific invisible additive noise is generated as the backdoor trigger by encoding an attacker-specified string into benign images with an encoder-decoder network.
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
TLDR
This work shows how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan, which can bypass existing defenses with a success rate close to 100% and can be extended to attack federated learning as well as high-resolution images.
Decamouflage: A Framework to Detect Image-Scaling Attacks on CNN
TLDR
Decamouflage is an image-scaling attack detection framework consisting of three independent detection methods, scaling, filtering, and steganalysis, that detect the attack by examining distinct image characteristics using a generic, pre-determined detection threshold.
Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks
TLDR
This work presents an image-scaling attack detection framework, termed as Decamouflage, which can accurately detect image scaling attacks in both white-box and black-box settings with acceptable run-time overhead.
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences
TLDR
This overview reviews the work published so far, classifying the different types of attacks and defences proposed to date according to the amount of control the attacker has over the training process and the defender's ability to verify the integrity of the training data and to monitor the operation of the DNN at training and test time.
SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
TLDR
This work proposes a novel defense algorithm using robust covariance estimation to amplify the spectral signature of corrupted data, providing a clean model, completely removing the backdoor, even in regimes where previous methods have no hope of detecting the poisoned examples.
Artificial Intelligence Security: Threats and Countermeasures
In recent years, with rapid technological advances in both computing hardware and algorithms, Artificial Intelligence (AI) has demonstrated significant advantages over humans in a wide range of…

References

SHOWING 1-10 OF 21 REFERENCES
Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning
TLDR
This paper theoretically analyzes attacks against image scaling from a signal-processing perspective, identifies their root cause as the interplay of downsampling and convolution, and develops a novel defense against image-scaling attacks that prevents all possible attack variants.
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
TLDR
This work considers a new type of attacks, called backdoor attacks, where the attacker's goal is to create a backdoor into a learning-based authentication system, so that he can easily circumvent the system by leveraging the backdoor.
Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms
TLDR
This work demonstrates an automated attack against common scaling algorithms that generates camouflage images whose visual semantics change dramatically after scaling, and suggests a few potential countermeasures ranging from attack prevention to detection.
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
TLDR
This work presents the first robust and generalizable detection and mitigation system for DNN backdoor attacks, and identifies multiple mitigation techniques via input filters, neuron pruning and unlearning.
Bypassing Backdoor Detection Algorithms in Deep Learning
  • T. Tan, R. Shokri
  • Computer Science, Mathematics
    2020 IEEE European Symposium on Security and Privacy (EuroS&P)
  • 2020
TLDR
This work presents an adversarial backdoor embedding algorithm that can bypass existing detection algorithms, including state-of-the-art techniques, by means of adaptive adversarial training that optimizes the model's original loss function while maximizing the indistinguishability of the hidden representations of poisoned and clean data.
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
TLDR
This paper explores "clean-label" poisoning attacks on neural networks, presents an optimization-based method for crafting poisons, and shows that a single poison image can control classifier behavior when transfer learning is used.
Evasion Attacks against Machine Learning at Test Time
TLDR
This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
Latent Backdoor Attacks on Deep Neural Networks
TLDR
Latent backdoors are incomplete backdoors embedded into a "Teacher" model and automatically inherited by multiple "Student" models through transfer learning; they can be quite effective in a variety of application contexts.
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Trojaning Attack on Neural Networks
TLDR
This work presents a trojaning attack on neural networks that can be successfully triggered without affecting the model's test accuracy on normal input data and that takes only a small amount of time to apply to a complex neural network model.