Towards Class-Oriented Poisoning Attacks Against Neural Networks
@article{Zhao2020TowardsCP,
  title={Towards Class-Oriented Poisoning Attacks Against Neural Networks},
  author={Bingyin Zhao and Yingjie Lao},
  journal={2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year={2022},
  pages={2244-2253}
}
Poisoning attacks on machine learning systems compromise model performance by deliberately injecting malicious samples into the training dataset to influence the training process. Prior works focus on either availability attacks (i.e., lowering the overall model accuracy) or integrity attacks (i.e., enabling instance-specific backdoors). In this paper, we advance the adversarial objectives of availability attacks to a per-class basis, which we refer to as class-oriented poisoning…
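The abstract cuts off before the attack details, but the stated objective suggests a per-class availability loss: reward errors on a chosen victim class while preserving accuracy elsewhere. The PyTorch function below is a hypothetical sketch of such an objective, not the paper's actual formulation; `victim_class` and the trade-off weight `alpha` are illustrative.

```python
# Hypothetical per-class availability objective (NOT the paper's exact loss):
# degrade a chosen victim class while keeping the remaining classes accurate.
import torch
import torch.nn.functional as F

def class_oriented_loss(logits, labels, victim_class, alpha=1.0):
    victim = labels == victim_class
    loss = torch.zeros((), device=logits.device)
    if victim.any():
        # Negated cross-entropy: the attacker *maximizes* error on the victim class.
        loss = loss - F.cross_entropy(logits[victim], labels[victim])
    if (~victim).any():
        # Standard cross-entropy keeps the other classes intact.
        loss = loss + alpha * F.cross_entropy(logits[~victim], labels[~victim])
    return loss
```

Poison samples would then be crafted so that training on them steers the model toward minimizing this objective.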
2 Citations
CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets
- Computer Science, AAAI
- 2022
A clean-label approach, CLPA, for the poisoning availability attack is proposed, and it is revealed that, due to the intrinsic imperfection of classifiers, naturally misclassified inputs can be considered a special type of poisoned data, referred to as "natural poisoned data".
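The "natural poisoned data" observation suggests a harvesting step that needs no crafting at all: keep the inputs an already trained model gets wrong, together with their true labels. A minimal PyTorch sketch, omitting CLPA's GAN-based generation stage (`model` and `loader` are assumed to be a standard classifier and DataLoader):

```python
# Minimal sketch of harvesting "natural poisoned data": inputs a trained
# classifier misclassifies, kept with their *true* labels so the resulting
# poisons stay clean-label. CLPA's GAN-based generation is omitted.
import torch

@torch.no_grad()
def harvest_natural_poisons(model, loader, device="cpu"):
    model.eval()
    poisons = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        wrong = model(x).argmax(dim=1) != y
        poisons.extend(zip(x[wrong].cpu(), y[wrong].cpu()))
    return poisons  # (image, correct_label) pairs to mix into a release set
```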
A Survey of Neural Trojan Attacks and Defenses in Deep Learning
- Computer Science, ArXiv
- 2022
A comprehensive review of techniques that devise Trojan attacks on deep learning models and an exploration of their defenses, providing an accessible gateway for the broader community to understand recent developments in Neural Trojans.
References
SHOWING 1-10 OF 47 REFERENCES
Adversarial Examples Make Strong Poisons
- Computer Science, NeurIPS
- 2021
The method, adversarial poisoning, is substantially more effective than existing poisoning methods for secure dataset release, and a poisoned version of ImageNet is released to encourage research into the strength of this form of data obfuscation.
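In spirit, adversarial poisoning perturbs each training image with an error-maximizing attack against a pretrained crafting model, then releases the perturbed set with labels unchanged. Below is a hedged PGD-style sketch; the budget `eps`, step size, and iteration count are illustrative rather than the paper's settings, and the paper's strongest variant uses class-targeted rather than untargeted attacks.

```python
# PGD-style sketch of error-maximizing poison crafting against a pretrained
# model; images are assumed to be tensors in [0, 1].
import torch
import torch.nn.functional as F

def craft_adversarial_poison(model, x, y, eps=8/255, step=2/255, iters=40):
    model.eval()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)   # push training error up
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step * grad.sign()               # gradient *ascent* step
            delta.clamp_(-eps, eps)                   # stay in the L_inf ball
            delta.data = (x + delta).clamp(0, 1) - x  # keep pixels valid
    return (x + delta).detach()                       # labels stay unchanged
```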
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
- Computer Science, AISec@CCS
- 2017
This work proposes a novel poisoning algorithm based on the idea of back-gradient optimization, able to target a wider class of learning algorithms trained with gradient-based procedures, including neural networks and deep learning architectures, and empirically evaluates its effectiveness on several application examples.
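Back-gradient optimization tackles a bilevel problem: the attacker differentiates the post-training validation loss with respect to the poison point, through the training procedure itself. The toy sketch below naively unrolls a few SGD steps on a linear softmax model instead of reversing the training trajectory as the paper does; it only illustrates the bilevel structure.

```python
# Toy stand-in for back-gradient poisoning: unroll a few inner SGD steps on
# a linear model and differentiate the validation loss through them to get
# a gradient on the poison point. The paper instead reverses the trajectory
# to avoid storing the unrolled graph.
import torch
import torch.nn.functional as F

def poison_gradient(x_poison, y_poison, x_train, y_train, x_val, y_val,
                    n_classes, inner_steps=10, lr=0.1):
    x_p = x_poison.clone().requires_grad_(True)
    w = torch.zeros(x_train.shape[1], n_classes, requires_grad=True)
    for _ in range(inner_steps):
        X = torch.cat([x_train, x_p.view(1, -1)])
        Y = torch.cat([y_train, y_poison.view(1)])
        inner_loss = F.cross_entropy(X @ w, Y)
        g, = torch.autograd.grad(inner_loss, w, create_graph=True)
        w = w - lr * g                      # step stays in the autograd graph
    outer_loss = F.cross_entropy(x_val @ w, y_val)  # attacker ascends this
    return torch.autograd.grad(outer_loss, x_p)[0]
```

The attacker would then take a gradient-ascent step on `x_p` and repeat until the poison converges.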
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
- Computer Science, ArXiv
- 2017
This work considers a new type of attack, called a backdoor attack, in which the attacker's goal is to create a backdoor into a learning-based authentication system so that the system can be easily circumvented by leveraging the backdoor.
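The classic data-poisoning route to a backdoor is to stamp a small trigger pattern onto a subset of training images and relabel them to the attacker's target class, a BadNets-style recipe. A minimal sketch, assuming image tensors in [0, 1] with shape (C, H, W); the patch size and position are illustrative:

```python
# BadNets-style backdoor poisoning sketch: stamp a small trigger and relabel.
import torch

def stamp_trigger(x, patch=3):
    x = x.clone()
    x[:, -patch:, -patch:] = 1.0   # white square in the bottom-right corner
    return x

def make_backdoor_poisons(images, target_class):
    # Mix these into the training set; at test time, any input carrying the
    # trigger is steered toward target_class.
    return [(stamp_trigger(x), target_class) for x in images]
```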
Data Poisoning Attacks against Online Learning
- Computer Science, Mathematics, ArXiv
- 2018
A systematic investigation of data poisoning attacks on online learning is initiated, and a general attack strategy, formulated as an optimization problem, is proposed that applies to both settings with some modifications.
Certified Defenses for Data Poisoning Attacks
- Computer Science, NIPS
- 2017
This work addresses the worst-case loss of a defense in the face of a determined attacker by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal and then empirical risk minimization.
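The defense family analyzed there is "sanitize, then train": drop points that look anomalous, then run empirical risk minimization on the remainder. A simple instance is a per-class centroid filter, sketched below; the quantile threshold is illustrative, and the paper's certified bounds are not reproduced here.

```python
# Per-class centroid sanitization sketch: within each class, drop the points
# farthest from the class mean, then train on what survives.
import torch

def centroid_sanitize(X, y, radius_quantile=0.95):
    keep = torch.zeros(len(y), dtype=torch.bool)
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        d = (X[idx] - X[idx].mean(dim=0)).norm(dim=1)
        keep[idx] = d <= d.quantile(radius_quantile)  # drop the far tail
    return X[keep], y[keep]
```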
Stronger data poisoning attacks break data sanitization defenses
- Computer Science, Machine Learning
- 2021
Three attacks are developed, including one that exploits the Karush–Kuhn–Tucker conditions, which bypass a broad range of common data sanitization defenses such as anomaly detectors based on nearest neighbors, training loss, and singular-value decomposition.
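One way such attacks evade sanitization is to constrain every poison to the defender's feasible set, for example within the radius a centroid defense (like the one sketched above) would tolerate. A small, hypothetical projection helper in that spirit; `centroid` and `radius` would come from the attacker's model of the anticipated defense, and this mirrors the constraint idea rather than the paper's exact attacks:

```python
# Hypothetical helper: pull a candidate poison back inside the defender's
# feasible set so a centroid-based sanitizer will keep it.
import torch

def project_into_feasible_set(x_p, centroid, radius):
    offset = x_p - centroid
    norm = offset.norm()
    if norm > radius:
        x_p = centroid + offset * (radius / norm)  # project onto the sphere
    return x_p
```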
Hidden Trigger Backdoor Attacks
- Computer Science, AAAI
- 2020
This work proposes a novel form of backdoor attack in which poisoned data look natural and carry correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps it secret until test time.
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Computer Science, NeurIPS
- 2018
This paper explores "clean-label" poisoning attacks on neural nets, presents an optimization-based method for crafting poisons, and shows that a single poison image can control classifier behavior when transfer learning is used.
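The core of the clean-label attack is feature collision: perturb a correctly labeled base image until its deep features sit on top of the target instance's features, while staying visually close to the base. The sketch below uses a plain Adam loop on the combined objective rather than the paper's forward-backward splitting; `feature_fn` is assumed to return penultimate-layer activations of the victim's feature extractor.

```python
# Condensed feature-collision sketch: match the target's features while
# staying close to the base image in pixel space.
import torch

def feature_collision(feature_fn, base, target, beta=0.1, lr=0.01, iters=200):
    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    target_feat = feature_fn(target).detach()
    for _ in range(iters):
        opt.zero_grad()
        loss = (((feature_fn(poison) - target_feat) ** 2).sum()
                + beta * ((poison - base) ** 2).sum())  # stay near the base
        loss.backward()
        opt.step()
    return poison.detach()  # released with the base image's correct label
```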
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
- Computer Science, 2018 IEEE Symposium on Security and Privacy (SP)
- 2018
A theoretically grounded optimization framework designed specifically for linear regression is proposed and its effectiveness demonstrated on a range of datasets and models; formal guarantees about its convergence and an upper bound on the effect of poisoning attacks when the defense is deployed are also provided.
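For linear models the bilevel problem becomes tractable because the inner fit has a closed form. The sketch below poisons ridge regression (one flavor of the linear regression setting) by differentiating the validation loss through the closed-form solution; `lam` is an illustrative regularizer, not a value from the paper.

```python
# Gradient of the validation loss w.r.t. a single poison point for ridge
# regression, obtained by differentiating through the closed-form fit.
import torch

def ridge_poison_grad(x_p, y_p, X, y, X_val, y_val, lam=1e-2):
    x_p = x_p.clone().requires_grad_(True)
    Xa = torch.cat([X, x_p.view(1, -1)])
    ya = torch.cat([y, y_p.view(1)])
    d = Xa.shape[1]
    # Closed-form ridge solution, differentiable in x_p.
    w = torch.linalg.solve(Xa.T @ Xa + lam * torch.eye(d), Xa.T @ ya)
    val_loss = ((X_val @ w - y_val) ** 2).mean()   # attacker ascends this
    return torch.autograd.grad(val_loss, x_p)[0]
```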
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
- Computer Science, ICML
- 2019
A new "polytope attack" is proposed in which poison images are designed to surround the targeted image in feature space, and it is demonstrated that using Dropout during poison creation helps to enhance transferability of this attack.