Towards Class-Oriented Poisoning Attacks Against Neural Networks
@article{Zhao2020TowardsCP,
  title   = {Towards Class-Oriented Poisoning Attacks Against Neural Networks},
  author  = {Bingyin Zhao and Yingjie Lao},
  journal = {2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year    = {2022},
  pages   = {2244-2253}
}
Poisoning attacks on machine learning systems compromise model performance by deliberately injecting malicious samples into the training dataset to influence the training process. Prior works focus on either availability attacks (i.e., lowering the overall model accuracy) or integrity attacks (i.e., enabling instance-specific backdoors). In this paper, we advance the adversarial objectives of availability attacks to a per-class basis, which we refer to as class-oriented poisoning…
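For context, availability poisoning is commonly framed as a bilevel optimization problem; a per-class variant along the lines sketched in the abstract could be written as follows (notation ours, not necessarily the paper's exact formulation):

\[
\max_{D_p}\;\; \mathbb{E}_{(x,y)\sim D_{\mathrm{val}},\, y=c}\big[\ell(f_{\theta^*}(x),\,y)\big]
\quad \text{s.t.} \quad
\theta^* = \arg\min_{\theta} \sum_{(x,y)\in D_{\mathrm{tr}}\cup D_p} \ell\big(f_{\theta}(x),\,y\big),
\]

where \(D_p\) is the injected poison set, \(D_{\mathrm{tr}}\) the clean training set, \(D_{\mathrm{val}}\) held-out data, and \(c\) the class whose accuracy the attacker seeks to degrade.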
4 Citations
CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets
- Computer ScienceAAAI
- 2022
A clean-label approach, CLPA, is proposed for the poisoning availability attack, and it is revealed that, due to the intrinsic imperfection of classifiers, naturally misclassified inputs can be considered a special type of poisoned data, referred to as "natural poisoned data".
A Survey of Neural Trojan Attacks and Defenses in Deep Learning
- Computer ScienceArXiv
- 2022
A comprehensive review of techniques that devise Trojan attacks against deep learning and of their defenses, providing an accessible gateway for the broader community to understand recent developments in Neural Trojans.
Defending Evasion Attacks via Adversarially Adaptive Training
- Computer Science2022 IEEE International Conference on Big Data (Big Data)
- 2022
A novel adversarially adaptive defense (AAD) framework based on adaptive training is proposed, such that the trained prediction and detection models adapt at test time to new attacks.
Triggerability of Backdoor Attacks in Multi-Source Transfer Learning-based Intrusion Detection
- Computer Science2022 IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT)
- 2022
Backdoor attacks on multi-source transfer learning models are shown to be feasible, although they have less impact than backdoors on traditional machine learning models.
References
SHOWING 1-10 OF 47 REFERENCES
Poisoning Attacks with Generative Adversarial Nets
- Computer ScienceArXiv
- 2019
A novel generative model is introduced to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but degrade the classifier's accuracy when used for training.
Generative Poisoning Attack Method Against Neural Networks
- Computer ScienceArXiv
- 2017
This work first examines the possibility of applying a traditional gradient-based method to generate poisoned data against NNs by leveraging the gradient of the target model w.r.t. the normal data, and then proposes a generative method to accelerate the generation of poisoned data.
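As a rough illustration of the gradient-based baseline described above, the sketch below (ours, not the paper's code) perturbs a clean sample in the direction that increases the target model's loss, so that the sample degrades accuracy if later used for training; the model, step size, iteration count, and perturbation bound are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def craft_poison(model, x, y, step=0.01, iters=50, eps=0.1):
    """Gradient-ascent poison crafting sketch: increase the target model's
    loss on (x, y) while staying within an L-infinity ball of the clean x."""
    x_p = x.clone().detach().requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x_p), y)
        grad, = torch.autograd.grad(loss, x_p)
        with torch.no_grad():
            x_p = x_p + step * grad.sign()                     # ascend the loss
            x_p = torch.max(torch.min(x_p, x + eps), x - eps)  # stay near the clean sample
            x_p = x_p.clamp(0.0, 1.0)                          # keep a valid input range
        x_p.requires_grad_(True)
    return x_p.detach()
```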
Adversarial Examples Make Strong Poisons
- Computer ScienceNeurIPS
- 2021
The method, adversarial poisoning, is substantially more effective than existing poisoning methods for secure dataset release, and a poisoned version of ImageNet is released to encourage research into the strength of this form of data obfuscation.
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
- Computer ScienceAISec@CCS
- 2017
This work proposes a novel poisoning algorithm based on the idea of back-gradient optimization, able to target a wider class of learning algorithms trained with gradient-based procedures, including neural networks and deep learning architectures, and empirically evaluates its effectiveness on several application examples.
Data Poisoning Attacks against Online Learning
- Computer Science, MathematicsArXiv
- 2018
A systematic investigation of data poisoning attacks for online learning is initiated, and a general attack strategy, formulated as an optimization problem, is proposed that applies to both settings with some modifications.
Certified Defenses for Data Poisoning Attacks
- Computer ScienceNIPS
- 2017
This work addresses the worst-case loss of a defense in the face of a determined attacker by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization.
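To make the defender model concrete, here is a minimal sketch (assumptions ours) of an outlier-removal-then-ERM pipeline of the kind analyzed above, using a simple per-class centroid-distance filter; the quantile threshold and the choice of a logistic-regression learner are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sanitize_then_fit(X, y, radius_quantile=0.95):
    """Drop points far from their class centroid, then fit a classifier
    on the remaining data (outlier removal followed by ERM)."""
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        keep[idx] = dists <= np.quantile(dists, radius_quantile)  # keep the closest points
    clf = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
    return clf, keep
```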
Stronger data poisoning attacks break data sanitization defenses
- Computer ScienceMachine Learning
- 2021
Three attacks are developed, including one based on the Karush–Kuhn–Tucker conditions, that can bypass a broad range of common data sanitization defenses, including anomaly detectors based on nearest neighbors, training loss, and singular-value decomposition.
Hidden Trigger Backdoor Attacks
- Computer ScienceAAAI
- 2020
This work proposes a novel form of backdoor attack where poisoned data look natural with correct labels and also more importantly, the attacker hides the trigger in the poisoned data and keeps the trigger secret until the test time.
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Computer ScienceNeurIPS
- 2018
This paper explores clean-label poisoning attacks on neural nets, presenting an optimization-based method for crafting poisons, and shows that just a single poison image can control classifier behavior when transfer learning is used.
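For reference, the clean-label poisons in this line of work are typically crafted by feature collision: the poison stays visually close to a correctly labeled base image while its internal feature representation is pushed toward a chosen target test instance. In our paraphrase of the general idea:

\[
x_p \;=\; \arg\min_{x}\; \big\|\phi(x) - \phi(t)\big\|_2^2 \;+\; \beta\,\big\|x - b\big\|_2^2,
\]

where \(\phi\) is the penultimate-layer feature map of the victim network, \(t\) the target instance, \(b\) the clean base image whose (correct) label the poison keeps, and \(\beta\) trades off feature collision against visual similarity.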
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
- Computer Science2018 IEEE Symposium on Security and Privacy (SP)
- 2018
A theoretically grounded optimization framework specifically designed for linear regression is proposed and its effectiveness is demonstrated on a range of datasets and models; formal guarantees about convergence and an upper bound on the effect of poisoning attacks when the defense is deployed are also provided.