Active Learning Under Malicious Mislabeling and Poisoning Attacks
@article{Lin2021ActiveLU,
  title   = {Active Learning Under Malicious Mislabeling and Poisoning Attacks},
  author  = {Jing Lin and Ryan S. Luley and Kaiqi Xiong},
  journal = {2021 IEEE Global Communications Conference (GLOBECOM)},
  year    = {2021},
  pages   = {1-6}
}
Deep neural networks usually require large labeled datasets to achieve state-of-the-art performance on tasks such as image classification and natural language processing. Although active Internet users create vast amounts of data every day, most of that data is unlabeled and vulnerable to data poisoning attacks. In this paper, we develop an efficient active learning method that requires fewer labeled instances and incorporates the technique of adversarial retraining in…
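The abstract is truncated above; as a rough illustration of the general idea it describes (uncertainty-based active learning combined with adversarial retraining), a minimal sketch follows. The toy model, the FGSM perturbation, the query budget, and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

# Minimal sketch (not the paper's exact method): least-confidence active
# learning plus FGSM adversarial retraining on the newly labeled batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm(x, y, eps=0.1):
    """One-step FGSM perturbation used for adversarial retraining."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def query_most_uncertain(pool_x, k=32):
    """Pick the k pool points whose predictions have the lowest confidence."""
    with torch.no_grad():
        conf = F.softmax(model(pool_x), dim=1).max(dim=1).values
    return torch.topk(-conf, k).indices          # least-confident samples

# one active-learning round on random toy data
pool_x = torch.rand(1000, 1, 28, 28)
idx = query_most_uncertain(pool_x)
x_lab = pool_x[idx]
y_lab = torch.randint(0, 10, (len(idx),))        # labels from the oracle

for _ in range(5):                               # retrain on clean + adversarial copies
    x_adv = fgsm(x_lab, y_lab)
    loss = F.cross_entropy(model(torch.cat([x_lab, x_adv])), torch.cat([y_lab, y_lab]))
    opt.zero_grad(); loss.backward(); opt.step()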
3 Citations
Applying the Mahalanobis Distance to Develop Robust Approaches Against False Data Injection Attacks on Dynamic Power State Estimation
- Computer Science, ArXiv
- 2021
This research proposes two robust defense approaches against three efficient false data injection (FDI) attacks on dynamic state estimation (DSE) and mathematically proves that the Mahalanobis distance is not only useful but also much better than the Euclidean distance for the consistency check of power sensor measurements.
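A small synthetic illustration (not the cited power-system model) of why the Mahalanobis distance is preferred above: it accounts for correlations between measurements, so an injected point that breaks the correlation structure stands out even when its Euclidean distance from the mean looks ordinary.

# Synthetic example: correlated "sensor" readings and one injected point.
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])          # strongly correlated sensors
normal = rng.multivariate_normal([0, 0], cov, size=2000)
mu, inv_cov = normal.mean(axis=0), np.linalg.inv(np.cov(normal.T))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ inv_cov @ d))

injected = np.array([1.0, -1.0])                   # violates the correlation
print("Euclidean   :", np.linalg.norm(injected - mu))  # looks ordinary (~1.4)
print("Mahalanobis :", mahalanobis(injected))          # clearly anomalous (~4.5)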
Mahalanobis distance-based robust approaches against false data injection attacks on dynamic power state estimation
- Mathematics, Comput. Secur.
- 2021
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks
- Computer Science, ArXiv
- 2021
This work presents a new approach to attack models that addresses the challenge of directly simulating the dynamic response of the immune system in the face of attackers.
References
Showing 1-10 of 46 references
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
- Computer Science, AISec@CCS
- 2017
An effective black-box attack that only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
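The core idea behind ZOO is zeroth-order (gradient-free) optimization: estimate gradients of a black-box loss purely from scalar queries via finite differences. The sketch below shows only that estimation step on a toy quadratic loss; the full attack adds Adam/Newton updates, coordinate sampling, and image-specific losses, all omitted here.

# Coordinate-wise symmetric-difference gradient estimation on a black-box loss.
import numpy as np

def black_box_loss(x):                 # attacker only observes scalar outputs
    return float(np.sum((x - 0.7) ** 2))

def estimate_gradient(f, x, h=1e-4):
    g = np.zeros_like(x)
    for i in range(x.size):            # two queries per coordinate
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.zeros(8)
for _ in range(200):                   # zeroth-order "gradient" descent
    x -= 0.1 * estimate_gradient(black_box_loss, x)
print(black_box_loss(x))               # approaches 0 without any true gradients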
One Pixel Attack for Fooling Deep Neural Networks
- Computer Science, IEEE Transactions on Evolutionary Computation
- 2019
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
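A toy illustration of the one-pixel idea using SciPy's differential evolution: search over (row, column, new value) for a single-pixel change that minimizes the classifier's score for the true class. The fixed linear scorer below is only a stand-in for a real network, and all sizes and bounds are illustrative assumptions.

# One-pixel search with differential evolution on a toy linear "classifier".
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
image = rng.random((8, 8))
weights = rng.standard_normal((8, 8))              # stand-in true-class scorer

def true_class_score(img):
    return float(np.sum(weights * img))

def objective(p):                                  # p = (row, col, new value)
    r, c, v = int(p[0]), int(p[1]), p[2]
    perturbed = image.copy()
    perturbed[r, c] = v
    return true_class_score(perturbed)             # lower score = better attack

result = differential_evolution(objective, bounds=[(0, 7.99), (0, 7.99), (0, 1)],
                                maxiter=50, seed=1)
print("pixel and value chosen:", result.x)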
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
- Computer Science, ICML
- 2019
A new "polytope attack" is proposed in which poison images are designed to surround the targeted image in feature space, and it is demonstrated that using Dropout during poison creation helps to enhance transferability of this attack.
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers
- Computer Science, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- 2020
This work builds upon prior backdoor data-poisoning research for ML image classifiers and systematically assesses different experimental conditions, including types of trigger patterns, persistence of trigger patterns during retraining, poisoning strategies, architectures, datasets, and potential defensive regularization techniques.
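For concreteness, a generic backdoor-poisoning sketch of the kind evaluated above: stamp a small trigger patch onto a fraction of training images and relabel them with the attacker's target class. The patch location, size, poisoning rate, and target label are illustrative choices, not the paper's specific settings.

# Stamp a 4x4 trigger on a random subset of images and flip their labels.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))
labels = rng.integers(0, 10, size=1000)

def poison(images, labels, rate=0.05, target=7):
    poisoned_x, poisoned_y = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    poisoned_x[idx, -4:, -4:] = 1.0        # white square in the corner as trigger
    poisoned_y[idx] = target               # attacker-chosen label
    return poisoned_x, poisoned_y, idx

train_x, train_y, poisoned_idx = poison(images, labels)
print(f"{len(poisoned_idx)} of {len(images)} samples carry the trigger")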
Towards Deep Learning Models Resistant to Adversarial Attacks
- Computer Science, ICLR
- 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
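The robust-optimization view above leads to projected gradient descent (PGD) as the inner maximization: repeatedly step in the sign of the input gradient and project back into an L-infinity ball around the original input. The toy model and hyperparameters below are illustrative; the full adversarial training loop (minimizing loss on the resulting x_adv) is omitted.

# PGD inner maximization on a toy model.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def pgd_attack(x, y, eps=0.3, alpha=0.01, steps=40):
    x_orig = x.clone().detach()
    x_adv = x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()              # ascent step
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)     # project to ball
            x_adv = x_adv.clamp(0, 1)                              # valid pixel range
    return x_adv.detach()

x = torch.rand(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))
x_adv = pgd_attack(x, y)   # adversarial training would now minimize loss on x_adv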
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
- Computer Science, IEEE Access
- 2018
This paper presents the first comprehensive survey of adversarial attacks on deep learning in computer vision, reviewing works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
Guarantees on learning depth-2 neural networks under a data-poisoning attack
- Computer Science, ArXiv
- 2020
This work demonstrates a specific class of finite-size neural networks and a non-gradient stochastic algorithm that recovers the weights of the network generating the realizable true labels, even in the presence of an oracle applying a bounded amount of malicious additive distortion to those labels.
Adversarial Active Learning for Deep Networks: a Margin Based Approach
- Computer Science, ArXiv
- 2018
It is demonstrated empirically that adversarial active queries yield faster convergence of CNNs trained on the MNIST, Shoe-Bag, and Quick-Draw datasets.
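The cited work approximates a sample's margin (its distance to the decision boundary) via the size of an adversarial perturbation and queries the smallest-margin samples. The sketch below shows the same selection rule in the simplest setting, a linear model where the margin is exact; the linear classifier and pool are illustrative stand-ins.

# Margin-based query selection for a toy linear classifier.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.standard_normal(20), 0.1                 # toy linear classifier
pool = rng.standard_normal((500, 20))               # unlabeled pool

margins = np.abs(pool @ w + b) / np.linalg.norm(w)  # distance to the boundary
query_idx = np.argsort(margins)[:10]                # 10 most ambiguous samples
print("query these pool indices:", query_idx)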
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Computer Science, NeurIPS
- 2018
This paper explores "clean-label" poisoning attacks on neural networks, proposing an optimization-based method for crafting poisons, and shows that a single poison image can control classifier behavior when transfer learning is used.
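The "feature collision" objective behind this attack: optimize a poison image to stay close to a benign base image in input space while its feature representation moves toward the target's. The sketch below runs plain gradient descent on that combined objective with a fixed random linear map standing in for the frozen feature extractor; the actual paper uses a trained network and a forward-backward splitting scheme.

# Simplified feature-collision poisoning with a random linear feature map.
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((32, 64)) / 8.0       # stand-in feature extractor
base = rng.random(64)                            # benign-looking base image
target = rng.random(64)                          # test instance to misclassify

poison, beta, lr = base.copy(), 0.1, 0.05
for _ in range(500):
    # gradient of ||f(p) - f(t)||^2 + beta * ||p - base||^2 with respect to p
    grad = 2 * feat.T @ (feat @ poison - feat @ target) + 2 * beta * (poison - base)
    poison -= lr * grad

print("feature gap :", np.linalg.norm(feat @ (poison - target)))   # small
print("input gap   :", np.linalg.norm(poison - base))              # still near base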
Subpopulation Data Poisoning Attacks
- Computer Science, CCS
- 2021
It is proved that, under some assumptions, subpopulation attacks are impossible to defend against, and the limitations of existing defenses against these attacks are demonstrated empirically, highlighting the difficulty of protecting machine learning against this threat.
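A minimal sketch of the subpopulation idea: cluster the training data and flip labels only inside one chosen cluster, so global accuracy metrics barely move while that subpopulation is badly misclassified. The clustering method (scikit-learn KMeans), cluster count, and flip rule are illustrative choices, not the paper's attack generation procedure.

# Flip labels within one cluster of a synthetic binary dataset.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))
y = rng.integers(0, 2, size=2000)

clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X)
target_cluster = 3                                  # attacker-chosen subpopulation
mask = clusters == target_cluster
y_poisoned = y.copy()
y_poisoned[mask] = 1 - y_poisoned[mask]             # flip labels in the subpopulation
print(f"flipped {mask.sum()} of {len(y)} labels ({100 * mask.mean():.1f}%)")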