Active Learning Under Malicious Mislabeling and Poisoning Attacks

@inproceedings{Lin2021ActiveLU,
  title={Active Learning Under Malicious Mislabeling and Poisoning Attacks},
  author={Jing Lin and Ryan S. Luley and Kaiqi Xiong},
  booktitle={2021 IEEE Global Communications Conference (GLOBECOM)},
  year={2021},
  pages={1-6}
}
Deep neural networks usually require large labeled datasets to achieve state-of-the-art performance on tasks such as image classification and natural language processing. Although active Internet users create vast amounts of data every day, most of it is unlabeled and vulnerable to data poisoning attacks. In this paper, we develop an efficient active learning method that requires fewer labeled instances and incorporates the technique of adversarial retraining in…
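The abstract does not spell out the training loop; a minimal sketch of an active-learning round that combines uncertainty-based querying with adversarial retraining on the labeled pool is shown below. The `oracle` and `attack` callables, the margin-based uncertainty score, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch

def active_learning_round(model, labeled, unlabeled, oracle, attack,
                          query_size=100, epochs=5, lr=1e-3):
    """One round: query the most uncertain unlabeled points, have them labeled,
    then retrain on clean and adversarial copies of the labeled pool."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    # 1. Acquisition: smallest margin between the top-2 predicted classes.
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled), dim=1)
    top2 = probs.topk(2, dim=1).values
    uncertainty = 1.0 - (top2[:, 0] - top2[:, 1])
    idx = uncertainty.topk(query_size).indices
    new_x, new_y = unlabeled[idx], oracle(unlabeled[idx])      # trusted labeler
    x = torch.cat([labeled[0], new_x])
    y = torch.cat([labeled[1], new_y])
    chosen = set(idx.tolist())
    keep = [i for i in range(len(unlabeled)) if i not in chosen]

    # 2. Adversarial retraining: train on clean data plus attacked copies.
    x_adv = attack(model, x, y).detach()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
        loss.backward()
        opt.step()
    return model, (x, y), unlabeled[keep]
```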
Applying the Mahalanobis Distance to Develop Robust Approaches Against False Data Injection Attacks on Dynamic Power State Estimation
TLDR
This research proposes two robust defense approaches against three efficient false data injection (FDI) attacks on dynamic state estimation (DSE) and mathematically proves that the Mahalanobis distance is not only useful but also much better than the Euclidean distance for the consistency check of power sensor measurements.
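The summary refers to a consistency check based on the Mahalanobis distance; a minimal sketch of how such a check could be computed is shown below. The threshold, the synthetic measurement history, and the use of a pseudo-inverse are illustrative assumptions, not the cited paper's procedure.

```python
import numpy as np

def mahalanobis_check(history, measurement, threshold=3.0):
    """Flag a measurement vector whose Mahalanobis distance from the
    historical mean exceeds a threshold (threshold value is illustrative)."""
    mu = history.mean(axis=0)
    cov = np.cov(history, rowvar=False)
    cov_inv = np.linalg.pinv(cov)      # pseudo-inverse guards against a singular covariance
    diff = measurement - mu
    d = float(np.sqrt(diff @ cov_inv @ diff))
    return d, d > threshold

# Toy usage: 200 past 5-sensor readings vs. one new reading with a spiked sensor.
rng = np.random.default_rng(0)
history = rng.normal(size=(200, 5))
suspect = np.array([0.1, 4.0, -0.2, 0.0, 0.3])
print(mahalanobis_check(history, suspect))
```

Unlike the Euclidean distance, this measure rescales each direction by the observed covariance, so correlated sensor noise does not mask an injected deviation.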
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks
TLDR
This work presents a new approach to attack models that addresses the challenge of directly simulating the dynamic response of the immune system in the face of attackers.

References

SHOWING 1-10 OF 46 REFERENCES
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
TLDR
An effective black-box attack is proposed that only has access to the input (images) and the output (confidence scores) of a targeted DNN, sparing the need for training substitute models and avoiding the loss in attack transferability.
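ZOO's key step is estimating gradients from confidence scores alone via coordinate-wise finite differences; a minimal sketch of that estimator follows. The `loss` callable (standing in for the attack objective built from the target DNN's scores) and the coordinate-sampling scheme are assumptions for illustration.

```python
import numpy as np

def zoo_coordinate_gradient(loss, x, n_coords=128, h=1e-4, rng=None):
    """Zeroth-order gradient estimate: for a random subset of coordinates,
    approximate d(loss)/d(x_i) with a symmetric finite difference."""
    rng = rng or np.random.default_rng(0)
    flat = x.ravel().astype(float)
    grad_flat = np.zeros(flat.size)
    coords = rng.choice(flat.size, size=min(n_coords, flat.size), replace=False)
    for i in coords:
        e = np.zeros(flat.size)
        e[i] = h
        grad_flat[i] = (loss((flat + e).reshape(x.shape)) -
                        loss((flat - e).reshape(x.shape))) / (2 * h)
    return grad_flat.reshape(x.shape)
```

The estimate can then drive an ordinary descent step on the input, which is how the attack avoids training any substitute model.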
One Pixel Attack for Fooling Deep Neural Networks
TLDR
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
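A rough sketch of the one-pixel idea using SciPy's differential evolution is given below: each candidate encodes (row, col, r, g, b), and the objective is the model's confidence in the true class. The `predict` callable, the HxWx3 image layout in [0, 1], and the optimizer settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(predict, image, true_label, maxiter=30, popsize=20):
    """Search for one (row, col, r, g, b) change that minimizes the model's
    confidence in the true class; `predict` maps an image to class probabilities."""
    h, w, _ = image.shape

    def apply(z, img):
        out = img.copy()
        out[int(z[0]) % h, int(z[1]) % w] = z[2:5]
        return out

    def objective(z):
        return predict(apply(z, image))[true_label]   # confidence to minimize

    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(objective, bounds,
                                    maxiter=maxiter, popsize=popsize, seed=0)
    return apply(result.x, image), result.fun
```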
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
TLDR
A new "polytope attack" is proposed in which poison images are designed to surround the targeted image in feature space, and it is demonstrated that using Dropout during poison creation helps to enhance transferability of this attack.
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers
TLDR
This work builds upon prior backdoor data-poisoning research for ML image classifiers and systematically assesses different experimental conditions, including types of trigger patterns, persistence of trigger patterns during retraining, poisoning strategies, architectures, datasets, and potential defensive regularization techniques.
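Backdoor poisoning of the kind assessed above typically stamps a small trigger patch onto a fraction of the training images and relabels them to an attacker-chosen class; a minimal numpy sketch follows. The patch location and size, the poison rate, the target class, and the (N, H, W, C) image layout are illustrative assumptions.

```python
import numpy as np

def inject_backdoor(images, labels, target_class=0, poison_rate=0.05,
                    patch_size=3, rng=None):
    """Stamp a white patch in the bottom-right corner of a random subset of
    images and relabel those images to `target_class`."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:, :] = 1.0   # the trigger pattern
    labels[idx] = target_class
    return images, labels, idx
```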
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
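The robust-optimization view above is usually instantiated with projected gradient descent (PGD) as the inner maximizer; a minimal PyTorch sketch of that inner loop is given below. The epsilon, step size, iteration count, and [0, 1] input range are illustrative choices rather than fixed parts of the method.

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization of the robust objective: L-infinity PGD with a random start."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Adversarial training then minimizes the loss on these worst-case inputs instead of (or in addition to) the clean ones.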
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
TLDR
This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision, reviewing the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
Guarantees on learning depth-2 neural networks under a data-poisoning attack
TLDR
This work demonstrates, for a specific class of finite-size neural networks, a non-gradient stochastic algorithm that recovers the weights of the network generating the realizable true labels, even when an oracle applies a bounded amount of malicious additive distortion to those labels.
Adversarial Active Learning for Deep Networks: a Margin Based Approach
TLDR
It is demonstrated empirically that adversarial active queries yield faster convergence of CNNs trained on MNIST, the Shoe-Bag and the Quick-Draw datasets.
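The margin-based strategy above queries the unlabeled points whose smallest adversarial perturbation is shortest, treating that norm as a proxy for distance to the decision boundary; a PyTorch sketch follows. The `attack` callable (e.g. a DeepFool-style routine) and the L2 ranking are assumptions for illustration.

```python
import torch

def adversarial_margin_query(model, pool, attack, budget=100):
    """Rank unlabeled samples by the L2 norm of their adversarial perturbation
    (a proxy for the margin) and return the indices of the `budget` smallest."""
    model.eval()
    with torch.no_grad():
        pseudo = model(pool).argmax(dim=1)    # attack w.r.t. the predicted labels
    x_adv = attack(model, pool, pseudo)       # e.g. a DeepFool-style attack
    margins = (x_adv - pool).flatten(1).norm(dim=1)
    return margins.topk(budget, largest=False).indices
```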
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
TLDR
This paper explores clean-label poisoning attacks on neural nets, proposes an optimization-based method for crafting poisons, and shows that just one single poison image can control classifier behavior when transfer learning is used.
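The clean-label poisons in the cited work are crafted by colliding the poison's penultimate-layer features with the target's while keeping the poison visually close to a base image; a minimal PyTorch sketch of that objective is shown below. The `features` callable, the beta weight, and the use of Adam (the original paper uses a forward-backward splitting scheme) are assumptions for illustration.

```python
import torch

def craft_feature_collision_poison(features, base, target, beta=0.1,
                                    steps=200, lr=0.01):
    """Minimize ||f(p) - f(target)||^2 + beta * ||p - base||^2 so the poison
    looks like `base` but sits near `target` in feature space."""
    poison = base.clone().detach().requires_grad_(True)
    target_feat = features(target).detach()
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((features(poison) - target_feat) ** 2).sum() \
               + beta * ((poison - base) ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0, 1)               # keep the poison a valid image
    return poison.detach()
```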
Subpopulation Data Poisoning Attacks
TLDR
It is proved that, under some assumptions, subpopulation attacks are impossible to defend against, and the limitations of existing defenses against these attacks are demonstrated empirically, highlighting the difficulty of protecting machine learning against this threat.