Corpus ID: 209394131

Subpopulation Data Poisoning Attacks

@article{Jagielski2020SubpopulationDP,
  title={Subpopulation Data Poisoning Attacks},
  author={Matthew Jagielski and Giorgio Severi and Niklas Pousette Harger and Alina Oprea},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.14026}
}
Machine learning (ML) systems are deployed in critical settings, but they can fail in unexpected ways, degrading the accuracy of their predictions. Poisoning attacks adversarially modify the data used by an ML algorithm in order to selectively change its output once the model is deployed. In this work, we introduce a novel data poisoning attack called a \emph{subpopulation attack}, which is particularly relevant when datasets are large and diverse. We design a modular…
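As a rough illustration of the idea (not the authors' actual algorithm), a label-flipping variant of a subpopulation attack selects only the training points matching a subpopulation predicate and flips their labels, leaving the rest of the dataset untouched. All function and variable names below are hypothetical.

```python
import random

def subpopulation_label_flip(dataset, in_subpopulation, poison_rate, seed=0):
    """Flip labels for a fraction of points in a targeted subpopulation.

    dataset: list of (features, label) pairs with binary labels in {0, 1}.
    in_subpopulation: predicate over features selecting the target subpopulation.
    poison_rate: fraction of matching points whose labels are flipped.
    """
    rng = random.Random(seed)
    # Indices of points that belong to the targeted subpopulation.
    idx = [i for i, (x, _) in enumerate(dataset) if in_subpopulation(x)]
    n_poison = int(len(idx) * poison_rate)
    poisoned = list(dataset)
    for i in rng.sample(idx, n_poison):
        x, y = poisoned[i]
        poisoned[i] = (x, 1 - y)  # flip the binary label
    return poisoned

# Toy example: target points whose first feature exceeds 0.5.
data = [((0.9, 0.1), 0), ((0.8, 0.3), 0), ((0.1, 0.7), 1), ((0.2, 0.9), 1)]
attacked = subpopulation_label_flip(data, lambda x: x[0] > 0.5, poison_rate=1.0)
```

Points outside the predicate keep their labels, which is what makes the attack stealthy: overall accuracy barely moves while predictions on the subpopulation are corrupted.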
Citations

Property Inference From Poisoning
Model-Targeted Poisoning Attacks: Provable Convergence and Certified Bounds
Active Learning Under Malicious Mislabeling and Poisoning Attacks
Customizing Triggers with Concealed Data Poisoning
Machine Learning Integrity and Privacy in Adversarial Environments
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing
Concealed Data Poisoning Attacks on NLP Models
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

References

Showing 1–10 of 67 references
Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
ANTIDOTE: understanding and defending against poisoning of anomaly detectors
Evasion Attacks against Machine Learning at Test Time
Poisoning Attacks against Support Vector Machines
Membership Inference Attacks Against Machine Learning Models
Failure Modes in Machine Learning Systems