Corpus ID: 236170854

Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks

@article{Albert2021AdversarialFG,
  title={Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks},
  author={Kendra Albert and Maggie K. Delano and Bogdan Kulynych and Ram Shankar Siva Kumar},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.10302}
}
Attacks from adversarial machine learning (ML) have the potential to be used "for good": they can be used to run counter to the existing power structures within ML, creating breathing space for those who would otherwise be the targets of surveillance and control. But most research on adversarial ML has not engaged in developing tools for resistance against ML systems. Why? In this paper, we review the broader impact statements that adversarial ML researchers wrote as part of their NeurIPS 2020…


References

Showing 1-10 of 126 references.
Politics of Adversarial Machine Learning
TLDR: In this paper, insights from science and technology studies, anthropology, and human rights literature are drawn on to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems.
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
  Jinyuan Jia and N. Gong. Adaptive Autonomous Secure Cyber Systems, 2020.
TLDR: This chapter takes defending against inference attacks in online social networks as an example to illustrate the opportunities and challenges of defending against ML-equipped inference attacks via adversarial examples.
Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning
TLDR: This paper critically assesses the adequacy and representativeness of physical-domain testing for adversarial machine learning (ML) attacks against computer vision systems involving human subjects, and explores barriers to more inclusive physical testing in adversarial ML.
Practical No-box Adversarial Attacks against DNNs
TLDR: This work investigates no-box adversarial examples, where the attacker can access neither the model information nor the training set and cannot query the model; it proposes three mechanisms for training with a very small dataset and finds that prototypical reconstruction is the most effective.
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
TLDR: This work proves convergence to low robust training loss for polynomial rather than exponential network width, under natural assumptions and with the ReLU activation, and shows that ReLU networks near initialization can approximate the step function, which may be of independent interest.
An Efficient Adversarial Attack for Tree Ensembles
TLDR: Experimental results on several large GBDT and RF models with up to hundreds of trees demonstrate that the method can be thousands of times faster than the previous mixed-integer linear programming (MILP) based approach, while also providing smaller (better) adversarial examples than decision-based black-box attacks on general $\ell_p$ ($p = 1, 2, \infty$) norm perturbations.
POTs: protective optimization technologies
TLDR: This work argues that the focus on algorithms' inputs and outputs misses harms that arise from systems interacting with the world; that the focus on bias and discrimination omits broader harms to populations and their environments; and that relying on service providers excludes scenarios where they are uncooperative or intentionally adversarial.
Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
TLDR: Evidence is provided that, in the general case, robustness to backdoors implies model robustness to adversarial examples, and that detecting the presence of a backdoor in an FL model is unlikely assuming first-order oracles or polynomial time.
Robust Pre-Training by Adversarial Contrastive Learning
TLDR: This work improves robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations, and shows that ACL pre-training can improve semi-supervised adversarial training, even when only a few labeled examples are available.
Membership Inference Attacks Against Machine Learning Models
TLDR: This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained, and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
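To make the leakage idea above concrete, here is a minimal sketch of a membership test in Python. It uses a simple per-example loss-threshold baseline on synthetic scikit-learn data, which is an assumption for illustration only, not the shadow-model attack evaluated in the cited paper.

```python
# Minimal illustrative sketch: loss-threshold membership inference baseline.
# Assumes scikit-learn and synthetic data; not the cited paper's shadow-model attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Split synthetic data into "members" (used for training) and "non-members".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(clf, X, y):
    # Negative log-likelihood of the true label for each example.
    probs = clf.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

loss_members = per_example_loss(model, X_train, y_train)
loss_non_members = per_example_loss(model, X_out, y_out)

# Guess "member" when the loss falls below a threshold taken from member losses.
threshold = np.median(loss_members)
tpr = np.mean(loss_members < threshold)      # members correctly flagged
fpr = np.mean(loss_non_members < threshold)  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")  # a TPR-FPR gap above zero suggests leakage
```

The gap between the member and non-member rates tracks the model's generalization gap; on a well-regularized model it will be small, which is the point of the comparison.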