Fairness Degrading Adversarial Attacks Against Clustering Algorithms
@article{Chhabra2021FairnessDA,
  title={Fairness Degrading Adversarial Attacks Against Clustering Algorithms},
  author={Anshuman Chhabra and Adish Kumar Singla and Prasant Mohapatra},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.12020}
}
Clustering algorithms are ubiquitous in modern data science pipelines, and are utilized in numerous fields ranging from biology to facility location. Due to their widespread use, especially in societal resource allocation problems, recent research has aimed at making clustering algorithms fair, with great success. Furthermore, it has also been shown that clustering algorithms, much like other machine learning algorithms, are susceptible to adversarial attacks where a malicious entity seeks to…
2 Citations
Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks
- Computer Science, ArXiv
- 2022
This work demonstrates that graph neural network (GNN) models are vulnerable to adversarial fairness attacks, presenting evidence for the existence and effectiveness of inter-group link injection attacks that aim to degrade fairness.
Robust Fair Clustering: A Novel Fairness Attack and Defense Framework
- Computer Science, ArXiv
- 2022
Proposes Consensus Fair Clustering (CFC), the first robust fair clustering approach, which transforms consensus clustering into a fair graph partitioning problem and iteratively learns to generate fair cluster outputs.
References
SHOWING 1-10 OF 42 REFERENCES
Suspicion-Free Adversarial Attacks on Clustering Algorithms
- Computer Science, AAAI
- 2020
This paper proposes a black-box adversarial attack on clustering models for linearly separable clusters; it is the first work to generate spill-over adversarial samples without knowledge of the true metric, and it provides theoretical guarantees for doing so.
Is data clustering in adversarial settings secure?
- Computer Science, AISec
- 2013
It is shown that an attacker may significantly poison the whole clustering process by adding a relatively small percentage of attack samples to the input data, and that some attack samples may be obfuscated to be hidden within some existing clusters.
Poisoning Attacks on Algorithmic Fairness
- Computer Science, ECML/PKDD
- 2020
This work introduces an optimization framework for poisoning attacks against algorithmic fairness, and develops a gradient-based poisoning attack aimed at introducing classification disparities among different groups in the data.
Attacking DBSCAN for Fun and Profit
- Computer Science, SDM
- 2015
This work explores how an attacker can subvert DBSCAN, a popular density-based clustering algorithm, and explores a “confidence attack,” where an adversary seeks to poison the clusters to the point that the defender loses confidence in the utility of the system.
An Overview of Fairness in Clustering
- Computer Science, IEEE Access
- 2021
This survey categorizes existing research on fair clustering to provide researchers with an organized overview of the field and to motivate new and unexplored lines of research on fairness in clustering.
Fair Clustering Through Fairlets
- Computer Science, NIPS
- 2017
It is shown that any fair clustering problem can be decomposed into first finding good fairlets and then applying existing machinery for traditional clustering algorithms; while finding good fairlets can be NP-hard, they can be obtained via efficient approximation algorithms based on minimum-cost flow.
Making Existing Clusterings Fairer: Algorithms, Complexity Results and Insights
- Computer Science, AAAI
- 2020
This work formulates the minimal cluster modification for fairness (MCMF) problem, where the input is a given partitional clustering and the goal is to change it minimally so that the clustering remains of good quality while becoming fairer.
Clustering with Fairness Constraints: A Flexible and Scalable Approach
- Computer Science, ArXiv
- 2019
This study investigates a general variational formulation of fair clustering, which can integrate fairness constraints with a large class of clustering objectives and reveals that fairness does not come at a significant cost of the clustering objective.
Guarantees for Spectral Clustering with Fairness Constraints
- Computer Science, ICML
- 2019
This work develops variants of both normalized and unnormalized constrained spectral clustering (SC), shows that they help find fairer clusterings on both synthetic and real data, and proves that the algorithms can recover a ground-truth fair clustering with high probability.