Corpus ID: 239768614

Fairness Degrading Adversarial Attacks Against Clustering Algorithms

Anshuman Chhabra, Adish Kumar Singla, Prasant Mohapatra
Clustering algorithms are ubiquitous in modern data science pipelines, and are utilized in numerous fields ranging from biology to facility location. Due to their widespread use, especially in societal resource allocation problems, recent research has aimed at making clustering algorithms fair, with great success. Furthermore, it has also been shown that clustering algorithms, much like other machine learning algorithms, are susceptible to adversarial attacks where a malicious entity seeks to… 

Tables from this paper

Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks

This work demonstrates that graph neural networks (GNNs) are vulnerable to adversarial fairness attacks, presenting evidence for the existence and effectiveness of attacks designed to degrade GNN fairness.

Robust Fair Clustering: A Novel Fairness Attack and Defense Framework

This work proposes Consensus Fair Clustering (CFC), the first robust fair clustering approach; CFC transforms consensus clustering into a fair graph partitioning problem and iteratively learns to generate fair cluster outputs.

Suspicion-Free Adversarial Attacks on Clustering Algorithms

This paper proposes a black-box adversarial attack on clustering models with linearly separable clusters, with theoretical guarantees; it is the first work to generate spill-over adversarial samples without knowledge of the true metric.

A Black-box Adversarial Attack for Poisoning Clustering

Is data clustering in adversarial settings secure?

It is shown that an attacker may significantly poison the whole clustering process by adding a relatively small percentage of attack samples to the input data, and that some attack samples may be obfuscated so as to remain hidden within existing clusters.
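To make the "small percentage of attack samples" effect concrete, here is a toy illustration (not the attack from the cited paper, and all data values are invented for the example): plain Lloyd's k-means on 1-D points, where injecting three attacker-chosen points merges the two genuine clusters and hands one centroid entirely to the attacker.

```python
# Toy poisoning sketch: Lloyd's k-means on 1-D data. A few injected
# points pull a centroid away, collapsing the two real clusters into one.

def kmeans_1d(data, centroids, iters=20):
    """Plain Lloyd's algorithm on 1-D points with fixed initial centroids."""
    c = list(centroids)
    for _ in range(iters):
        groups = [[] for _ in c]
        for x in data:
            # assign each point to its nearest centroid
            j = min(range(len(c)), key=lambda j: abs(x - c[j]))
            groups[j].append(x)
        # recompute centroids (keep old value for an empty group)
        c = [sum(g) / len(g) if g else c[j] for j, g in enumerate(groups)]
    return c

clean = [0, 1, 2, 10, 11, 12]
print(kmeans_1d(clean, [0.0, 12.0]))        # [1.0, 11.0] -- two true clusters

poisoned = clean + [30, 31, 32]             # three attack samples (33% here)
print(kmeans_1d(poisoned, [0.0, 12.0]))     # [6.0, 31.0] -- clusters merged
```

The toy ratio of attack samples is exaggerated for visibility; the cited result is that even a relatively small fraction suffices on real data.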

Poisoning Attacks on Algorithmic Fairness

This work introduces an optimization framework for poisoning attacks against algorithmic fairness, and develops a gradient-based poisoning attack aimed at introducing classification disparities among different groups in the data.

Attacking DBSCAN for Fun and Profit

This work explores how an attacker can subvert DBSCAN, a popular density-based clustering algorithm, including a "confidence attack," in which an adversary poisons the clusters to the point that the defender loses confidence in the utility of the system.

An Overview of Fairness in Clustering

This survey provides researchers with an organized overview of the field, bridges the gap by categorizing existing research on fair clustering, and motivates new and unexplored lines of research regarding fairness in clustering.

Fair Clustering Through Fairlets

It is shown that any fair clustering problem can be decomposed into first finding good fairlets and then applying existing machinery for traditional clustering algorithms; while finding good fairlets can be NP-hard, they can be obtained by efficient approximation algorithms based on minimum cost flow.
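The fairness notion underlying the fairlet line of work is "balance": across all clusters, the worst-case ratio between the two protected groups. A minimal sketch (illustrative, not the paper's min-cost-flow algorithm; the `'r'`/`'b'` group labels are this example's convention):

```python
# Balance of a clustering under a binary protected attribute:
# min over clusters of min(#red/#blue, #blue/#red). Higher is fairer;
# 0.0 means some cluster contains only one group.

def balance(clusters):
    """clusters: list of clusters, each a list of group labels 'r' or 'b'."""
    best = 1.0
    for cl in clusters:
        r = cl.count('r')
        b = cl.count('b')
        if r == 0 or b == 0:
            return 0.0          # a fully segregated cluster exists
        best = min(best, r / b, b / r)
    return best

print(balance([['r', 'b'], ['r', 'b', 'b']]))  # 0.5 (least balanced cluster)
print(balance([['r', 'r'], ['b', 'b']]))       # 0.0 (fully segregated)
```

A fairlet decomposition first groups points into small balanced sets (fairlets), so any clustering built from whole fairlets inherits their balance.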

Making Existing Clusterings Fairer: Algorithms, Complexity Results and Insights

This work formulates the minimal cluster modification for fairness (MCMF) problem, where the input is a given partitional clustering and the goal is to change it minimally so that the clustering remains of good quality while becoming fairer.

Clustering with Fairness Constraints: A Flexible and Scalable Approach

This study investigates a general variational formulation of fair clustering, which can integrate fairness constraints with a large class of clustering objectives, and reveals that fairness does not come at a significant cost to the clustering objective.

Guarantees for Spectral Clustering with Fairness Constraints

This work develops variants of both normalized and unnormalized constrained spectral clustering (SC), shows that they help find fairer clusterings on both synthetic and real data, and proves that the algorithms can recover an underlying fair clustering with high probability.