Corpus ID: 245853773

FairEdit: Preserving Fairness in Graph Neural Networks through Greedy Graph Editing

Donald Loveland, Jiayi Pan, Aaresh Farrokh Bhathena, Yiyang Lu
Graph Neural Networks (GNNs) have proven to excel at predictive modeling tasks where the underlying data is a graph. However, as GNNs are increasingly used in human-centered applications, the issue of fairness has arisen. While edge deletion is a common method for promoting fairness in GNNs, it fails to account for cases where the data inherently lacks fair connections. In this work, we consider the largely unexplored method of edge addition, combined with deletion, to promote fairness. We propose two model…

Subgroup Fairness in Graph-based Spam Detection
This paper designs a model that jointly infers hidden subgroup memberships and exploits those memberships to calibrate the target GNN's detection accuracy across subgroups, demonstrating that the model can be trained to treat the subgroups more fairly.

Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning
This paper proposes a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning, and introduces a new dyadic group definition to measure the bias of a link prediction task when paired with group-based fairness metrics.
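The biased dropout idea above can be sketched as follows: edges whose endpoints share the sensitive attribute (homophilous edges) are dropped with a higher probability than cross-group edges. This is a minimal illustration under simplifying assumptions, not FairDrop's exact randomized-response scheme; `p_same` and `p_diff` are made-up parameter names.

```python
import random

def biased_edge_dropout(edges, sensitive, p_same=0.7, p_diff=0.2, seed=0):
    """Keep each edge with a probability that depends on whether its
    endpoints share the sensitive attribute: same-group (homophilous)
    edges are dropped more aggressively to counteract homophily."""
    rng = random.Random(seed)
    kept = []
    for u, v in edges:
        p_drop = p_same if sensitive[u] == sensitive[v] else p_diff
        if rng.random() >= p_drop:
            kept.append((u, v))
    return kept
```

In the extreme setting `p_same=1.0, p_diff=0.0`, only cross-group edges survive, which makes the bias of the sampler easy to verify.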

Fairwalk: Towards Fair Graph Embedding
This paper proposes a fairness-aware embedding method, namely Fairwalk, which extends node2vec, and demonstrates that Fairwalk reduces bias under multiple fairness metrics while still preserving the utility.

Predict then Propagate: Graph Neural Networks meet Personalized PageRank
This paper uses the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank, and constructs a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP.
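The propagation scheme described above has a compact closed form, Z = softmax(alpha * (I - (1 - alpha) * A_hat)^(-1) * H), which APPNP approximates by power iteration. A rough numpy sketch (softmax omitted; `A_hat` is assumed to be the symmetrically normalized adjacency with self-loops):

```python
import numpy as np

def appnp_propagate(A_hat, H, alpha=0.1, K=10):
    """APPNP power iteration: Z <- (1 - alpha) * A_hat @ Z + alpha * H,
    which converges to the PPNP fixed point
    Z* = alpha * inv(I - (1 - alpha) * A_hat) @ H."""
    Z = H.copy()
    for _ in range(K):
        Z = (1 - alpha) * (A_hat @ Z) + alpha * H
    return Z
```

Because the teleport term `alpha * H` re-injects the initial predictions at every step, the iteration keeps local information while still propagating over a large neighborhood.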

Adversarial Attacks on Neural Networks for Graph Data
This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting the node's features and the graph structure, taking the dependencies between instances into account.

node2vec: Scalable Feature Learning for Networks
node2vec is an algorithmic framework for learning continuous feature representations for nodes in networks; it defines a flexible notion of a node's network neighborhood and designs a biased random walk procedure that efficiently explores diverse neighborhoods.
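The biased second-order walk above can be sketched as follows: from the current node v, having arrived from t, the unnormalized transition weight is 1/p back to t, 1 to a common neighbor of t and v, and 1/q otherwise, where p and q are the return and in-out parameters. A toy implementation on an adjacency-list dict, assuming an undirected graph:

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, seed=0):
    """One node2vec-style second-order random walk of the given length.
    adj maps each node to a list of its neighbors."""
    rng = random.Random(seed)
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if not nbrs:
            break  # dead end
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))  # first step is unbiased
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:
                weights.append(1.0 / p)      # return to the previous node
            elif x in adj[prev]:
                weights.append(1.0)          # stay close (BFS-like)
            else:
                weights.append(1.0 / q)      # move outward (DFS-like)
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk
```

Small q biases walks outward (DFS-like exploration), while small p discourages immediate backtracking.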

Fairness Constraints: Mechanisms for Fair Classification
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel, intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows fine-grained control over the degree of fairness, often at a small cost in accuracy.

Inductive Representation Learning on Large Graphs
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
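One layer of the inductive scheme above can be sketched with the mean aggregator: each node combines its own features with the mean of its neighbors' features through learned weights, applies a nonlinearity, and L2-normalizes the result. This is a simplified sketch (a sum of two linear maps rather than the paper's concatenation; `W_self` and `W_neigh` are illustrative names):

```python
import numpy as np

def sage_mean_layer(adj, H, W_self, W_neigh):
    """One GraphSAGE-style layer with a mean aggregator.
    adj maps each node index to a list of neighbor indices;
    H is the (num_nodes x dim) feature matrix."""
    out = []
    for v in range(len(H)):
        nbrs = adj[v]
        h_n = np.mean(H[nbrs], axis=0) if nbrs else np.zeros_like(H[v])
        h = H[v] @ W_self + h_n @ W_neigh   # combine self and neighborhood
        out.append(np.maximum(h, 0.0))       # ReLU
    H_new = np.array(out)
    norms = np.linalg.norm(H_new, axis=1, keepdims=True)
    return H_new / np.clip(norms, 1e-12, None)  # L2-normalize rows
```

Because the layer only needs a node's (sampled) neighborhood, it applies to nodes unseen at training time, which is what makes the framework inductive.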

Semi-Supervised Classification with Graph Convolutional Networks
A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operates directly on graphs, outperforming related methods by a significant margin.
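The graph-convolution variant referenced above propagates features as H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), where the degrees are taken on the self-loop-augmented adjacency. A minimal dense numpy sketch of one layer:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} @ H @ W),
    with degrees computed on the self-loop-augmented adjacency A + I."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)
```

The dense formulation is for illustration only; practical implementations use sparse matrix products.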

A Survey on Bias and Fairness in Machine Learning
This survey investigates real-world applications that have exhibited bias in various ways and creates a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems.

From Parity to Preference-based Notions of Fairness in Classification
Drawing inspiration from the fair-division and envy-freeness literature in economics and game theory, this paper proposes preference-based notions of fairness, under which any group of users would collectively prefer its own treatment or outcomes, regardless of the (dis)parity relative to other groups.