Corpus ID: 235669723

Subgroup Generalization and Fairness of Graph Neural Networks

@inproceedings{Ma2021SubgroupGA,
  title={Subgroup Generalization and Fairness of Graph Neural Networks},
  author={Jiaqi Ma and Junwei Deng and Qiaozhu Mei},
  booktitle={NeurIPS},
  year={2021}
}
Despite the enormous success of graph neural networks (GNNs) in applications, theoretical understanding of their generalization ability, especially for node-level tasks where data are not independent and identically distributed (IID), has been sparse. The theoretical investigation of the generalization performance is beneficial for understanding fundamental issues (such as fairness) of GNN models and designing better learning methods. In this paper, we present a novel PAC-Bayesian analysis for GNNs… 
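For orientation, one standard (McAllester-style) PAC-Bayes bound for the IID setting is sketched below; it is shown only as background and is not the paper's result, whose analysis addresses non-IID node-level data and accuracy gaps across subgroups of test nodes. The notation (prior P, posterior Q, sample size m) is the conventional one, not taken from the paper's exact statement.

```latex
% Classical PAC-Bayes bound (IID setting), background only:
% with probability at least 1 - \delta over an IID sample S of size m,
% simultaneously for all posteriors Q given a data-independent prior P,
\[
  \mathbb{E}_{h \sim Q}\, L_{\mathcal{D}}(h)
  \;\le\;
  \mathbb{E}_{h \sim Q}\, \widehat{L}_{S}(h)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m}{\delta}}{2(m-1)}}.
\]
```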

FairNorm: Fair and Fast Graph Neural Network Training

TLDR
FairNorm is proposed, a unified normalization framework that reduces bias in GNN-based learning while also providing provably faster convergence; it is empirically shown that the proposed framework converges faster than a naive baseline in which no normalization is employed.

Fair Node Representation Learning via Adaptive Data Augmentation

TLDR
Comparison with multiple benchmarks demonstrates that the proposed augmentation strategies can improve fairness in terms of statistical parity and equal opportunity, while providing comparable utility to state-of-the-art contrastive methods.
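As a concrete reference for the two fairness notions named in this summary, here is a minimal sketch of the group-fairness metrics (statistical parity difference and equal opportunity difference) for binary predictions and a binary sensitive attribute; the function names and signatures are illustrative, not taken from the paper.

```python
import numpy as np

def statistical_parity_diff(y_pred, sens):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)|: gap in positive prediction rates
    between the two sensitive groups (binary predictions, binary attribute)."""
    y_pred, sens = np.asarray(y_pred), np.asarray(sens)
    return abs(y_pred[sens == 0].mean() - y_pred[sens == 1].mean())

def equal_opportunity_diff(y_pred, y_true, sens):
    """|P(yhat=1 | y=1, s=0) - P(yhat=1 | y=1, s=1)|: gap in true positive
    rates between the two sensitive groups."""
    y_pred, y_true, sens = map(np.asarray, (y_pred, y_true, sens))
    pos = y_true == 1
    return abs(y_pred[pos & (sens == 0)].mean() - y_pred[pos & (sens == 1)].mean())
```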

A Survey on Fairness for Machine Learning on Graphs

TLDR
This survey is the first one dedicated to fairness for relational data and provides a comprehensive overview of recent contributions in the domain of fair machine learning for graphs, which are classified into pre-processing, in-processing, and post-processing models.

Handling Distribution Shifts on Graphs: An Invariance Perspective

TLDR
A new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM), is proposed that enables graph neural networks to leverage invariance principles for prediction; the validity of the method is established by theoretically showing its guarantee of a valid OOD solution.
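To make the invariance idea concrete, here is a minimal sketch of a variance-penalized multi-environment risk (in the spirit of V-REx); it is not the authors' exact EERM objective, which additionally trains environment generators adversarially to "explore" new environments.

```python
import torch

def variance_penalized_risk(per_env_losses, beta=1.0):
    """Mean empirical risk across training environments plus a penalty on the
    variance of those risks; a small variance favors predictors whose risk is
    invariant across environments (illustrative, not the exact EERM loss).

    per_env_losses: list of scalar loss tensors, one per environment.
    """
    risks = torch.stack(per_env_losses)
    return risks.mean() + beta * risks.var()
```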

Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks

TLDR
Experiments on empirical datasets demonstrate that adversarial fairness attacks can degrade the fairness of GNN predictions with a low perturbation rate and without a significant drop in accuracy.

FMP: Toward Fair Graph Message Passing against Topology Bias

TLDR
A Fair Message Passing (FMP) scheme is proposed to aggregate useful information from neighbors while minimizing the effect of topology bias, within a unified framework that considers graph smoothness and fairness objectives.

Shift-Robust Node Classification via Graph Adversarial Clustering

TLDR
On a large dataset with closed-set shift (ogb-arxiv), existing domain adaptation algorithms can barely improve generalization, if they do not worsen it, while SRNC is still able to mitigate the negative effect of the shift across different testing times.

Out-Of-Distribution Generalization on Graphs: A Survey

TLDR
This paper is the first systematic and comprehensive review of OOD generalization on graphs and categorizes existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy, based on their positions in the graph machine learning pipeline.

Fairness in Graph Mining: A Survey

TLDR
A novel taxonomy of fairness notions on graphs is proposed, which sheds light on their connections and differences, and an organized summary of existing techniques that promote fairness in graph mining is presented.

Avoiding Biases due to Similarity Assumptions in Node Embeddings

TLDR
This work proposes a node embedding that makes no similarity assumptions, enables fast link prediction, and has linear complexity; avoiding these assumptions does not significantly affect accuracy, as shown via comparisons against several existing methods on 21 real-world networks.

References

Showing 1-10 of 53 references

CrossWalk: Fairness-enhanced Node Representation Learning

TLDR
Extensive experiments show the effectiveness of the algorithm, CrossWalk, in enhancing fairness in various graph algorithms, including influence maximization, link prediction, and node classification, on synthetic and real networks, with only a very small decrease in performance.

Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning

TLDR
This paper proposes a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning, and proposes a new dyadic group definition to measure the bias of a link prediction task when paired with group-based fairness metrics.
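A minimal sketch of biased edge dropout is shown below: edges whose endpoints share the sensitive attribute are dropped with a higher probability than edges that bridge groups, which counteracts homophily with respect to that attribute. The function name, the two probabilities, and the array layout are illustrative assumptions, not FairDrop's exact procedure.

```python
import numpy as np

def biased_edge_dropout(edge_index, sens, p_same=0.6, p_diff=0.2, rng=None):
    """Drop intra-group edges (same sensitive attribute at both endpoints)
    with probability p_same and inter-group edges with probability p_diff,
    where p_same > p_diff. Returns the retained edges.

    edge_index : (2, E) integer array of source/target node ids
    sens       : (N,) array of sensitive-attribute values per node
    """
    rng = np.random.default_rng() if rng is None else rng
    src, dst = edge_index
    same_group = sens[src] == sens[dst]
    drop_prob = np.where(same_group, p_same, p_diff)
    keep = rng.random(drop_prob.shape) >= drop_prob
    return edge_index[:, keep]
```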

A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks

TLDR
The result reveals that the maximum node degree and the spectral norm of the weights govern the generalization bounds of both models, and shows that the PAC-Bayes bound for GCNs is a natural generalization of the results developed in (Neyshabur et al., 2017) for fully connected and convolutional neural networks.
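The schematic below records only the dependence structure named in this summary (maximum node degree and spectral norms of the weights); constants and logarithmic factors are omitted, and the exact statement should be taken from the paper itself.

```latex
% Schematic dependence of the reported bound for an L-layer GCN
% (constants and log factors dropped; illustrative only):
\[
  \text{generalization gap}
  \;\lesssim\;
  \sqrt{\frac{d^{\,L-1}\,
        \prod_{\ell=1}^{L}\lVert W_\ell\rVert_2^{2}\,
        \sum_{\ell=1}^{L}\frac{\lVert W_\ell\rVert_F^{2}}{\lVert W_\ell\rVert_2^{2}}}
       {\gamma^{2}\, m}},
\]
% where d is the maximum node degree, W_\ell the layer-\ell weight matrix,
% \gamma the classification margin, and m the number of training samples.
```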

Pitfalls of Graph Neural Network Evaluation

TLDR
This paper performs a thorough empirical evaluation of four prominent GNN models and suggests that simpler GNN architectures are able to outperform the more sophisticated ones if the hyperparameters and the training procedure are tuned fairly for all models.

Fair Representation Learning for Heterogeneous Information Networks

TLDR
This work proposes a comprehensive set of de-biasing methods for fair HIN representation learning, including sampling-based, projection-based, and graph neural network (GNN)-based techniques, and evaluates the performance of the proposed methods in an automated career counseling application to mitigate gender bias in career recommendation.

Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information

TLDR
The theoretical analysis shows that FairGNN can ensure the fairness of GNNs under mild conditions given limited nodes with known sensitive attributes, and extensive experiments on real-world datasets demonstrate the effectiveness of FairGNN in debiasing while keeping high accuracy.
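For intuition, here is a minimal sketch of an adversarial debiasing objective in the spirit of the setting above: a GNN solves the node classification task while an adversary tries to recover the (estimated) sensitive attribute from the node representations. The function and its arguments are illustrative; in the limited-label setting, `sens_pred` would come from a separate sensitive-attribute estimator, and this is not the paper's exact formulation.

```python
import torch.nn.functional as F

def adversarial_debiasing_losses(h, logits, y, sens_pred, adversary, alpha=1.0):
    """Illustrative losses: the GNN minimizes the task loss while making the
    sensitive attribute hard to recover from its representations h; the
    adversary minimizes its own recovery loss.

    h         : (N, H) node representations from the GNN
    logits    : (N, C) task logits, y : (N,) labels
    sens_pred : (N,) observed or estimated binary sensitive attribute
    adversary : module mapping (N, H) -> (N, 1) logits for the attribute
    """
    task_loss = F.cross_entropy(logits, y)
    adv_logits = adversary(h).squeeze(-1)
    adv_loss = F.binary_cross_entropy_with_logits(adv_logits, sens_pred.float())
    gnn_loss = task_loss - alpha * adv_loss   # minimized w.r.t. GNN parameters
    return gnn_loss, adv_loss                 # adv_loss minimized w.r.t. adversary
```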

The KL-Divergence between a Graph Model and its Fair I-Projection as a Fairness Regularizer

TLDR
This work proposes a generic approach applicable to most probabilistic graph modeling methods: it defines the class of fair graph models corresponding to a chosen set of fairness criteria and introduces a fairness regularizer defined as the KL-divergence between the graph model and its I-projection onto the set of fair models.
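A minimal sketch of such a regularized objective, assuming a probabilistic graph model P_theta and a chosen fair family F, is written out below; the notation and the direction of the divergences follow the summary above rather than the paper's exact definitions.

```latex
% Sketch of a fit-plus-fairness objective (illustrative notation):
\[
  \min_{\theta}\;
  \underbrace{-\log P_\theta(G)}_{\text{model fit}}
  \;+\;
  \lambda\,
  \mathrm{KL}\!\bigl(P_\theta \,\big\|\, \Pi_{\mathcal{F}}(P_\theta)\bigr),
  \qquad
  \Pi_{\mathcal{F}}(P_\theta) \;=\; \arg\min_{Q \in \mathcal{F}}\, \mathrm{KL}\bigl(Q \,\big\|\, P_\theta\bigr),
\]
% where \mathcal{F} is the class of fair graph models for the chosen fairness
% criteria, \Pi_{\mathcal{F}}(P_\theta) is the I-projection of the model onto
% that class, and \lambda trades off fit against fairness.
```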

Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs

TLDR
This work identifies a set of key designs -- ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations -- that boost learning from the graph structure under heterophily and combines them into a graph neural network, H2GCN, which is used as the base method to empirically evaluate the effectiveness of the identified designs.
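The designs listed in this summary can be sketched in a few lines: the layer below keeps the ego embedding separate from neighbor aggregation and adds a 2-hop neighborhood, and a final classifier would concatenate the intermediate representations of all rounds. This is a simplified illustration, not the exact H2GCN architecture.

```python
import torch

def heterophily_friendly_layer(x, adj1, adj2, w_self, w_hop1, w_hop2):
    """Simplified round combining the listed designs: ego- and
    neighbor-embedding separation plus a higher-order (2-hop) neighborhood,
    with the three parts concatenated rather than summed.

    x    : (N, F) node features
    adj1 : (N, N) row-normalized 1-hop adjacency without self-loops
    adj2 : (N, N) row-normalized 2-hop adjacency
    """
    h_self = x @ w_self            # ego embedding, kept separate from neighbors
    h_hop1 = (adj1 @ x) @ w_hop1   # 1-hop neighbor aggregation
    h_hop2 = (adj2 @ x) @ w_hop2   # 2-hop neighbor aggregation
    return torch.relu(torch.cat([h_self, h_hop1, h_hop2], dim=-1))
```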

Stability and Generalization of Graph Convolutional Neural Networks

TLDR
This paper is the first to study stability bounds for graph learning in a semi-supervised setting; it derives generalization bounds for GCNN models and shows that the algorithmic stability of a GCNN model depends on the largest absolute eigenvalue of its graph convolution filter.
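Since the controlling quantity is the largest absolute eigenvalue of the graph convolution filter, a small sketch of how one might inspect it is given below; the functions are illustrative, and the observation in the comment (that symmetric normalization keeps this eigenvalue at most 1, unlike the raw adjacency) is a standard spectral fact rather than a quote from the paper.

```python
import numpy as np

def max_abs_eigenvalue(filter_matrix):
    """Largest absolute eigenvalue of a symmetric graph convolution filter,
    the quantity the stability analysis ties generalization to."""
    return np.max(np.abs(np.linalg.eigvalsh(filter_matrix)))

def sym_normalized_filter(adj):
    """Symmetrically normalized filter D^{-1/2} (A + I) D^{-1/2}; its largest
    absolute eigenvalue is at most 1, whereas the top eigenvalue of the raw
    adjacency A grows with node degrees."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```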

Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels

TLDR
A new class of graph kernels, Graph Neural Tangent Kernels (GNTKs), is presented; GNTKs correspond to infinitely wide multi-layer GNNs trained by gradient descent, enjoy the full expressive power of GNNs, and inherit the advantages of graph kernels (GKs).
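Once GNTK entries are computed, learning reduces to kernel regression; the sketch below shows that downstream step only, assuming the kernel matrices between graphs are already available (the GNTK computation itself is not shown, and the function name is illustrative).

```python
import numpy as np

def kernel_ridge_predict(K_train, y_train, K_test, ridge=1e-3):
    """Kernel ridge regression with a precomputed graph kernel.

    K_train : (n_train, n_train) kernel matrix between training graphs
    K_test  : (n_test, n_train) kernel matrix between test and training graphs
    y_train : (n_train,) training targets
    """
    alpha = np.linalg.solve(K_train + ridge * np.eye(K_train.shape[0]), y_train)
    return K_test @ alpha
```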
...