Corpus ID: 237492193

Is Heterophily A Real Nightmare For Graph Neural Networks To Do Node Classification?

Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by exploiting graph structure through a relational inductive bias (the homophily assumption). Though GNNs are believed to outperform NNs on real-world tasks, their performance advantages over graph-agnostic NNs are often not satisfactory in practice. Heterophily has been considered a main cause, and numerous works have been put forward to address it. In this paper, we first show that not all cases of heterophily are harmful for GNNs… 
Is Homophily a Necessity for Graph Neural Networks?
This work empirically finds that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed methods on some commonly used heterophilous graphs, and considers whether homophily is truly necessary for good GNN performance.
Simple Truncated SVD based Model for Node Classification on Heterophilic Graphs
This work proposes a simple alternative method that exploits Truncated Singular Value Decomposition (TSVD) of the topological structure and node features, and achieves up to ∼30% improvement in performance over state-of-the-art methods on heterophilic graphs.
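A minimal sketch of how such a TSVD-based pipeline might look. This is an illustrative reading of the idea, not the paper's exact method; the function name, the rank k, and the concatenation step are our assumptions.

```python
import numpy as np

def tsvd_features(A, X, k=32):
    """Illustrative sketch: build node representations from rank-k
    truncated SVDs of the adjacency matrix A and the feature matrix X,
    then concatenate the two low-rank embeddings."""
    # Truncated SVD of the topology: keep the top-k singular directions.
    Ua, Sa, _ = np.linalg.svd(A, full_matrices=False)
    Za = Ua[:, :k] * Sa[:k]
    # Truncated SVD of the node features.
    Ux, Sx, _ = np.linalg.svd(X, full_matrices=False)
    Zx = Ux[:, :k] * Sx[:k]
    return np.concatenate([Za, Zx], axis=1)

# Toy usage: 4 nodes on a ring, random 8-dimensional features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).random((4, 8))
Z = tsvd_features(A, X, k=2)
print(Z.shape)  # (4, 4)
```

The resulting embeddings Z could then feed any graph-agnostic classifier, which is what makes the approach attractive on heterophilic graphs: it never mixes a node's features with those of dissimilar neighbors.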
Predict then Propagate: Graph Neural Networks meet Personalized PageRank
This paper uses the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank, and constructs a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP.
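The propagation scheme described above can be sketched as follows. The recursion Z_{k+1} = (1 − α) Â Z_k + α H follows the APPNP formulation, but the helper name, the toy graph, and the default values of α and K here are illustrative assumptions.

```python
import numpy as np

def appnp_propagate(A_hat, H, alpha=0.1, K=10):
    """Sketch of APPNP-style propagation: predictions H from a
    graph-agnostic model are spread over the graph with a
    personalized-PageRank recursion, teleporting back to H
    with probability alpha at every step."""
    Z = H.copy()
    for _ in range(K):
        Z = (1.0 - alpha) * (A_hat @ Z) + alpha * H
    return Z

# Toy usage: 3-node path graph with self-loops, symmetric normalization.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_tilde = A + np.eye(3)                    # add self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))  # D^-1/2 (A + I) D^-1/2
H = np.eye(3)                              # toy one-hot "predictions"
Z = appnp_propagate(A_hat, H)
print(Z.shape)  # (3, 3)
```

Decoupling prediction from propagation this way keeps the number of learnable parameters independent of the propagation depth K, which is the key design choice behind PPNP/APPNP.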
Representation Learning on Graphs with Jumping Knowledge Networks
This work explores an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation in graphs.
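For concreteness, the concatenation variant of JK aggregation can be sketched as below; the function name and toy shapes are ours.

```python
import numpy as np

def jk_concat(layer_outputs):
    """Sketch of Jumping Knowledge aggregation (concatenation variant):
    combine the representations produced at every GNN layer, so each
    node can draw on whichever neighborhood range suits it best."""
    return np.concatenate(layer_outputs, axis=1)

# Toy usage: two layers of 4-dimensional representations for 5 nodes.
h1 = np.ones((5, 4))
h2 = np.zeros((5, 4))
out = jk_concat([h1, h2])
print(out.shape)  # (5, 8)
```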
Geom-GCN: Geometric Graph Convolutional Networks
The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation, and an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs.
Measuring and Improving the Use of Graph Information in Graph Neural Networks
A context-surrounding GNN framework is introduced and a new, improved GNN model, called CS-GNN, is devised to improve the use of graph information based on the smoothness values of a graph.
Revisiting Graph Neural Networks: All We Have is Low-Pass Filters
The results indicate that graph neural networks only perform low-pass filtering on feature vectors and do not have the non-linear manifold learning property, and some insights on GCN-based graph neural network design are proposed.
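A small numerical illustration of this low-pass view (the toy graph is ours): the GCN propagation matrix Â = D^{-1/2}(A + I)D^{-1/2} has its spectrum inside (−1, 1], so repeatedly multiplying features by Â damps every spectral component except the smoothest one.

```python
import numpy as np

# Toy connected graph on 4 nodes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_tilde = A + np.eye(4)                    # add self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))  # D^-1/2 (A + I) D^-1/2

eigvals = np.sort(np.linalg.eigvalsh(A_hat))
print(eigvals)  # spectrum lies in (-1, 1]; the top eigenvalue is 1
```

The eigenvalue 1 corresponds to the smoothest ("DC") signal on the graph; all other components shrink geometrically under repeated propagation, which is exactly the low-pass filtering behavior the paper describes.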
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
Beyond Low-frequency Information in Graph Convolutional Networks
An experimental investigation assessing the roles of low-frequency and high-frequency signals is presented, and a novel Frequency Adaptation Graph Convolutional Network (FAGCN) with a self-gating mechanism is proposed, which can adaptively integrate different signals in the process of message passing.
Inductive Representation Learning on Large Graphs
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
Simple and Deep Graph Convolutional Networks
GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, initial residual and identity mapping, which effectively relieve the problem of over-smoothing.
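One GCNII layer combines the two techniques roughly as H^{(l+1)} = σ(((1 − α) Â H^{(l)} + α H^{(0)})((1 − β) I + β W)). A minimal sketch, where the function name, default hyperparameters, and toy inputs are illustrative assumptions:

```python
import numpy as np

def gcnii_layer(A_hat, H, H0, W, alpha=0.1, beta=0.5):
    """Sketch of one GCNII layer: the initial residual mixes the
    input representation H0 back in after propagation, and the
    identity mapping shrinks the weight matrix toward I."""
    support = (1 - alpha) * (A_hat @ H) + alpha * H0            # initial residual
    Z = support @ ((1 - beta) * np.eye(W.shape[0]) + beta * W)  # identity mapping
    return np.maximum(Z, 0)                                     # ReLU

# Toy usage: trivial graph (A_hat = I) and a zero weight matrix,
# so the layer reduces to halving the input representation.
A_hat = np.eye(3)
H0 = np.random.default_rng(1).random((3, 4))
W = np.zeros((4, 4))
H1 = gcnii_layer(A_hat, H0, H0, W)
print(H1.shape)  # (3, 4)
```

Because the weight matrix is pulled toward the identity and the input is re-injected at every layer, stacking many such layers no longer collapses all node representations to the same point, which is how the model avoids over-smoothing.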