Corpus ID: 235417126

Is Homophily a Necessity for Graph Neural Networks?

@article{Ma2022IsHA,
  title={Is Homophily a Necessity for Graph Neural Networks?},
  author={Yao Ma and Xiaorui Liu and Neil Shah and Jiliang Tang},
  journal={ArXiv},
  year={2022},
  volume={abs/2106.06134}
}
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks. When applied to semi-supervised node classification, GNNs are widely believed to work well due to the homophily assumption (“like attracts like”), and fail to generalize to heterophilous graphs where dissimilar nodes connect. Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new… 
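
To make the "like attracts like" assumption concrete, homophily is commonly quantified by the edge homophily ratio: the fraction of edges that join same-label nodes (near 1 on homophilous graphs, near 0 on heterophilous ones). A minimal sketch, with an illustrative function name:

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges joining same-label nodes."""
    edges = np.asarray(edges)
    labels = np.asarray(labels)
    same = labels[edges[:, 0]] == labels[edges[:, 1]]
    return same.mean()

# Toy 4-node graph: one intra-class edge out of three.
edges = [(0, 1), (1, 2), (2, 3)]
labels = [0, 0, 1, 0]
print(edge_homophily(edges, labels))  # ~0.33: a heterophilous graph
```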

What Do Graph Convolutional Neural Networks Learn?

TLDR
An investigation of the underlying graph structure of datasets finds that a GCN's semi-supervised node classification (SSNC) performance is significantly influenced by the consistency and uniqueness of the neighborhood structure of nodes within a class.

Graph Neural Networks for Graphs with Heterophily: A Survey

TLDR
A systematic taxonomy covering existing heterophilic GNN models is proposed, along with a general summary and detailed analysis, to facilitate robust and fair evaluation of these graph neural networks.

Characterizing Graph Datasets for Node Classification: Beyond Homophily-Heterophily Dichotomy

TLDR
It is shown that label informativeness (LI) explains why standard message-passing graph neural networks (GNNs) do not perform well on non-homophilous graphs, so such datasets need special attention, and it is argued that adjusted homophily is a better alternative to the homophily measures commonly used in the literature.
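
As a sketch of the adjusted homophily idea referenced above: edge homophily is rescaled by the agreement expected from the degree-weighted class distribution alone, so a value of 0 means "no more intra-class edges than chance". The exact normalization in the paper may differ in detail; this assumes at least two classes:

```python
import numpy as np

def adjusted_homophily(edges, labels):
    """Edge homophily corrected for chance agreement under the
    degree distribution (sketch; assumes >= 2 classes)."""
    edges, labels = np.asarray(edges), np.asarray(labels)
    h_edge = (labels[edges[:, 0]] == labels[edges[:, 1]]).mean()
    # Degree mass per class: count each edge endpoint once.
    deg_mass = np.zeros(labels.max() + 1)
    for u, v in edges:
        deg_mass[labels[u]] += 1
        deg_mass[labels[v]] += 1
    p = deg_mass / (2 * len(edges))  # degree-weighted class shares
    chance = (p ** 2).sum()          # intra-class rate expected by chance
    return (h_edge - chance) / (1 - chance)
```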

How does Heterophily Impact the Robustness of Graph Neural Networks?: Theoretical Connections and Practical Implications

TLDR
This work deduces that separate aggregators for ego- and neighbor-embeddings, a design principle identified to significantly improve prediction on heterophilous graph data, can also offer increased robustness to GNNs, and shows that GNNs merely adopting this design achieve improved empirical and certifiable robustness compared to the best-performing unvaccinated model.

On the Relationship between Heterophily and Robustness of Graph Neural Networks

TLDR
It is deduced that a design principle identified to significantly improve predictive performance under heterophily, namely separate aggregators for ego- and neighbor-embeddings, can also inherently offer increased robustness to GNNs, and that models with this design can be readily combined with explicit defense mechanisms to yield improved robustness.

Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily

TLDR
This work re-examines the heterophily problem of GNNs, investigates the feature aggregation of inter-class neighbors, and proposes a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of GNNs on heterophilic datasets by learning the neighbor effect for each node.

GBK-GNN: Gated Bi-Kernel Graph Neural Networks for Modeling Both Homophily and Heterophily

TLDR
A novel GNN model based on a bi-kernel feature transformation and a selection gate is proposed, where the two kernels capture homophily and heterophily information respectively, and the gate selects which kernel to use for a given node pair.
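
A minimal sketch of the gated bi-kernel idea as described in the TLDR: per edge, a sigmoid gate mixes a "homophily" kernel and a "heterophily" kernel. The weight shapes and gate parametrization here are illustrative, not GBK-GNN's actual design:

```python
import numpy as np

def bi_kernel_layer(X, edges, W_homo, W_hetero, w_gate):
    out = np.zeros((X.shape[0], W_homo.shape[1]))
    for u, v in edges:
        # Gate computed from the concatenated pair features (assumed form).
        g = 1.0 / (1.0 + np.exp(-np.concatenate([X[u], X[v]]) @ w_gate))
        out[u] += g * (X[v] @ W_homo) + (1 - g) * (X[v] @ W_hetero)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
edges = [(0, 1), (1, 0), (2, 3), (3, 2)]
out = bi_kernel_layer(X, edges, rng.normal(size=(3, 2)),
                      rng.normal(size=(3, 2)), rng.normal(size=(6,)))
print(out.shape)  # (4, 2)
```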

When Does A Spectral Graph Neural Network Fail in Node Classification?

TLDR
It is shown that graph filters with low response efficiency on label difference are prone to fail, and a provably better strategy for filter design, using data-driven filter banks, is derived from the theoretical analysis.

Graph convolutional and attention models for entity classification in multilayer networks

TLDR
This work instantiates a GNN framework for representation learning and semi-supervised classification in multilayer networks with attributed entities, an arbitrary number of layers, and intra-layer and inter-layer connections between nodes, and shows how these methods can take advantage of real attributes on the entities.

Simplifying approach to Node Classification in Graph Neural Networks

References

Showing 1-10 of 47 references

Is Heterophily A Real Nightmare For Graph Neural Networks To Do Node Classification?

TLDR
The Adaptive Channel Mixing (ACM) framework is proposed to adaptively exploit aggregation, diversification, and identity channels in each GNN layer to address harmful heterophily, and the ACM-augmented baselines are validated on 10 real-world node classification tasks.
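
A sketch of the three-channel idea the TLDR describes: a low-pass (aggregation) channel, a high-pass (diversification) channel, and an identity channel, mixed per node. In ACM the mixing weights are learned; here they are passed in directly, and the row-stochastic normalization is chosen for brevity:

```python
import numpy as np

def three_channel_mix(X, A, alpha):
    """alpha: (n, 3) row-stochastic per-node channel weights."""
    A_hat = A / np.maximum(A.sum(axis=1), 1)[:, None]  # row-normalized
    low = A_hat @ X        # aggregation channel (smooths features)
    high = X - low         # diversification channel (sharpens them)
    return alpha[:, 0:1] * low + alpha[:, 1:2] * high + alpha[:, 2:3] * X

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
X = np.eye(3)
alpha = np.full((3, 3), 1 / 3)  # equal weight per channel
print(three_channel_mix(X, A, alpha))
```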

Graph Neural Networks with Heterophily

TLDR
The proposed framework incorporates an interpretable compatibility matrix for modeling the heterophily or homophily level in the graph, which can be learned in an end-to-end fashion, enabling it to go beyond the assumption of strong homophily.
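
To illustrate the compatibility-matrix idea (not the paper's exact update rule): H[i, j] encodes how likely class-i nodes are to link to class-j nodes, so under heterophily most of H's mass sits off the diagonal, and neighbor beliefs are translated through H before being combined:

```python
import numpy as np

def compatibility_propagate(priors, A, H, steps=2):
    """Propagate class beliefs through a compatibility matrix H
    (sketch; assumes nonnegative priors and H)."""
    A_hat = A / np.maximum(A.sum(axis=1), 1)[:, None]
    beliefs = priors.copy()
    for _ in range(steps):
        beliefs = priors + A_hat @ beliefs @ H  # neighbor beliefs, via H
        beliefs /= np.maximum(beliefs.sum(axis=1, keepdims=True), 1e-12)
    return beliefs
```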

Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs

TLDR
This work identifies a set of key designs (ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations) that boost learning from the graph structure under heterophily, and combines them into a graph neural network, H2GCN, which is used as the base method to empirically evaluate the effectiveness of the identified designs.
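
The three designs named in the TLDR admit a compact sketch: the ego embedding is concatenated with (rather than averaged into) 1-hop and strict 2-hop aggregates, and all intermediate rounds are concatenated at the end. Learned weights and nonlinearities are omitted, and A is assumed to have no self-loops:

```python
import numpy as np

def separated_embed(X, A):
    A1 = A / np.maximum(A.sum(axis=1), 1)[:, None]   # 1-hop mean
    mask2 = ((A @ A) > 0) & (A == 0)                 # strict 2-hop
    np.fill_diagonal(mask2, False)
    A2 = mask2 / np.maximum(mask2.sum(axis=1), 1)[:, None]
    r1 = np.hstack([A1 @ X, A2 @ X])   # round 1: ego kept separate
    r2 = np.hstack([A1 @ r1, A2 @ r1])
    return np.hstack([X, r1, r2])      # combine intermediate rounds
```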

Interpreting and Unifying Graph Neural Networks with An Optimization Framework

TLDR
A surprising connection is established between different propagation mechanisms and a unified optimization problem, showing that despite the proliferation of various GNNs, their proposed propagation mechanisms are in fact optimal solutions optimizing a feature-fitting function over a wide class of graph kernels with a graph regularization term.

How Powerful are Graph Neural Networks?

TLDR
This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.
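
The maximally expressive architecture that paper proposes (GIN) rests on injective sum aggregation; roughly, per layer (the plain weight matrices here stand in for a trained MLP):

```python
import numpy as np

def gin_layer(X, A, eps, W1, W2):
    """Sum neighbors, weight the ego feature by (1 + eps), apply an
    MLP. Sum, unlike mean or max, preserves multiset information."""
    h = (1 + eps) * X + A @ X
    return np.maximum(h @ W1, 0) @ W2  # two-layer MLP with ReLU
```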

Adaptive Universal Generalized PageRank Graph Neural Network

TLDR
This work introduces a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic.
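
A sketch of Generalized PageRank propagation: the output is a weighted sum of successive propagation steps, Z = Σ_k γ_k Â^k H. In GPR-GNN the γ_k are learned end-to-end (so they can realize low-pass or high-pass filters as homophilous or heterophilous labels demand); here they are fixed inputs:

```python
import numpy as np

def gpr_propagate(H, A, gamma):
    d = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1))
    A_hat = d[:, None] * A * d[None, :]   # symmetric normalization
    Z, Hk = gamma[0] * H, H
    for g in gamma[1:]:
        Hk = A_hat @ Hk                   # one more propagation step
        Z = Z + g * Hk
    return Z
```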

Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks

TLDR
A novel Self-Supervised-Learning Degree-Specific GCN (SL-DSGCN) is developed that not only outperforms state-of-the-art self-training/self-supervised-learning GCN methods, but also improves GCN accuracy dramatically for low-degree nodes.

Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks

TLDR
It is shown that GNNs have the same expressiveness as the Weisfeiler-Leman graph isomorphism heuristic in terms of distinguishing non-isomorphic (sub-)graphs, and a generalization of GNNs is proposed, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account.
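
The Weisfeiler-Leman heuristic referenced here is easy to state in code: each round, a node's color is re-derived from its own color plus the multiset of its neighbors' colors, and graphs are compared by their color histograms. The classic failure case below (two triangles vs. a 6-cycle) is indistinguishable to 1-WL, and hence to standard message-passing GNNs:

```python
def wl_refine(adj_list, rounds=3):
    """1-dimensional Weisfeiler-Leman color refinement."""
    colors = [0] * len(adj_list)
    for _ in range(rounds):
        sigs = [(colors[v], tuple(sorted(colors[u] for u in adj_list[v])))
                for v in range(len(adj_list))]
        palette = {s: i for i, s in enumerate(sorted(set(sigs)))}
        colors = [palette[s] for s in sigs]  # compress to integer colors
    return colors

two_triangles = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]
six_cycle = [[1, 5], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]
print(wl_refine(two_triangles) == wl_refine(six_cycle))  # True
```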

Graph Neural Networks Exponentially Lose Expressive Power for Node Classification

TLDR
The theory relates the expressive power of GCNs to the topological information of the underlying graphs inherent in the graph spectra, and provides a principled guideline for weight normalization of graph neural networks.

Grale: Designing Networks for Graph Learning

TLDR
This work presents Grale, a scalable method developed to address the problem of graph design for graphs with billions of nodes, which operates by fusing together different measures of (potentially weak) similarity to create a graph which exhibits high task-specific homophily between its nodes.
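
As a toy illustration of the graph-design task (Grale's actual method learns the fusion and scales via hashing; none of this is its real pipeline): fuse several weak pairwise similarity matrices into one score, then sparsify by keeping each node's top-k neighbors:

```python
import numpy as np

def fuse_and_sparsify(sims, weights, k=2):
    """sims: list of (n, n) float similarity matrices; weights: floats."""
    fused = sum(w * s for w, s in zip(weights, sims))
    np.fill_diagonal(fused, -np.inf)            # forbid self-edges
    edges = set()
    for u in range(fused.shape[0]):
        for v in np.argsort(fused[u])[-k:]:     # top-k most similar
            edges.add((min(u, int(v)), max(u, int(v))))
    return sorted(edges)
```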