Corpus ID: 221517140

Distance Encoding: Design Provably More Powerful Graph Neural Networks for Structural Representation Learning

@article{Li2020DistanceE,
  title={Distance Encoding: Design Provably More Powerful Graph Neural Networks for Structural Representation Learning},
  author={Pan Li and Yanbang Wang and Hongwei Wang and Jure Leskovec},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.00142}
}
Learning structural representations of node sets from graph-structured data is crucial for applications ranging from node-role discovery to link prediction and molecule classification. Graph Neural Networks (GNNs) have achieved great success in structural representation learning. However, most GNNs are limited by the 1-Weisfeiler-Lehman (WL) test and thus may generate identical representations for structures and graphs that are actually different. More powerful GNNs, proposed recently by… 
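
The core idea of the paper can be illustrated with a minimal sketch: augment each node's input features with its shortest-path distance to the target node set, so that a standard GNN can distinguish nodes the 1-WL test would conflate. The function name and the one-hot encoding below are illustrative choices; the paper also considers other distance measures such as landing probabilities of random walks.

```python
from collections import deque

def distance_encoding(adj, target_set, max_dist=3):
    """Illustrative distance encoding: shortest-path distance from each node
    to the target node set, capped at max_dist and one-hot encoded as an
    extra feature vector.  `adj` is an adjacency list {node: [neighbors]}."""
    dist = {v: max_dist for v in adj}          # cap unreached nodes at max_dist
    queue = deque()
    for s in target_set:                       # multi-source BFS from the set
        dist[s] = 0
        queue.append(s)
    while queue:
        u = queue.popleft()
        if dist[u] + 1 >= max_dist:
            continue
        for w in adj[u]:
            if dist[w] > dist[u] + 1:
                dist[w] = dist[u] + 1
                queue.append(w)

    def one_hot(d):
        vec = [0.0] * (max_dist + 1)
        vec[d] = 1.0
        return vec

    return {v: one_hot(dist[v]) for v in adj}

# 4-cycle 0-1-2-3; encoding distances to the target set {0}
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
feats = distance_encoding(adj, {0})
```

These vectors would be concatenated with the raw node features before message passing, which is what lets the resulting representations depend on the node set being encoded.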


Heterogeneous Graph Neural Network with Distance Encoding

A novel distance-encoding-based heterogeneous graph neural network (called DHN) is proposed, which learns more expressive heterogeneous graph representations for downstream tasks, relies only on the graph structure, and preserves the inductive ability of HGNNs.

Simplifying Node Classification on Heterophilous Graphs with Compatible Label Propagation

This paper carefully designs a combination of a base predictor with a label propagation (LP) algorithm that enjoys a closed-form solution as well as convergence guarantees, and shows that this approach achieves leading performance on graphs with various levels of homophily.

Hierarchical Message-Passing Graph Neural Networks

Empirical experiments show that HC-GNN can outperform state-of-the-art GNN models in network analysis tasks, including node classification, link prediction, and community detection; the model analysis further demonstrates HC-GNN's robustness to graph sparsity and its flexibility in incorporating different GNN encoders.

Distance-Enhanced Graph Neural Network for Link Prediction

An anchor-based distance is proposed that brings significant improvement for link prediction with few additional parameters, achieving state-of-the-art results on the drug-drug interaction and protein-protein association tasks of OGB.

Unsupervised Heterophilous Network Embedding via r-Ego Network Discrimination

The first empirical study on the impact of the homophily ratio on the performance of existing unsupervised network embedding (NE) methods is introduced, revealing their limitations, and a SELf-supErvised Network Embedding (Selene) framework is developed for learning useful node representations for both homophilous and heterophilous networks.

MRAInf: Multilayer Relation Attention based Social Influence Prediction Net with Local Stimulation

An enhanced node representation (ENR) is designed that describes the original node structure vector through three-layer-neighbor adjacency relations, providing more detail for attention in GAT; the approach outperforms existing comparison methods in terms of classification performance and predictive accuracy.

References

Showing 1-10 of 72 references

How Powerful are Graph Neural Networks?

This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.

Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks

It is shown that GNNs have the same expressiveness as the Weisfeiler-Leman graph isomorphism heuristic in terms of distinguishing non-isomorphic (sub-)graphs, and a generalization of GNNs is proposed, the so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account.
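
The 1-WL heuristic referenced throughout these works can be sketched as iterative color refinement: each node's color is repeatedly re-hashed together with the multiset of its neighbors' colors. The sketch below (function names are illustrative) shows the classic failure case that motivates distance encoding: a 6-cycle and two disjoint triangles receive identical color histograms, although the graphs are not isomorphic.

```python
def wl_colors(adj, rounds=3):
    """1-WL color refinement on an adjacency list {node: [neighbors]}.
    Returns the sorted final color histogram; two graphs with different
    histograms are certainly non-isomorphic, but equal histograms prove
    nothing (the test can fail)."""
    color = {v: 0 for v in adj}
    for _ in range(rounds):
        # signature = own color + multiset of neighbor colors
        sig = {v: (color[v], tuple(sorted(color[w] for w in adj[v])))
               for v in adj}
        # relabel signatures with small integers (an injective "hash")
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: palette[sig[v]] for v in adj}
    return sorted(color.values())

# Hexagon C6 vs. two disjoint triangles: non-isomorphic, both 2-regular.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
```

Because every node in both graphs has degree 2, refinement never splits the color classes, so any GNN bounded by 1-WL embeds the two graphs identically.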

Position-aware Graph Neural Networks

Position-aware Graph Neural Networks (P-GNNs) are proposed, a new class of GNNs for computing position-aware node embeddings that are inductive, scalable, and can incorporate node feature information.

Hierarchical Graph Representation Learning with Differentiable Pooling

DiffPool is proposed, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion.
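
The pooling step in DiffPool can be sketched in a few lines: a soft cluster-assignment matrix coarsens both the feature matrix and the adjacency. In the paper the assignment logits come from a GNN; here they are taken as given, and the function name is an illustrative choice.

```python
import numpy as np

def diffpool(A, X, S_logits):
    """One differentiable pooling step in the spirit of DiffPool.
    A: (n, n) adjacency, X: (n, d) node features,
    S_logits: (n, k) unnormalized cluster-assignment scores
    (assumed given here; DiffPool produces them with a GNN)."""
    S = np.exp(S_logits)
    S = S / S.sum(axis=1, keepdims=True)   # row-wise softmax: soft assignment
    X_pool = S.T @ X                       # (k, d) pooled cluster features
    A_pool = S.T @ A @ S                   # (k, k) pooled cluster adjacency
    return A_pool, X_pool
```

Because every operation is a dense matrix product, gradients flow through the assignment, which is what allows the hierarchy to be learned end-to-end.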

Can graph neural networks count substructures?

A local relational pooling approach inspired by Murphy et al. (2019) is proposed and demonstrated to be not only effective for substructure counting but also able to achieve competitive performance on real-world tasks.

Representation Learning on Graphs: Methods and Applications

A conceptual review of key advancements in representation learning on graphs is provided, covering matrix factorization-based methods, random-walk-based algorithms, and graph neural networks.

Provably Powerful Graph Networks

This paper proposes a simple model that interleaves applications of standard multilayer perceptrons (MLPs) applied to the feature dimension with matrix multiplication, and shows that a reduced 2-order network containing just a scaled identity operator, augmented with a single quadratic operation (matrix multiplication), has provable 3-WL expressive power.

An End-to-End Deep Learning Architecture for Graph Classification

This paper designs a localized graph convolution model and shows its connection with two graph kernels, and designs a novel SortPooling layer which sorts graph vertices in a consistent order so that traditional neural networks can be trained on the graphs.
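
The SortPooling idea can be sketched briefly: node feature rows are sorted by a canonical channel (in DGCNN, a continuous WL color) and truncated or zero-padded to a fixed size, giving every graph a tensor of identical shape for the downstream layers. The function name and the choice of sorting by the last channel follow the paper's convention, but the sketch is illustrative.

```python
import numpy as np

def sort_pooling(X, k):
    """SortPooling sketch: sort the rows of the (n, d) node-feature matrix X
    by the last channel in descending order, keep the top k rows, and
    zero-pad when the graph has fewer than k nodes."""
    order = np.argsort(-X[:, -1])          # descending by last feature channel
    X_sorted = X[order][:k]
    if X_sorted.shape[0] < k:
        pad = np.zeros((k - X_sorted.shape[0], X.shape[1]))
        X_sorted = np.vstack([X_sorted, pad])
    return X_sorted
```

The consistent ordering is what lets ordinary 1-D convolutions be applied afterwards, since row position now carries structural meaning.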

node2vec: Scalable Feature Learning for Networks

In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
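
The biased walk at the heart of node2vec can be sketched as a second-order random walk: the return parameter p and in-out parameter q reweight each candidate step according to its distance from the previously visited node. The function name and default values below are illustrative; the full method additionally precomputes alias tables and feeds the walks to a skip-gram model.

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=0.5, seed=0):
    """Sketch of the node2vec second-order biased random walk.
    adj: adjacency list {node: [neighbors]}.  Smaller q pushes the walk
    outward (DFS-like); smaller p discourages immediate backtracking."""
    rng = random.Random(seed)
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if len(walk) == 1:                   # first step: uniform choice
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:                    # distance 0 from prev: return
                weights.append(1.0 / p)
            elif x in adj[prev]:             # distance 1 from prev: BFS-like
                weights.append(1.0)
            else:                            # distance 2 from prev: DFS-like
                weights.append(1.0 / q)
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk
```

Tuning p and q interpolates between breadth-first exploration (capturing structural roles) and depth-first exploration (capturing community membership).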

Joint Adaptive Feature Smoothing and Topology Extraction via Generalized PageRank GNNs

GPR-GNN is the first known architecture that not only provably mitigates feature over-smoothing but also adaptively learns the weights of the GPR model to optimize topological information extraction.
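
The GPR propagation step can be sketched as a weighted sum of powers of the symmetrically normalized adjacency applied to the hidden features. In GPR-GNN the weights gamma_k are learned jointly with the feature extractor; here they are fixed inputs, and the function name is an illustrative choice.

```python
import numpy as np

def gpr_propagate(A, X, gammas):
    """Generalized PageRank propagation sketch: out = sum_k gamma_k * P^k X,
    where P is the symmetrically normalized adjacency with self-loops.
    gammas are fixed here; GPR-GNN learns them during training."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    P = D_inv_sqrt @ A_hat @ D_inv_sqrt            # normalized propagation matrix
    H, out = X.copy(), gammas[0] * X               # k = 0 term
    for g in gammas[1:]:                           # accumulate higher powers
        H = P @ H
        out = out + g * H
    return out
```

Because the gammas can take arbitrary signs and magnitudes, the learned filter can act as a low-pass or high-pass filter, which is how the architecture adapts to both homophilous and heterophilous graphs.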
...