# From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness

```bibtex
@article{Zhao2021FromST,
  title   = {From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness},
  author  = {Lingxiao Zhao and Wei Jin and Leman Akoglu and Neil Shah},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.03753}
}
```

Message Passing Neural Networks (MPNNs) are a common type of Graph Neural Network (GNN) in which each node's representation is computed recursively by aggregating representations ("messages") from its immediate neighbors, akin to a star-shaped pattern. MPNNs are appealing for being efficient and scalable; however, their expressiveness is upper-bounded by the first-order Weisfeiler-Leman isomorphism test (1-WL). In response, prior works propose highly expressive models at the cost of scalability…
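The star-shaped aggregation described above can be sketched in a few lines. This is an illustrative minimal round of message passing, not code from the paper: the sum aggregator, ReLU update, and all names here are assumptions chosen for clarity.

```python
import numpy as np

def message_passing_round(adj, h, w_self, w_neigh):
    """One MPNN round: each node aggregates messages from its immediate
    neighbors (the 'star' pattern), then applies a learned update.

    adj     : (n, n) 0/1 adjacency matrix
    h       : (n, d) node features
    w_self  : (d, d) weight applied to the node's own state
    w_neigh : (d, d) weight applied to the aggregated neighbor messages
    """
    messages = adj @ h  # row v sums the features of v's neighbors
    return np.maximum(h @ w_self + messages @ w_neigh, 0.0)  # ReLU update

# Toy example: a 4-cycle with 2-d features (weights are hand-picked here,
# not learned, purely to make the computation concrete).
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
h = np.eye(4)[:, :2]          # (4, 2) node features
w_self = np.eye(2)
w_neigh = 0.5 * np.eye(2)
h_next = message_passing_round(adj, h, w_self, w_neigh)
print(h_next.shape)           # (4, 2): one updated vector per node
```

Because every node sees only this fixed star neighborhood, iterating such rounds can distinguish at most what the 1-WL color-refinement procedure distinguishes, which is the bound the paper targets.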

## 25 Citations

### Ordered Subgraph Aggregation Networks

- Computer Science, ArXiv
- 2022

It is shown that increasing subgraph size always increases the expressive power, and a better understanding of their limitations is developed by relating them to the established k-WL hierarchy.

### Your Neighbors Are Communicating: Towards Powerful and Scalable Graph Neural Networks

- Computer Science, ArXiv
- 2022

This work proposes NC-GNN, a general and provably powerful GNN framework that preserves the scalability of the message-passing scheme, as a differentiable neural version of NC-1-WL.

### A Practical, Progressively-Expressive GNN

- Computer Science, ArXiv
- 2022

This work proposes the (k, c)(≤)-SETWL hierarchy, with greatly reduced complexity compared to k-WL, achieved by moving from k-tuples of nodes to sets with ≤ k nodes defined over ≤ c connected components in the induced original graph.

### Beyond 1-WL with Local Ego-Network Encodings

- Computer Science, ArXiv
- 2022

IGEL is introduced, a preprocessing step that produces features augmenting node representations by encoding ego-networks into sparse vectors, enriching Message Passing (MP) Graph Neural Networks (GNNs) beyond 1-WL expressivity.

### A Generalization of ViT/MLP-Mixer to Graphs

- Computer Science, ArXiv
- 2022

A new class of GNNs, called Graph MLP-Mixer, is introduced that holds three key properties: they capture long-range dependency and mitigate the issue of over-squashing; they offer better speed and memory, with complexity linear in the number of nodes and edges; and they show high expressivity in terms of graph isomorphism.

### SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks

- Computer Science, ICML
- 2022

This work devises a class of universal, permutation-equivariant graph networks which, unlike previous architectures, offer fine-grained control between expressivity and scalability and adapt to the sparsity of the graph.

### Going Deeper into Permutation-Sensitive Graph Neural Networks

- Computer Science, ICML
- 2022

This work devises an efficient permutation-sensitive aggregation mechanism via permutation groups, capturing pairwise correlations between neighboring nodes, and proves that this approach is strictly more powerful than the 2-dimensional Weisfeiler-Lehman (2-WL) graph isomorphism test and not less powerful than the 3-WL test.

### Geodesic Graph Neural Network for Efficient Graph Representation Learning

- Computer Science, ArXiv
- 2022

It is theoretically proved that GDGNN is more powerful than plain GNNs, and experimental results are presented to show that GDGNN achieves highly competitive performance with state-of-the-art GNN models on link prediction and graph classification tasks while taking less time.

### Subgraph Permutation Equivariant Networks

- Computer Science
- 2021

In this work we develop a new method, named Sub-graph Permutation Equivariant Networks (SPEN), which provides a framework for building graph neural networks that operate on sub-graphs, while using…

### Graph Condensation for Graph Neural Networks

- Computer Science, ICLR
- 2022

The problem of graph condensation for graph neural networks (GNNs) is proposed and studied, aiming to condense the large, original graph into a small, synthetic, and highly informative graph, such that GNNs trained on the small graph and the large graph have comparable performance.

## References

Showing 1-10 of 68 references.

### Equivariant Subgraph Aggregation Networks

- Computer Science, ICLR
- 2022

A novel framework to represent each graph as a set of subgraphs derived by some predefined policy, and to process it using a suitable equivariant architecture, and it is proved that this approach increases the expressive power of both MPNNs and more expressive architectures.

### Identity-aware Graph Neural Networks

- Computer Science, AAAI
- 2021

This work develops a class of message passing GNNs, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test, and proposes a simplified but faster version of ID-GNN that injects node identity information as augmented node features.

### Graph Neural Networks with Local Graph Parameters

- Computer Science, NeurIPS
- 2021

This work describes local-graph-parameter-enabled GNNs as a framework for studying "higher-order" GNNs, and precisely characterizes their distinguishing power in terms of a variant of the WL test and of the graph structural properties that they can take into account.

### The Surprising Power of Graph Neural Networks with Random Node Initialization

- Computer Science, IJCAI
- 2021

This paper proves that GNNs with RNI are universal, a first such result for GNNs not relying on computationally demanding higher-order properties, and empirically analyzes the effect of RNI on GNNs, with findings that support the superior performance of GNNs with RNI over standard GNNs.

### Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results

- Computer Science, ArXiv
- 2020

This work introduces and analyzes a new recursive pooling technique of local neighborhoods that allows different tradeoffs of computational cost and expressive power, and proves that the proposed algorithm can greatly reduce computational complexity compared to the existing higher-order $k$-GNN and Local Relational Pooling (LRP) networks.

### Ego-GNNs: Exploiting Ego Structures in Graph Neural Networks

- Computer Science, ICASSP 2021 (IEEE International Conference on Acoustics, Speech and Signal Processing)
- 2021

This work proposes to augment the GNN message-passing operations with information defined on ego graphs (i.e., the induced subgraph surrounding each node) and shows that Ego-GNNs are provably more powerful than standard message-passing GNNs.
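The ego-graph construction this blurb refers to is easy to make concrete. Below is an illustrative sketch (not the authors' implementation): given an adjacency list, it extracts the 1-hop ego network of a node, i.e., the subgraph induced on the node and its immediate neighbors, which an ego-based GNN would then process instead of the bare star.

```python
def ego_network(adj_list, v):
    """Return the node set and induced edges of v's 1-hop ego graph.

    adj_list : dict mapping each node to a list of its neighbors
    v        : the ego (center) node
    """
    nodes = {v} | set(adj_list[v])               # ego plus its neighbors
    # Keep every edge whose both endpoints fall inside the ego node set;
    # u < w deduplicates undirected edges.
    edges = [(u, w) for u in nodes for w in adj_list[u]
             if w in nodes and u < w]
    return nodes, edges

# Toy 4-cycle 0-1-2-3-0: node 0's ego graph contains 0, 1, 3 and the two
# edges touching 0 (1 and 3 are not adjacent, so no edge between them).
adj_list = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
nodes, edges = ego_network(adj_list, 0)
print(sorted(nodes), sorted(edges))  # [0, 1, 3] [(0, 1), (0, 3)]
```

Unlike the star pattern, the induced ego graph keeps edges among the neighbors themselves, which is exactly the extra local structure (e.g., triangles) that lets such models exceed 1-WL.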

### Building powerful and equivariant graph neural networks with structural message-passing

- Computer Science, NeurIPS
- 2020

This work propagates unique node identifiers, in the form of a one-hot encoding, in order to learn a local context matrix around each node, which captures rich local information about both features and topology and can be pooled to obtain node representations.

### Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks

- Computer Science, AAAI
- 2019

It is shown that GNNs have the same expressiveness as the Weisfeiler-Leman graph isomorphism heuristic in terms of distinguishing non-isomorphic (sub-)graphs, and a generalization of GNNs is proposed, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account.

### Nested Graph Neural Networks

- Computer Science, NeurIPS
- 2021

NGNN is a plug-and-play framework that can be combined with various base GNNs, and it is proved that NGNN can discriminate almost all r-regular graphs, where 1-WL always fails.