Corpus ID: 239015811

Beltrami Flow and Neural Diffusion on Graphs

@inproceedings{Chamberlain2021BeltramiFA,
  title={Beltrami Flow and Neural Diffusion on Graphs},
  author={Benjamin Paul Chamberlain and James R. Rowbottom and Davide Eynard and Francesco Di Giovanni and Xiaowen Dong and Michael M. Bronstein},
  booktitle={NeurIPS},
  year={2021}
}
We propose a novel class of graph neural networks based on the discretised Beltrami flow, a non-Euclidean diffusion PDE. In our model, node features are supplemented with positional encodings derived from the graph topology and jointly evolved by the Beltrami flow, producing continuous feature learning and topology evolution simultaneously. The resulting model generalises many popular graph neural networks and achieves state-of-the-art results on several benchmarks.
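
For intuition, the sketch below (a minimal NumPy illustration under assumed choices, not the authors' implementation) performs one explicit-Euler step of an attention-weighted graph diffusion in which node features and positional encodings are stacked into one state matrix and evolved jointly; the scaled dot-product attention, the step size tau and the toy dimensions are assumptions made for illustration only.

import numpy as np

def diffusion_step(Z, A_mask, tau=0.1):
    # Z      : (n, d) joint node state = [features | positional encodings]
    # A_mask : (n, n) adjacency with self-loops (1 where diffusion may act)
    # tau    : explicit-Euler step size
    # Edge weights from the evolving state (illustrative scaled dot-product
    # attention, masked to existing edges and row-normalised).
    scores = Z @ Z.T / np.sqrt(Z.shape[1])
    scores = np.where(A_mask > 0, scores, -np.inf)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = np.where(A_mask > 0, A, 0.0)
    A = A / A.sum(axis=1, keepdims=True)
    # One Euler step of dZ/dt = (A(Z) - I) Z: features and positions co-evolve.
    return Z + tau * (A @ Z - Z)

# Toy usage: 4-node path graph, 3 feature dimensions + 2 positional dimensions.
rng = np.random.default_rng(0)
A_mask = np.array([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]) + np.eye(4)
Z = np.concatenate([rng.normal(size=(4, 3)), rng.normal(size=(4, 2))], axis=1)
for _ in range(10):
    Z = diffusion_step(Z, A_mask)
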

Citations

Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs
Cellular sheaves equip graphs with a “geometrical” structure by assigning vector spaces and linear maps to nodes and edges; Graph Neural Networks (GNNs) implicitly assume a graph with a trivial underlying sheaf.
Heterogeneous manifolds for curvature-aware graph embedding
TLDR: By adding a single extra radial dimension to any existing homogeneous model, the proposed heterogeneous manifolds account for heterogeneous curvature distributions on graphs as well as pairwise distances, and show potential for better preserving high-order structures and for generating heterogeneous random graphs.
Graph-Coupled Oscillator Networks
TLDR: The oversmoothing problem commonly encountered in GNNs is related to the stability of steady states of the underlying ODE; it is shown that zero-Dirichlet-energy steady states are not stable for the proposed ODEs, demonstrating that the framework mitigates the oversmoothing problem.
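
As a schematic of the oscillator viewpoint (a rough sketch only, not the paper's exact scheme), a damped second-order system X'' = sigma(coupling(X)) - gamma*X - alpha*X' can be integrated on a graph with a simple explicit scheme; the tanh coupling, the row-normalised adjacency and all coefficients below are assumptions.

import numpy as np

def oscillator_step(X, Y, A_hat, W, dt=0.1, gamma=1.0, alpha=1.0):
    # X: (n, d) node states, Y: (n, d) node velocities,
    # A_hat: (n, n) row-normalised adjacency, W: (d, d) weight matrix.
    # Explicit step of X'' = tanh(A_hat X W) - gamma*X - alpha*X' (illustrative coupling).
    Y = Y + dt * (np.tanh(A_hat @ X @ W) - gamma * X - alpha * Y)
    X = X + dt * Y
    return X, Y

rng = np.random.default_rng(0)
n, d = 5, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
A_hat = A / np.maximum(A.sum(1, keepdims=True), 1.0)   # row-normalise
X, Y = rng.normal(size=(n, d)), np.zeros((n, d))
W = rng.normal(size=(d, d)) / np.sqrt(d)
for _ in range(20):
    X, Y = oscillator_step(X, Y, A_hat, W)
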

References

Showing 1-10 of 104 references
GRAND: Graph Neural Diffusion
We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators.
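
A toy way to see the "GNN layer as a discretisation step" reading (an illustrative sketch under simplifying assumptions, not the GRAND code): with step size tau, the residual update X + tau*(A_hat X - X) interpolates between an identity layer (tau = 0) and plain neighbourhood averaging (tau = 1).

import numpy as np

def euler_diffusion_layer(X, A_hat, tau):
    # Explicit-Euler step of dX/dt = (A_hat - I) X on a graph.
    return X + tau * (A_hat @ X - X)

A_hat = np.array([[0.5, 0.5, 0.0],
                  [1/3, 1/3, 1/3],
                  [0.0, 0.5, 0.5]])   # row-stochastic adjacency with self-loops
X = np.eye(3)
print(np.allclose(euler_diffusion_layer(X, A_hat, 1.0), A_hat @ X))  # True: pure averaging
print(np.allclose(euler_diffusion_layer(X, A_hat, 0.0), X))          # True: identity layer
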
Hyperbolic Graph Neural Networks
TLDR: A novel GNN architecture for learning representations on Riemannian manifolds with differentiable exponential and logarithmic maps is proposed, together with a scalable algorithm for modelling the structural properties of graphs, comparing Euclidean and hyperbolic geometry.
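
For background, the exponential and logarithmic maps at the origin of the Poincaré ball (curvature -1) take only a few lines; this is a generic hyperbolic-geometry sketch rather than the cited architecture, and the epsilon clipping is an implementation assumption.

import numpy as np

def exp0(v, eps=1e-9):
    # Exponential map at the origin of the Poincare ball (curvature -1):
    # sends a tangent vector v to a point inside the open unit ball.
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return np.tanh(norm) * v / np.maximum(norm, eps)

def log0(y, eps=1e-9):
    # Logarithmic map at the origin: inverse of exp0.
    norm = np.linalg.norm(y, axis=-1, keepdims=True)
    return np.arctanh(np.clip(norm, 0.0, 1.0 - eps)) * y / np.maximum(norm, eps)

v = np.array([[0.3, -0.2], [1.5, 0.7]])
print(np.allclose(log0(exp0(v)), v, atol=1e-6))   # round trip recovers the tangent vectors
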
Revisiting Graph Neural Networks: All We Have is Low-Pass Filters
TLDR: The results indicate that graph neural networks perform only low-pass filtering on feature vectors and do not have a non-linear manifold-learning property; insights for GCN-based graph neural network design are proposed accordingly.
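
The low-pass claim can be checked numerically: one propagation with the self-loop-augmented, symmetrically normalised adjacency has spectral response 1 - lambda on the eigenvalues lambda of the corresponding normalised Laplacian, so repeated propagation damps all non-constant graph frequencies. The sketch below is an illustration of this fact, not the paper's code.

import numpy as np

rng = np.random.default_rng(0)
n = 8
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)

A_tilde = A + np.eye(n)                          # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))
P = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]   # GCN propagation matrix
L = np.eye(n) - P                                # augmented normalised Laplacian

lam = np.linalg.eigvalsh(L)                      # graph frequencies, all in [0, 2)
response = 1.0 - lam                             # spectral response of one propagation
# |1 - lam| < 1 for every non-zero frequency of the self-loop-augmented graph,
# so after k propagations those components shrink like |1 - lam|**k, while the
# lam = 0 components (constant on each connected part) pass through unchanged.
print(np.round(np.abs(response), 3))
print(np.round(np.abs(response) ** 4, 3))
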
Diffusion Improves Graph Learning
TLDR: This work removes the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: graph diffusion convolution (GDC), which leverages generalized graph diffusion and alleviates the problem of noisy and often arbitrarily defined edges in real graphs.
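
One concrete instance is the personalised-PageRank diffusion in closed form, S = alpha * (I - (1 - alpha) * T)^(-1) for a transition matrix T, followed by per-column sparsification; the dense solve, the teleport probability and the top-k rule below are simplifying assumptions, not the reference implementation.

import numpy as np

def ppr_diffusion(A, alpha=0.15, k=4):
    # A: (n, n) symmetric binary adjacency; alpha: teleport probability;
    # k: keep the k largest diffusion weights per column, then renormalise.
    n = A.shape[0]
    T = A / np.maximum(A.sum(1, keepdims=True), 1.0)              # row-stochastic transitions
    S = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * T.T)    # closed-form PPR
    thresh = -np.sort(-S, axis=0)[k - 1]                          # k-th largest per column
    S = np.where(S >= thresh, S, 0.0)                             # sparsify the diffusion matrix
    return S / S.sum(0, keepdims=True)

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
S = ppr_diffusion(A)
X = np.eye(5)            # toy node features
X_out = S @ X            # one graph diffusion convolution step
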
Learning Discrete Structures for Graph Neural Networks
TLDR: This work proposes to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph.
Semi-Supervised Classification with Graph Convolutional Networks
TLDR: A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs, which outperforms related methods by a significant margin.
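
Concretely, the propagation rule is H_next = ReLU(D~^(-1/2) (A + I) D~^(-1/2) H W); a minimal NumPy forward pass with made-up sizes and random weights (illustrative only, not the reference code):

import numpy as np

def gcn_layer(H, A, W):
    # One GCN layer: add self-loops, symmetrically normalise, propagate, ReLU.
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))
    A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy 3-node path graph
H = rng.normal(size=(3, 4))                             # input node features
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))
out = gcn_layer(gcn_layer(H, A, W1), A, W2)  # two-layer forward pass (a softmax
                                             # would normally follow for classification)
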
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
TLDR: This work presents a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs.
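
The key trick is the Chebyshev recursion T_0(x) = 1, T_1(x) = x, T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x) applied to a rescaled Laplacian, so that filtering needs only sparse matrix-vector products; the filter order and coefficients below are illustrative assumptions (in practice they are learned).

import numpy as np

def cheb_conv(X, L, theta, lam_max=2.0):
    # Chebyshev graph convolution of order K = len(theta) - 1.
    # X: (n, d) node features, L: (n, n) normalised Laplacian,
    # theta: (K+1,) filter coefficients (would normally be learned),
    # lam_max: upper bound on L's spectrum (2.0 for the normalised Laplacian).
    L_s = 2.0 * L / lam_max - np.eye(L.shape[0])   # rescale spectrum into [-1, 1]
    Tx = [X, L_s @ X]                              # T_0(L_s) X and T_1(L_s) X
    for _ in range(2, len(theta)):
        Tx.append(2.0 * L_s @ Tx[-1] - Tx[-2])     # Chebyshev recursion
    return sum(t * Tk for t, Tk in zip(theta, Tx))

A = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], float)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(1))
L = np.eye(4) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]   # normalised Laplacian
X = np.eye(4)                                                   # toy one-hot node features
Y = cheb_conv(X, L, theta=np.array([0.5, 0.3, 0.2]))            # order-2, 2-hop localised filter
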
Discrete and Continuous Deep Residual Learning Over Graphs
TLDR: This paper shows how both discrete and continuous residual layers allow for more robust training, applies and analyses the behaviour of these techniques, and gives pointers to how they can be useful in other domains by allowing more predictable behaviour under dynamic computation times.
CayleyNets: Graph Convolutional Neural Networks With Complex Rational Spectral Filters
TLDR: A new spectral-domain convolutional architecture for deep learning on graphs, based on a new class of parametric rational complex functions (Cayley polynomials) that allow spectral filters specialising on frequency bands of interest to be computed efficiently.
LanczosNet: Multi-Scale Deep Graph Convolutional Networks
TLDR: The Lanczos network (LanczosNet) is proposed, which uses the Lanczos algorithm to construct low-rank approximations of the graph Laplacian for graph convolution and facilitates both graph kernel learning and learning node embeddings.
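
At its core is the Lanczos iteration, which builds an orthonormal basis Q and a small tridiagonal matrix T with Q^T L Q = T, so that eigenpairs of T give a low-rank surrogate for the Laplacian; the sketch below (with full reorthogonalisation, a simplification of the standard algorithm) illustrates only this building block, not the network itself.

import numpy as np

def lanczos(L, m, seed=0):
    # m-step Lanczos iteration on a symmetric matrix L. Returns Q (n, m) with
    # orthonormal columns and a tridiagonal T (m, m) with Q.T @ L @ Q = T;
    # eigenpairs of T approximate those of L.
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    for j in range(m):
        Q[:, j] = q
        w = L @ q
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalisation
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

# Low-rank approximation of a graph Laplacian from the small tridiagonal factor.
A = (np.random.default_rng(1).random((30, 30)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A
Q, T = lanczos(L, m=8)
evals, V = np.linalg.eigh(T)
L_approx = (Q @ V) @ np.diag(evals) @ (Q @ V).T     # rank-8 surrogate of L
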