Corpus ID: 236912752

PDE-GCN: Novel Architectures for Graph Neural Networks Motivated by Partial Differential Equations

@inproceedings{Eliasof2021PDEGCNNA,
  title={PDE-GCN: Novel Architectures for Graph Neural Networks Motivated by Partial Differential Equations},
  author={Moshe Eliasof and Eldad Haber and Eran Treister},
  booktitle={NeurIPS},
  year={2021}
}
Graph neural networks are increasingly becoming the go-to approach in various fields such as computer vision, computational biology and chemistry, where data are naturally explained by graphs. However, unlike traditional convolutional neural networks, deep graph networks do not necessarily yield better performance than shallow graph networks. This behavior usually stems from the over-smoothing phenomenon. In this work, we propose a family of architectures to control this behavior by design. Our… 
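
To make the PDE motivation concrete, here is a minimal sketch (in PyTorch, and not the authors' exact PDE-GCN layers) contrasting a diffusion-type update, which progressively smooths node features and therefore over-smooths with depth, with a hyperbolic (wave-like) update, which roughly preserves feature energy. The graph, step size, and feature dimensions below are illustrative assumptions.

    import torch

    def normalized_laplacian(edge_index, num_nodes):
        """Dense symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
        A = torch.zeros(num_nodes, num_nodes)
        A[edge_index[0], edge_index[1]] = 1.0
        A = torch.maximum(A, A.t())                # make the graph undirected
        d = A.sum(dim=1).clamp(min=1.0)
        d_inv_sqrt = torch.diag(d.pow(-0.5))
        return torch.eye(num_nodes) - d_inv_sqrt @ A @ d_inv_sqrt

    def diffusion_step(x, L, h=0.1):
        """Explicit Euler step of dx/dt = -L x; repeated steps over-smooth."""
        return x - h * (L @ x)

    def hyperbolic_step(x, x_prev, L, h=0.1):
        """Leapfrog step of d^2x/dt^2 = -L x; oscillatory rather than smoothing."""
        return 2 * x - x_prev - (h ** 2) * (L @ x), x

    # Tiny example: a 4-node path graph with 2-dimensional node features.
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
    L = normalized_laplacian(edge_index, num_nodes=4)
    x0 = torch.randn(4, 2)

    x_diff = x0.clone()
    x_wave, x_wave_prev = x0.clone(), x0.clone()
    for _ in range(50):
        x_diff = diffusion_step(x_diff, L)
        x_wave, x_wave_prev = hyperbolic_step(x_wave, x_wave_prev, L)

    print("feature variance after diffusion:", x_diff.var(dim=0))
    print("feature variance after wave dynamics:", x_wave.var(dim=0))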

Quantized convolutional neural networks through the lens of partial differential equations

TLDR
It is demonstrated through several experiments that the property of forward stability preserves the action of a network under different quantization rates, and that at times, stability even aids in improving accuracy.

Graph Kernel Neural Networks

TLDR
This paper proposes to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain, and proposes an entirely structural model that does not require computing the embedding of the input graph.

Graph-Coupled Oscillator Networks

TLDR
It is proved that GraphCON mitigates the exploding and vanishing gradients problem to facilitate training of deep multi-layer GNNs and offers competitive performance with respect to the state-of-the-art on a variety of graph-based learning tasks.
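
The oscillator idea can be sketched as a second-order ODE on node features, X'' = sigma(coupling(X)) - gamma*X - alpha*X', stepped with a simple semi-implicit scheme. The plain mean-over-neighbors coupling and all coefficients below are illustrative assumptions rather than GraphCON's exact parameterization.

    import torch
    import torch.nn as nn

    class OscillatorLayer(nn.Module):
        def __init__(self, channels, dt=0.1, alpha=1.0, gamma=1.0):
            super().__init__()
            self.lin = nn.Linear(channels, channels)
            self.dt, self.alpha, self.gamma = dt, alpha, gamma

        def forward(self, x, y, adj):
            """One semi-implicit step; x are node features, y their velocities."""
            coupling = torch.tanh(self.lin(adj @ x))   # neighbor mixing (assumed form)
            y = (y + self.dt * (coupling - self.gamma * x)) / (1.0 + self.dt * self.alpha)
            x = x + self.dt * y
            return x, y

    # Row-normalized adjacency of a 4-node cycle graph.
    adj = torch.tensor([[0., 1., 0., 1.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [1., 0., 1., 0.]])
    adj = adj / adj.sum(dim=1, keepdim=True)

    layer = OscillatorLayer(channels=8)
    x, y = torch.randn(4, 8), torch.zeros(4, 8)
    for _ in range(20):                            # 20 stacked "layers" of dynamics
        x, y = layer(x, y, adj)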

pathGCN: Learning General Graph Spatial Operators from Paths

TLDR
By properly learning both the spatial and point-wise convolutions, phenomena like over-smoothing can be inherently avoided, and new state-of-the-art performance is achieved.

Graph Neural Networks as Gradient Flows

TLDR
This approach makes it possible to analyse the GNN evolution from a multi-particle perspective, as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric ‘channel-mixing’ matrix.
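
A minimal sketch of this gradient-flow view, assuming a Dirichlet-type energy E(X) = 0.5 * tr(X^T L X W) that is not necessarily the paper's exact choice: features evolve by the negative energy gradient, and the signs of the symmetric channel-mixing matrix's eigenvalues decide whether a channel direction is smoothed (attraction) or sharpened (repulsion).

    import torch

    torch.manual_seed(0)
    n, c = 6, 4

    # Symmetric channel-mixing matrix: eigenvalues > 0 act attractively (smoothing),
    # eigenvalues < 0 repulsively (sharpening) along the corresponding directions.
    W = torch.randn(c, c)
    W = 0.5 * (W + W.t())
    print("channel-mixing eigenvalues:", torch.linalg.eigvalsh(W))

    # Unnormalized Laplacian of a 6-node cycle graph.
    A = torch.roll(torch.eye(n), 1, dims=1) + torch.roll(torch.eye(n), -1, dims=1)
    L = torch.diag(A.sum(dim=1)) - A

    # Gradient flow X <- X - tau * dE/dX for E(X) = 0.5 * tr(X^T L X W).
    X = torch.randn(n, c)
    tau = 0.05
    for _ in range(20):
        X = X - tau * (L @ X @ W)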

Predicting parametric spatiotemporal dynamics by multi-resolution PDE structure-preserved deep learning

TLDR
A novel PDE-preserved neural network (PPNN) is proposed for rapidly predicting parametric spatiotemporal dynamics when the governing PDEs are (partially) known; this physics-inspired architecture design endows PPNN with excellent generalizability and long-term prediction accuracy compared to the state-of-the-art black-box ConvResNet baseline.

Optimization-Induced Graph Implicit Nonlinear Diffusion

TLDR
It is shown that the learned representation can be formalized as the minimizer of an explicit convex optimization objective and can embed prior properties into the equilibrium, and skip connections are introduced to promote training stability.

Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited

TLDR
ChebNetII is proposed, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon, and is scaled to the billion-scale graph papers100M, showing that spectral-based GNNs can achieve superior performance.

ACMP: Allen-Cahn Message Passing for Graph Neural Networks with Particle Phase Transition

TLDR
Experiments on various real node-classification datasets show that GNNs with ACMP can achieve state-of-the-art performance with no decay of Dirichlet energy, providing a deep GNN model that circumvents the common problem of over-smoothing.
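
The Allen-Cahn idea can be sketched as ordinary neighbor aggregation plus the reaction term delta * x * (1 - x^2), whose double-well potential pushes features toward +/-1 rather than letting them collapse to a common value. The plain mean aggregation and the coefficients below are illustrative assumptions, not the paper's exact message-passing scheme.

    import torch

    def acmp_step(x, adj_norm, alpha=0.8, delta=1.0, dt=0.1):
        diffusion = adj_norm @ x - x             # pull toward the neighbor average
        reaction = delta * x * (1.0 - x ** 2)    # Allen-Cahn double-well term
        return x + dt * (alpha * diffusion + reaction)

    adj = torch.tensor([[0., 1., 1., 0.],
                        [1., 0., 1., 0.],
                        [1., 1., 0., 1.],
                        [0., 0., 1., 0.]])
    adj_norm = adj / adj.sum(dim=1, keepdim=True)

    x = 0.1 * torch.randn(4, 3)
    for _ in range(200):
        x = acmp_step(x, adj_norm)
    print(x)    # features are driven toward the wells at +/-1 rather than decaying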

GRAND++: Graph Neural Diffusion with a Source Term

TLDR
The proposed GRAph Neural Diffusion with a source term (GRAND++) can provide accurate classification even when the model is trained with very limited labeled training data, showing a significant improvement over many existing graph neural networks.
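
A hedged sketch of diffusion with a source term in this spirit: an explicit-Euler diffusion step plus a term that keeps re-injecting the fixed features of a few labeled "source" nodes, so their information is not washed out with depth. The uniform source weighting below is an assumption, not the paper's exact construction.

    import torch

    def diffusion_with_source(x, adj_norm, x_source, source_mask, dt=0.1, beta=1.0):
        diffusion = adj_norm @ x - x
        # Source term: pull labeled nodes back toward their original features.
        source = beta * source_mask.unsqueeze(1) * (x_source - x)
        return x + dt * (diffusion + source)

    adj = torch.tensor([[0., 1., 0., 0.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [0., 0., 1., 0.]])
    adj_norm = adj / adj.sum(dim=1, keepdim=True)

    x0 = torch.randn(4, 2)
    source_mask = torch.tensor([1., 0., 0., 0.])   # only node 0 is "labeled"
    x = x0.clone()
    for _ in range(100):
        x = diffusion_with_source(x, adj_norm, x0, source_mask)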

References

Showing 1-10 of 63 references

Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations

TLDR
It is shown that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations, and a connection is established between stochastic control and noise injection in the training process, which helps to improve the generalization of the networks.

GRAND: Graph Neural Diffusion

We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In…

Simple and Deep Graph Convolutional Networks

TLDR
The GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, initial residual and identity mapping, which effectively relieve the problem of over-smoothing.
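
The two techniques named above can be sketched as follows; the row-normalized propagation matrix and the fixed beta are simplifying assumptions (in the paper the identity-mapping coefficient decays with layer depth).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNIILayer(nn.Module):
        def __init__(self, channels, alpha=0.1, beta=0.5):
            super().__init__()
            self.weight = nn.Linear(channels, channels, bias=False)
            self.alpha, self.beta = alpha, beta

        def forward(self, h, h0, p):
            # Initial residual: always mix in the first-layer features h0.
            support = (1 - self.alpha) * (p @ h) + self.alpha * h0
            # Identity mapping: blend the learned transform with the identity.
            return F.relu((1 - self.beta) * support + self.beta * self.weight(support))

    # Row-normalized adjacency with self-loops for a 4-node path graph.
    A = torch.tensor([[1., 1., 0., 0.],
                      [1., 1., 1., 0.],
                      [0., 1., 1., 1.],
                      [0., 0., 1., 1.]])
    P = A / A.sum(dim=1, keepdim=True)

    layer = GCNIILayer(channels=16)
    h0 = torch.randn(4, 16)
    h = h0
    for _ in range(32):          # deep stacking; h stays anchored to h0
        h = layer(h, h0, P)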

Deep Neural Networks Motivated by Partial Differential Equations

TLDR
A new PDE interpretation of a class of deep convolutional neural networks (CNN) that are commonly used to learn from speech, image, and video data is established and three new ResNet architectures are derived that fall into two new classes: parabolic and hyperbolic CNNs.

PDE-Net 2.0: Learning PDEs from Data with A Numeric-Symbolic Hybrid Deep Network

Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs

TLDR
This work generalizes the convolution operator from regular grids to arbitrary graphs while avoiding the spectral domain, which allows us to handle graphs of varying size and connectivity.

DiffGCN: Graph Convolutional Networks via Differential Operators and Algebraic Multigrid Pooling

TLDR
This work proposes novel approaches for graph convolution, pooling, and unpooling, taking inspiration from finite-element and algebraic multigrid frameworks, and forms a parameterized convolution kernel based on discretized differential operators, leveraging the graph mass, gradient, and Laplacian.

Stable Architectures for Deep Neural Networks

TLDR
This paper relates the exploding and vanishing gradient phenomenon to the stability of the discretized ODE and presents several strategies for stabilizing learning in very deep networks.
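
A sketch of the underlying stability argument, under the assumption of a plain ResNet-style step x <- x + h*tanh(Wx): the step is a forward-Euler discretization of an ODE, and constraining W to be antisymmetric places the relevant eigenvalues on the imaginary axis, so forward signals neither explode nor vanish. This is in the spirit of the strategies the summary refers to, not a verbatim reproduction of the paper's constructions.

    import torch

    torch.manual_seed(0)
    d, depth, h = 8, 200, 0.05

    W_raw = torch.randn(d, d)
    W_anti = W_raw - W_raw.t()          # antisymmetric: purely imaginary eigenvalues

    def forward_prop(W, x):
        """Depth-many forward-Euler steps of x' = tanh(W x)."""
        for _ in range(depth):
            x = x + h * torch.tanh(W @ x)
        return x.norm().item()

    x0 = torch.randn(d)
    print("generic W:      ", forward_prop(W_raw, x0.clone()))
    print("antisymmetric W:", forward_prop(W_anti, x0.clone()))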

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

TLDR
This work presents a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs.
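
The fast localized filtering rests on the Chebyshev three-term recurrence, sketched below with the Laplacian rescaled as L_hat = 2L/lambda_max - I; the fixed filter coefficients theta are placeholders for what the real model learns.

    import torch

    def cheb_filter(x, L_hat, theta):
        """Apply sum_k theta[k] * T_k(L_hat) @ x via the recurrence
        T_0 x = x, T_1 x = L_hat x, T_k x = 2 L_hat T_{k-1} x - T_{k-2} x."""
        t_prev, t_curr = x, L_hat @ x
        out = theta[0] * t_prev + theta[1] * t_curr
        for k in range(2, len(theta)):
            t_prev, t_curr = t_curr, 2 * (L_hat @ t_curr) - t_prev
            out = out + theta[k] * t_curr
        return out

    # Rescaled Laplacian of a 4-node path graph (spectrum mapped into [-1, 1]).
    A = torch.tensor([[0., 1., 0., 0.],
                      [1., 0., 1., 0.],
                      [0., 1., 0., 1.],
                      [0., 0., 1., 0.]])
    L = torch.diag(A.sum(dim=1)) - A
    L_hat = 2 * L / torch.linalg.eigvalsh(L).max() - torch.eye(4)

    x = torch.randn(4, 3)                      # 4 nodes, 3 feature channels
    theta = torch.tensor([0.5, 0.3, 0.2])      # order-2 filter: mixes up to 2-hop neighbors
    y = cheb_filter(x, L_hat, theta)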

Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs

TLDR
This paper proposes a unified framework for generalizing CNN architectures to non-Euclidean domains (graphs and manifolds) and learning local, stationary, and compositional task-specific features, and tests the proposed method on standard tasks from image, graph, and 3D shape analysis, showing that it consistently outperforms previous approaches.
...