Corpus ID: 131777802

PAN: Path Integral Based Convolution for Deep Graph Neural Networks

@article{Ma2019PANPI,
  title={PAN: Path Integral Based Convolution for Deep Graph Neural Networks},
  author={Zheng Ma and Ming Li and Yuguang Wang},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.10996}
}
Convolution operations designed for graph-structured data usually utilize the graph Laplacian, which can be seen as message passing between the adjacent neighbors through a generic random walk. In this paper, we propose PAN, a new graph convolution framework that involves every path linking the message sender and receiver with learnable weights depending on the path length, which corresponds to the maximal entropy random walk. PAN generalizes the graph Laplacian to a new transition matrix we… 
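The construction the abstract describes — summing weighted powers of the adjacency matrix over path lengths and normalizing the result — can be sketched in a few lines of numpy. This is a hedged illustration under assumptions, not the authors' code: the path-length cutoff and the weights `w_n` (which are learnable in PAN) are fixed arbitrary values here, and `pan_conv` is a hypothetical helper name.

```python
import numpy as np

def pan_conv(A, X, weights):
    """Illustrative PAN-style convolution: aggregate over all paths up to
    length L by summing weighted adjacency powers, then symmetrically
    normalize by the diagonal of row sums (Z^{-1/2} M Z^{-1/2})."""
    # MET-style matrix: M = sum_n w_n A^n (w_n learnable in the actual model)
    M = sum(w * np.linalg.matrix_power(A, n) for n, w in enumerate(weights))
    z_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(M.sum(axis=1), 1e-12)))
    return z_inv_sqrt @ M @ z_inv_sqrt @ X

# Toy graph: a path on 3 nodes, identity features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)
out = pan_conv(A, X, weights=[1.0, 0.5, 0.25])  # w_0..w_2 chosen arbitrarily
```

With only `w_0` nonzero the matrix collapses to the (normalized) identity, so the operation reduces to a no-op on the features; longer paths enter as higher adjacency powers.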


Path integral based convolution and pooling for graph neural networks

This work proposes path integral-based GNNs (PAN), a versatile framework that can be tailored for different graph data with varying sizes and structures, and achieves state-of-the-art performance on various graph classification/regression tasks.

Graph Neural Networks with Haar Transform-Based Convolution and Pooling: A Complete Guide

This work proposes a novel graph neural network, called HaarNet, that predicts graph labels with interrelated convolution and pooling strategies and outperforms various existing GNN models, especially on large datasets.

Fast Haar Transforms for Graph Neural Networks

Diffusion Improves Graph Learning

This work removes the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC), which leverages generalized graph diffusion and alleviates the problem of noisy and often arbitrarily defined edges in real graphs.
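Generalized graph diffusion of this kind replaces the one-hop adjacency with a weighted series of transition-matrix powers. A minimal sketch, assuming personalized-PageRank coefficients (theta_k = alpha * (1 - alpha)^k) and a truncated series — an illustration of the idea, not the GDC authors' implementation:

```python
import numpy as np

def diffusion_matrix(A, alpha=0.15, k=16):
    """Truncated generalized graph diffusion S = sum_k theta_k T^k with
    PPR coefficients; T is the column-stochastic transition matrix."""
    d = A.sum(axis=0)
    T = A / np.maximum(d, 1e-12)            # column-normalize: T[:, j] sums to 1
    S = np.zeros_like(A)
    Tk = np.eye(A.shape[0])                  # T^0
    for step in range(k):
        S += alpha * (1 - alpha) ** step * Tk
        Tk = T @ Tk
    return S

# Toy graph: a path on 3 nodes
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
S = diffusion_matrix(A, alpha=0.15, k=16)
```

Since each `T^k` is column-stochastic, the columns of the truncated `S` sum to `1 - (1 - alpha)**k`, approaching 1 as more terms are kept; in practice GDC additionally sparsifies `S` before using it as a new adjacency.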

Haar Graph Pooling

A new graph pooling operation based on compressive Haar transforms -- HaarPooling is proposed, which synthesizes the features of any given input graph into a feature vector of uniform size.

HaarPooling: Graph Pooling with Compressive Haar Basis

A new graph pooling operation based on compressive Haar transforms, called HaarPooling, is proposed, which achieves state-of-the-art performance on diverse graph classification problems.

Graph convolutional networks with higher-order pooling for semisupervised node classification

A GCN built on a novel higher-order pooling layer for semisupervised node classification on graph-structured data is proposed; experimental results show that the proposed model and its variants have lower computational complexity and achieve state-of-the-art node classification accuracy.

Class-Attentive Diffusion Network for Semi-Supervised Classification

Adaptive aggregation with Class-Attentive Diffusion (AdaCAD) is proposed, a new aggregation scheme that adaptively aggregates those K-hop neighbors likely to belong to the same class as the target node and significantly outperforms the state-of-the-art methods.

Neural Message Passing on High Order Paths

This work generalizes graph neural nets to pass messages and aggregate across higher order paths, which allows for information to propagate over various levels and substructures of the graph.

References

SHOWING 1-10 OF 40 REFERENCES

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

This work presents a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs.
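The fast localized filters in question are Chebyshev polynomials of the rescaled graph Laplacian, evaluated with the three-term recurrence so no eigendecomposition is needed. A rough numpy sketch under the usual assumption lambda_max ≈ 2 for the normalized Laplacian (illustrative, not the paper's code):

```python
import numpy as np

def cheb_filter(L_norm, X, theta):
    """ChebNet-style filtering: y = sum_k theta_k T_k(L~) X, where
    L~ = L_norm - I rescales the spectrum into [-1, 1]."""
    n = L_norm.shape[0]
    L_s = L_norm - np.eye(n)
    T_prev, T_curr = X, L_s @ X              # T_0(L~)X = X, T_1(L~)X = L~X
    out = theta[0] * T_prev
    if len(theta) > 1:
        out = out + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_next = 2 * (L_s @ T_curr) - T_prev  # Chebyshev recurrence
        out = out + theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out

# Normalized Laplacian of a 3-node path graph
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L_norm = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt
y = cheb_filter(L_norm, np.eye(3), theta=[0.5, 0.3, 0.2])
```

Because each term multiplies by the (sparse, in practice) Laplacian once, a K-term filter costs K sparse matrix-vector products and is exactly K-hop localized.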

Dynamic Filters in Graph Convolutional Networks

This work proposes a novel graph-convolutional network architecture that builds on a generic formulation that relaxes the 1-to-1 correspondence between filter weights and data elements around the center of the convolution.

LanczosNet: Multi-Scale Deep Graph Convolutional Networks

The Lanczos network (LanczosNet) is proposed, which uses the Lanczos algorithm to construct low-rank approximations of the graph Laplacian for graph convolution and facilitates both graph kernel learning and the learning of node embeddings.
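The low-rank approximation rests on the classical Lanczos iteration, which tridiagonalizes a symmetric matrix in a small Krylov subspace. A bare-bones sketch of that building block (no reorthogonalization, toy sizes — in the spirit of LanczosNet, not its actual implementation):

```python
import numpy as np

def lanczos(L, v0, m):
    """m steps of the Lanczos iteration on symmetric L: returns Q (n x m,
    orthonormal columns) and tridiagonal T with Q^T L Q = T."""
    n = L.shape[0]
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        Q[:, j] = q
        w = L @ q - b * q_prev
        alpha[j] = q @ w                     # diagonal entry
        w -= alpha[j] * q
        b = np.linalg.norm(w)
        beta[j] = b                          # off-diagonal entry
        if b < 1e-12:
            break                            # invariant subspace found
        q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return Q, T

# Combinatorial Laplacian of a 5-node path graph
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
Q, T = lanczos(L, np.arange(1.0, 6.0), m=3)
```

The eigenpairs of the small tridiagonal `T` then give a cheap low-rank surrogate for the Laplacian's spectrum, which is what lets LanczosNet reason about multi-scale diffusion efficiently.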

Diffusion-Convolutional Neural Networks

Through the introduction of a diffusion-convolution operation, it is shown how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification.

Attention-based Graph Neural Network for Semi-supervised Learning

A novel graph neural network is proposed that removes all the intermediate fully-connected layers and replaces the propagation layers with attention mechanisms that respect the structure of the graph; this approach is shown to outperform competing methods on benchmark citation network datasets.

Rethinking Knowledge Graph Propagation for Zero-Shot Learning

This work proposes a Dense Graph Propagation module with carefully designed direct links among distant nodes, exploiting the hierarchical structure of the knowledge graph through additional connections; the approach outperforms state-of-the-art zero-shot learning methods.

Gated Graph Sequence Neural Networks

This work studies feature learning techniques for graph-structured inputs and achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.

Spectral Networks and Locally Connected Networks on Graphs

This paper considers possible generalizations of CNNs to signals defined on more general domains without the action of a translation group, and proposes two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian.

Simplifying Graph Convolutional Networks

This paper successively removes nonlinearities and collapses weight matrices between consecutive layers, then theoretically analyzes the resulting linear model and shows that it corresponds to a fixed low-pass filter followed by a linear classifier.
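That collapsed linear model (SGC) amounts to one feature-preprocessing step — k rounds of propagation with the self-loop-normalized adjacency — followed by ordinary logistic regression. A hedged numpy sketch of the preprocessing (the function name is illustrative, not the authors' API):

```python
import numpy as np

def sgc_features(A, X, k=2):
    """SGC-style preprocessing: S = D^{-1/2} (A + I) D^{-1/2}, return S^k X.
    The classifier that follows is just a linear model on these features."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.linalg.matrix_power(S, k) @ X

# Toy graph: a path on 3 nodes, identity features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
feats = sgc_features(A, np.eye(3), k=2)
```

Because `S` is fixed, `S^k X` can be computed once before training, which is where SGC's speedup over a K-layer GCN comes from.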