LiftPool: Lifting-based Graph Pooling for Hierarchical Graph Representation Learning

Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
Graph pooling has been increasingly considered for graph neural networks (GNNs) to facilitate hierarchical graph representation learning. Existing graph pooling methods commonly consist of two stages, i.e., selecting the top-ranked nodes and removing the remaining nodes to construct a coarsened graph representation. However, the local structural information of the removed nodes is inevitably dropped in these methods, due to the inherent coupling of nodes (location) and their features (signals)…
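The two-stage procedure the abstract criticizes can be sketched as follows. This is a minimal, generic top-k pooling sketch (function name and the score-gating choice are illustrative assumptions, not LiftPool's actual code):

```python
import numpy as np

def topk_pool(adj, x, scores, ratio=0.5):
    """Generic top-k graph pooling: keep the top-ranked nodes, drop the rest.

    adj:    (n, n) adjacency matrix
    x:      (n, d) node features
    scores: (n,)   learned importance score per node
    """
    n = adj.shape[0]
    k = max(1, int(np.ceil(ratio * n)))
    idx = np.argsort(scores)[::-1][:k]        # indices of the top-k nodes
    # Gate the kept features by their scores (common trick to keep the
    # ranking differentiable in learned variants)
    x_pool = x[idx] * scores[idx, None]
    # Induced subgraph on the kept nodes: features and edges of the removed
    # nodes are simply discarded -- the information loss LiftPool targets
    adj_pool = adj[np.ix_(idx, idx)]
    return adj_pool, x_pool, idx
```

Note how the removed nodes' features never influence `x_pool`; lifting-based schemes instead propagate that local information into the retained nodes before coarsening.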
1 Citation


Discriminative Graph Representation Learning with Distributed Sampling

A novel node sampling strategy is developed that is equivalent to performing the difficult down-pooling operation on non-grid graph data; interpretability studies illustrate the model's ability to extract discriminative substructures.



ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations

This work proposes ASAP (Adaptive Structure Aware Pooling), a sparse and differentiable pooling method that addresses the limitations of previous graph pooling architectures and shows that combining existing GNN architectures with ASAP leads to state-of-the-art results on multiple graph classification benchmarks.

Hierarchical Graph Representation Learning with Differentiable Pooling

DiffPool is proposed, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion.

How Powerful are Graph Neural Networks?

This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.

Self-Attention Graph Pooling

This paper proposes a graph pooling method based on self-attention using graph convolution, which achieves superior graph classification performance on the benchmark datasets using a reasonable number of parameters.

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.

An End-to-End Deep Learning Architecture for Graph Classification

This paper designs a localized graph convolution model and shows its connection with two graph kernels, and designs a novel SortPooling layer which sorts graph vertices in a consistent order so that traditional neural networks can be trained on the graphs.
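The SortPooling idea of imposing a consistent vertex order can be sketched as follows (a simplified illustration under the assumption that nodes are sorted by their last feature channel, with ties broken by earlier channels; the function name is hypothetical):

```python
import numpy as np

def sort_pooling(x, k):
    """Sort node feature rows by the last channel (descending), with ties
    broken by earlier channels, then keep a fixed number k of nodes so that
    a conventional network can consume the result; pad with zeros if the
    graph has fewer than k nodes."""
    # np.lexsort treats its LAST key as primary, so pass channels in order:
    # the last feature channel becomes the primary sort key
    order = np.lexsort(tuple(x[:, c] for c in range(x.shape[1])))[::-1]
    x_sorted = x[order]
    if x_sorted.shape[0] >= k:
        return x_sorted[:k]
    pad = np.zeros((k - x_sorted.shape[0], x.shape[1]))
    return np.vstack([x_sorted, pad])
```

Fixing the output to exactly k rows is what lets graphs of varying size feed a standard 1-D convolution and dense layers downstream.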

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
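The inductive aggregation step at the heart of GraphSAGE can be sketched with its mean aggregator (a minimal single-layer sketch; the function name and dense adjacency representation are simplifying assumptions):

```python
import numpy as np

def sage_mean_layer(adj, x, w_self, w_neigh):
    """One GraphSAGE layer with the mean aggregator: each node combines its
    own features with the mean of its neighbors' features. The layer is
    inductive because the same weight matrices apply to previously unseen
    nodes -- only local features and neighborhoods are needed."""
    deg = adj.sum(axis=1, keepdims=True)
    neigh_mean = (adj @ x) / np.maximum(deg, 1)   # mean over neighbors
    h = x @ w_self + neigh_mean @ w_neigh         # combine self and neighborhood
    return np.maximum(h, 0)                       # ReLU nonlinearity
```

Because no per-node embedding table is learned, the same trained weights generate embeddings for nodes that were absent at training time.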

Graph Neural Networks With Convolutional ARMA Filters

A novel graph convolutional layer inspired by the auto-regressive moving average (ARMA) filter is proposed that provides a more flexible frequency response, is more robust to noise, and better captures the global graph structure.

CayleyNets: Graph Convolutional Neural Networks With Complex Rational Spectral Filters

A new spectral-domain convolutional architecture for deep learning on graphs, built on a new class of parametric rational complex functions (Cayley polynomials) that allows spectral filters specializing on frequency bands of interest to be computed efficiently.

Deep Convolutional Networks on Graph-Structured Data

This paper develops an extension of Spectral Networks that incorporates a graph estimation procedure, tested on large-scale classification problems, matching or improving over Dropout Networks with far fewer parameters to estimate.