Corpus ID: 182952496

Redundancy-Free Computation Graphs for Graph Neural Networks

@article{Jia2019RedundancyFreeCG,
  title={Redundancy-Free Computation Graphs for Graph Neural Networks},
  author={Zhihao Jia and Sina Lin and Rex Ying and Jiaxuan You and Jure Leskovec and Alexander Aiken},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.03707}
}
Graph Neural Networks (GNNs) are based on repeatedly aggregating information from each node's neighbors in a graph. However, because different nodes share common neighbors, the same aggregations are recomputed many times, which is inefficient. We propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN graph representation that explicitly avoids this redundancy by managing intermediate aggregation results hierarchically, eliminating repeated computations and unnecessary data transfers in GNN…
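
To make the redundancy concrete, below is a minimal NumPy sketch (the toy graph and names are hypothetical, not the paper's implementation): two nodes share the neighbor pair {2, 3}, so a HAG-style hierarchy computes that partial aggregate once and reuses it.

    import numpy as np

    # Toy graph (hypothetical): nodes 0 and 1 share the neighbors {2, 3}.
    X = np.arange(12, dtype=float).reshape(4, 3)   # one feature row per node
    neighbors = {0: [2, 3], 1: [2, 3]}

    # Standard GNN-style aggregation: each node sums its neighbors'
    # features, so X[2] + X[3] is computed twice.
    naive = {v: sum(X[u] for u in nbrs) for v, nbrs in neighbors.items()}

    # HAG-style aggregation: compute the shared partial sum once and
    # reuse it for every node whose neighbor list contains {2, 3}.
    shared_23 = X[2] + X[3]            # intermediate aggregation result
    hag = {0: shared_23, 1: shared_23}

    assert all(np.allclose(naive[v], hag[v]) for v in neighbors)

On real graphs the savings compound, since popular neighbor pairs recur across many nodes and across GNN layers.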

Citations

Benchmarking Graph Neural Networks
TLDR
A reproducible GNN benchmarking framework is introduced that makes it easy for researchers to add new models for arbitrary datasets, together with a principled investigation of recent Weisfeiler-Lehman GNNs (WL-GNNs) compared to message passing-based graph convolutional networks (GCNs).
Semi-Supervised Graph Neural Network with Probabilistic Modeling to Mitigate Uncertainty
TLDR
PGNN learns a distribution over network weights and encodings, thus addressing the epistemic and aleatoric uncertainty inherent in network parameters and model predictions, and generates an ensemble of models in one iteration, computing an estimate of credible intervals over the predictions.
Understanding the Design-Space of Sparse/Dense Multiphase GNN dataflows on Spatial Accelerators
TLDR
This work proposes a taxonomy to describe all possible choices for mapping the dense and sparse phases of GNN inference, spatially and temporally over a spatial accelerator, capturing both the intra-phase dataflow and the inter-phase (pipelined) dataflow.
On Greedy Approaches to Hierarchical Aggregation
TLDR
This work analyzes greedy algorithms for the Hierarchical Aggregation (HAG) problem, a strategy introduced in [Jia et al., KDD 2020] for speeding up learning on Graph Neural Networks, and proves that this greedy algorithm does satisfy a (weaker) approximation guarantee.
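
As a rough sketch of one such greedy step, assuming the strategy of repeatedly merging the pair of aggregation sources that co-occurs in the most neighbor lists (names and data here are illustrative):

    from collections import Counter
    from itertools import combinations

    def greedy_hag_step(neighbor_lists):
        # Count how often each unordered pair of sources co-occurs
        # in some node's neighbor list.
        pair_counts = Counter()
        for nbrs in neighbor_lists.values():
            pair_counts.update(combinations(sorted(nbrs), 2))
        if not pair_counts:
            return None
        (a, b), count = pair_counts.most_common(1)[0]
        if count < 2:  # the best pair is unshared; merging saves nothing
            return None
        agg = ("agg", a, b)  # new node holding the partial aggregate of a and b
        for v, nbrs in neighbor_lists.items():
            if a in nbrs and b in nbrs:
                neighbor_lists[v] = [u for u in nbrs if u not in (a, b)] + [agg]
        return agg

    graph = {0: [2, 3, 4], 1: [2, 3]}
    print(greedy_hag_step(graph))  # ('agg', 2, 3)
    print(graph)                   # {0: [4, ('agg', 2, 3)], 1: [('agg', 2, 3)]}
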
Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators
TLDR
A taxonomy is proposed to describe all possible choices for mapping the dense and sparse phases of GNNs spatially and temporally over a spatial accelerator, capturing both the intra-phase dataflow and the inter-phase (pipelined) dataflow.

References

SHOWING 1-10 OF 20 REFERENCES
Hierarchical Graph Representation Learning with Differentiable Pooling
TLDR
DiffPool is proposed, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion.
Inductive Representation Learning on Large Graphs
TLDR
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
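
A minimal sketch of the aggregate-and-update step the summary refers to, using a mean aggregator; the weights and shapes are illustrative stand-ins, not the reference implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out = 4, 8
    W = rng.normal(size=(2 * d_in, d_out))  # stand-in weights (untrained)

    def sage_layer(h, neighbors):
        # One layer: concatenate a node's own embedding with the mean of
        # its neighbors' embeddings, then apply a linear map and ReLU.
        out = np.zeros((h.shape[0], d_out))
        for v, nbrs in neighbors.items():
            agg = h[nbrs].mean(axis=0) if nbrs else np.zeros(h.shape[1])
            out[v] = np.maximum(np.concatenate([h[v], agg]) @ W, 0.0)
        return out

    h0 = rng.normal(size=(3, d_in))  # input node features, e.g. text attributes
    print(sage_layer(h0, {0: [1, 2], 1: [0], 2: [0, 1]}).shape)  # (3, 8)

Because the layer only consumes local features, it applies unchanged to nodes unseen during training, which is what makes the framework inductive.
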
Simplifying Graph Convolutional Networks
TLDR
This paper successively removes nonlinearities and collapses weight matrices between consecutive layers, then theoretically analyzes the resulting linear model and shows that it corresponds to a fixed low-pass filter followed by a linear classifier.
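
In symbols, the simplification collapses a K-layer GCN into a single linear model (notation follows the cited paper, with \tilde{A} the adjacency matrix with self-loops and \tilde{D} its degree matrix):

    \hat{Y} = \mathrm{softmax}\left( S^{K} X \Theta \right),
    \qquad S = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}

Here S^{K} acts as the fixed low-pass filter on the node features X, and \Theta is the single collapsed weight matrix of the linear classifier.
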
Tree decompositions and social graphs
TLDR
It is shown that TD methods can identify structures that correlate strongly with the core-periphery structure of realistic networks, even when using simple greedy heuristics, and it is proved that the only two impediments to low-distortion hyperbolic embedding are high tree-width and long geodesic cycles.
Semi-Supervised Classification with Graph Convolutional Networks
TLDR
A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs, which outperforms related methods by a significant margin.
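
For reference, the layer-wise propagation rule of the convolutional variant this summary refers to is

    H^{(l+1)} = \sigma\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right)

with \tilde{A} = A + I_N the adjacency matrix with added self-loops, \tilde{D} its degree matrix, W^{(l)} the layer's trainable weights, and \sigma a nonlinearity such as ReLU.
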
FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling
TLDR
Enhanced with importance sampling, FastGCN is not only efficient for training but also generalizes well at inference, being orders of magnitude more efficient while its predictions remain comparably accurate.
Subgraph Matching Kernels for Attributed Graphs
TLDR
It is shown that subgraph matching kernels generalize several known kernels, and a graph-theoretic algorithm is proposed, inspired by a classical relation, observed by Levi (1973), between common subgraphs of two graphs and cliques in their product graph.
Deep Graph Kernels
TLDR
A unified framework to learn latent representations of sub-structures for graphs, inspired by recent advances in language modeling and deep learning, which achieves significant improvements in classification accuracy over state-of-the-art graph kernels.
Learning both Weights and Connections for Efficient Neural Network
TLDR
A method that reduces the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections and pruning redundant ones using a three-step procedure.
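
A toy sketch of the three-step train/prune/retrain idea, using simple magnitude thresholding on one weight matrix; this is illustrative only, not the paper's pipeline.

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(8, 8))              # step 1: weights after initial training (stub)

    threshold = np.quantile(np.abs(W), 0.9)  # keep only the largest-magnitude 10%
    mask = np.abs(W) >= threshold            # step 2: prune low-magnitude connections
    W = W * mask

    # Step 3: retrain with the mask held fixed so pruned weights stay zero
    # (a real loop would multiply each gradient update by `mask`).
    print(f"sparsity: {1 - mask.mean():.2f}")  # ~0.90
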
Complexity of finding embeddings in a k-tree
TLDR
This work determines the complexity status of two problems related to finding the smallest number k such that a given graph is a partial k-tree, and presents an algorithm with polynomially bounded (but exponential in k) worst-case time complexity.