Corpus ID: 170078827

Graph Normalizing Flows

@inproceedings{Liu2019GraphNF,
  title={Graph Normalizing Flows},
  author={Jenny Liu and Aviral Kumar and Jimmy Ba and Jamie Ryan Kiros and Kevin Swersky},
  booktitle={NeurIPS},
  year={2019}
}
We introduce graph normalizing flows: a new, reversible graph neural network model for prediction and generation. In the unsupervised case, we combine graph normalizing flows with a novel graph auto-encoder to create a generative model of graph structures. Our model is permutation-invariant, generating entire graphs with a single feed-forward pass, and achieves competitive results with the state-of-the-art auto-regressive models, while being better suited to parallel computing architectures.
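
The reversibility the abstract refers to comes from normalizing-flow coupling steps applied to node features. Below is a minimal numpy sketch of that affine coupling pattern: features are split into two halves and one half is updated from a message-passing transform of the other, so the step can be inverted exactly. The message_pass function, weights, and toy graph here are illustrative placeholders, not the paper's architecture.

import numpy as np

def message_pass(adj, h, w):
    # placeholder one-step message passing: aggregate neighbours, then a nonlinearity
    return np.tanh(adj @ h @ w)

def coupling_forward(adj, h0, h1, w_scale, w_shift):
    # affine coupling: update one half of the features from the other half
    s = message_pass(adj, h1, w_scale)   # log-scale
    t = message_pass(adj, h1, w_shift)   # shift
    return h0 * np.exp(s) + t, h1

def coupling_inverse(adj, z0, z1, w_scale, w_shift):
    # exact inverse: recompute s and t from the untouched half and undo the map
    s = message_pass(adj, z1, w_scale)
    t = message_pass(adj, z1, w_shift)
    return (z0 - t) * np.exp(-s), z1

# toy usage: 4 nodes, 3-dimensional feature halves, random placeholder weights
rng = np.random.default_rng(0)
adj = (rng.random((4, 4)) < 0.5).astype(float)
h0, h1 = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
w_s, w_t = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
z0, z1 = coupling_forward(adj, h0, h1, w_s, w_t)
r0, r1 = coupling_inverse(adj, z0, z1, w_s, w_t)
assert np.allclose(r0, h0) and np.allclose(r1, h1)

Stacking such steps while alternating which half is updated yields an invertible graph network whose intermediate activations need not be stored for backpropagation.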

Citations

Auto-decoding Graphs

TLDR
The presented model outperforms the state of the art by a factor of 1.5 in mean accuracy and average rank across at least three different graph statistics, with a 2x speedup during inference.

Graph Embedding VAE: A Permutation Invariant Model of Graph Structure

TLDR
This work presents a permutation-invariant latent-variable generative model relying on graph embeddings to encode structure, which is highly scalable to large graphs, with likelihood evaluation and generation in O(|V| + |E|).

Permutation Invariant Graph Generation via Score-Based Generative Modeling

TLDR
A permutation-invariant approach to modeling graphs using the recent framework of score-based generative modeling, in which a permutation-equivariant, multi-channel graph neural network is designed to model the gradient of the data distribution at the input graph.
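
As a rough illustration of the two ingredients named above, the sketch below pairs a denoising score-matching loss with one annealed-Langevin sampling step over a dense adjacency matrix. The equivariant network itself is replaced by a hypothetical score_fn placeholder.

import numpy as np

def dsm_loss(score_fn, adj, sigma, rng):
    # denoising score matching: the target score of the Gaussian perturbation is -noise / sigma^2
    noise = rng.normal(scale=sigma, size=adj.shape)
    target = -noise / sigma**2
    return np.mean((score_fn(adj + noise, sigma) - target) ** 2)

def langevin_step(score_fn, adj, sigma, step, rng):
    # one Langevin update: follow the estimated score and inject fresh noise
    return adj + 0.5 * step * score_fn(adj, sigma) + np.sqrt(step) * rng.normal(size=adj.shape)

# toy usage with a placeholder score (that of a standard normal perturbed by sigma)
toy_score = lambda a, sigma: -a / (1.0 + sigma**2)
rng = np.random.default_rng(0)
adj = rng.random((5, 5))
loss = dsm_loss(toy_score, adj, 0.5, rng)
sample = langevin_step(toy_score, adj, 0.5, 1e-3, rng)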

FlowGEN: A Generative Model for Flow Graphs

TLDR
FlowGEN is introduced, an implicit generative model for flow graphs that learns how to jointly generate graph topologies and flows with diverse dynamics directly from data using a novel (flow) graph neural network.

Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations

TLDR
A novel score-based generative model for graphs with a continuous-time framework is proposed; it generates molecules that lie close to the training distribution yet do not violate the chemical valency rule, demonstrating the effectiveness of the system of SDEs in modeling node-edge relationships.

Training Graph Neural Networks with 1000 Layers

TLDR
It is found that reversible connections in combination with deep network architectures enable the training of overparameterized GNNs that outperform existing methods on multiple datasets; the resulting model is the deepest GNN in the literature by one order of magnitude.
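
The memory saving comes from the standard reversible-residual pattern: activations are split into two groups and each group is updated from the other, so inputs can be recomputed from outputs instead of being cached. A minimal sketch, with plain placeholder functions standing in for the GNN blocks:

import numpy as np

def rev_block_forward(x1, x2, f, g):
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_block_inverse(y1, y2, f, g):
    # recover the inputs exactly, so no activations need to be stored
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

# toy usage: tanh stands in for a GNN block
f = g = np.tanh
x1, x2 = np.ones((3, 2)), np.zeros((3, 2))
y1, y2 = rev_block_forward(x1, x2, f, g)
r1, r2 = rev_block_inverse(y1, y2, f, g)
assert np.allclose(r1, x1) and np.allclose(r2, x2)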

Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning

TLDR
This work proposes a permutation-invariant variational autoencoder for graph-structured data that indirectly learns to match the node order of input and output graphs, without imposing a particular node order or performing expensive graph matching.

Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization

TLDR
This work proposes a novel view of the essential graph information and advocates capturing it as the goal of transferable GNN training, which motivates the design of a novel GNN framework based on ego-graph information maximization to analytically achieve this goal.

Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction

TLDR
This work proposes Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs) that learn useful structural features from an adjacency matrix and estimate overlapped neighborhoods for link prediction, generalizing neighborhood overlap-based heuristic methods to handle multi-hop neighborhoods.
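
For context, the overlap heuristics such methods generalize can be computed directly from the adjacency matrix; a small sketch of two classical link-prediction scores (common neighbours and Adamic-Adar), not the Neo-GNN model itself:

import numpy as np

def common_neighbors(adj):
    # entry (i, j) counts the neighbours shared by nodes i and j
    return adj @ adj

def adamic_adar(adj):
    # like common neighbours, but shared high-degree neighbours count for less
    deg = adj.sum(1)
    w = np.where(deg > 1, 1.0 / np.log(np.maximum(deg, 2)), 0.0)
    return adj @ np.diag(w) @ adj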
...

References

Showing 1-10 of 29 references

Variational Graph Auto-Encoders

TLDR
The variational graph auto-encoder (VGAE) is introduced, a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) that can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.
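
In outline, the VGAE encoder produces a Gaussian posterior over node embeddings from the normalized adjacency matrix and node features, and an inner-product decoder scores edges. A minimal numpy sketch with single-layer placeholder weights rather than the paper's two-layer GCN encoder:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(adj_norm, x, w_mu, w_logvar):
    # mean and log-variance of q(Z | X, A) for every node
    return adj_norm @ x @ w_mu, adj_norm @ x @ w_logvar

def reparameterize(mu, logvar, rng):
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    # edge probabilities from inner products of latent node embeddings
    return sigmoid(z @ z.T)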

Gated Graph Sequence Neural Networks

TLDR
This work studies feature learning techniques for graph-structured inputs and achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.

GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models

TLDR
The experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50 times larger than previous deep models.
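
The auto-regressive scheme this refers to adds nodes one at a time and, for each new node, generates its edges to the nodes created so far. The toy sketch below keeps only that structure; the fixed edge probability stands in for the learned RNN edge model.

import numpy as np

def generate_graph(n_nodes, edge_prob, rng):
    adj = np.zeros((n_nodes, n_nodes))
    for i in range(1, n_nodes):
        edges = (rng.random(i) < edge_prob).astype(float)  # edges from node i to nodes 0..i-1
        adj[i, :i] = edges
        adj[:i, i] = edges                                  # keep the graph undirected
    return adj

sample = generate_graph(8, 0.3, np.random.default_rng(0))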

Semi-Supervised Classification with Graph Convolutional Networks

TLDR
A scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks operating directly on graphs, and that outperforms related methods by a significant margin.
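
The layer-wise propagation rule behind this variant is H' = sigma(D^-1/2 (A + I) D^-1/2 H W); a minimal numpy version (the paper stacks two such layers with a softmax output):

import numpy as np

def gcn_layer(adj, h, w):
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(1)))   # symmetric degree normalisation
    return np.maximum(0.0, d_inv_sqrt @ a_hat @ d_inv_sqrt @ h @ w)  # ReLU activation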

A new model for learning in graph domains

TLDR
A new neural model, called graph neural network (GNN), is proposed that is capable of directly processing graphs; it extends recursive neural networks and can be applied to most practically useful kinds of graphs, including directed, undirected, labelled, and cyclic graphs.

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
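
One GraphSAGE layer with the mean aggregator can be sketched as: sample a fixed number of neighbours per node, average their features, concatenate with the node's own features, and apply a learned linear map. The weights, sample size, and neighbour lists below are placeholders, and every node is assumed to have at least one neighbour.

import numpy as np

def sage_layer(neighbors, h, w, n_samples, rng):
    out = []
    for v, nbrs in enumerate(neighbors):                    # neighbors: list of index lists
        sampled = rng.choice(nbrs, size=min(n_samples, len(nbrs)), replace=False)
        agg = h[sampled].mean(axis=0)                       # mean aggregation
        out.append(np.concatenate([h[v], agg]) @ w)         # combine self and neighbourhood
    z = np.maximum(0.0, np.array(out))                      # ReLU
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)  # L2-normalise embeddings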

Revisiting Semi-Supervised Learning with Graph Embeddings

TLDR
On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, the proposed semi-supervised learning framework shows improved performance over many of the existing models.

The Graph Neural Network Model

TLDR
A new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains, and implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space.
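
The function τ(G, n) is realised by iterating a node-state update to an approximate fixed point and then applying an output map. A minimal numpy sketch with small random placeholder weights (the paper constrains the update to be a contraction; tanh with small weights only approximates that):

import numpy as np

def gnn_tau(adj, labels, w_f, w_g, iters=50):
    state = np.zeros_like(labels)
    for _ in range(iters):                        # fixed-point iteration over node states
        state = np.tanh(labels @ w_f + adj @ state)
    return state @ w_g                            # map each node's state into R^m

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.3).astype(float) * 0.2   # scaled to keep the update stable
labels = rng.normal(size=(6, 4))
outputs = gnn_tau(adj, labels, 0.1 * rng.normal(size=(4, 4)), rng.normal(size=(4, 3)))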

Attention is All you Need

TLDR
A new, simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as shown by applying it successfully to English constituency parsing with both large and limited training data.
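
The building block is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; a minimal single-head numpy version:

import numpy as np

def attention(q, k, v):
    scores = q @ k.T / np.sqrt(k.shape[-1])              # similarity of queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ v                                   # weighted sum of values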

Generative Moment Matching Networks

TLDR
This work formulates a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using maximum mean discrepancy (MMD) to learn to generate codes that can then be decoded to produce samples.
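
The training signal is the maximum mean discrepancy between generated and real samples under a kernel; a minimal numpy sketch of the (biased) squared-MMD estimate with a Gaussian kernel, leaving out the generator network itself:

import numpy as np

def gaussian_kernel(x, y, bandwidth):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # squared MMD: E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return (gaussian_kernel(x, x, bandwidth).mean()
            + gaussian_kernel(y, y, bandwidth).mean()
            - 2 * gaussian_kernel(x, y, bandwidth).mean())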