Corpus ID: 233307262

Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning

@inproceedings{Winter2021PermutationInvariantVA,
  title={Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning},
  author={Robin Winter and Frank No{\'e} and Djork-Arn{\'e} Clevert},
  booktitle={NeurIPS},
  year={2021}
}
Recently, there has been great success in applying deep neural networks to graph-structured data. Most work, however, focuses on either node- or graph-level supervised learning, such as node, link or graph classification, or on node-level unsupervised learning (e.g., node clustering). Despite its wide range of possible applications, graph-level unsupervised representation learning has not received much attention yet. This might be mainly attributed to the high representation complexity of graphs… 
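The core property in the paper's title can be illustrated with a minimal sketch (not the paper's model): a graph-level embedding is permutation-invariant if relabeling the nodes leaves it unchanged, and a sum readout over node features is the simplest construction with that property. All names below are illustrative.

```python
import numpy as np

def graph_embedding(adj, feats):
    """One round of neighbor aggregation followed by a sum readout."""
    h = np.tanh(adj @ feats)   # aggregate neighbor features per node
    return h.sum(axis=0)       # sum over nodes: independent of node order

rng = np.random.default_rng(0)
n = 5
adj = rng.integers(0, 2, (n, n))
adj = np.triu(adj, 1)
adj = adj + adj.T                       # symmetric adjacency, no self-loops
feats = rng.standard_normal((n, 3))

perm = rng.permutation(n)
P = np.eye(n)[perm]                     # permutation matrix

z1 = graph_embedding(adj, feats)
z2 = graph_embedding(P @ adj @ P.T, P @ feats)  # same graph, nodes relabeled
assert np.allclose(z1, z2)              # embedding is unchanged
```

The invariance follows because the sum readout discards row order; the paper's contribution lies in building a VAE around such a representation, which this sketch does not attempt.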


A Fully Differentiable Set Autoencoder

This work proposes a generic, robust and systematic model that is able to combine multiple data modalities in a permutation and modes-number-invariant fashion, both fundamental properties to properly face changes in data type content and availability.

Unsupervised Learning of Group Invariant and Equivariant Representations

This work proposes a general learning strategy based on an encoder-decoder framework in which the latent representation is separated into an invariant term and an equivariant group-action component, and presents a construction valid for any group G, both discrete and continuous.

Generating stable molecules using imitation and reinforcement learning

This work learns basic chemical rules from imitation learning on the GDB-11 database to create an initial model applicable for all stoichiometries, and applies the model to larger molecules to show how RL further refines the IL model in domains far from the training data.

Masked Graph Auto-Encoder Constrained Graph Pooling

This work proposes a novel and accessible technique called Masked Graph Auto-encoder constrained Pooling (MGAP), which enables vanilla node drop pooling methods to retain sufficient effective graph information from both node-attribute and network-topology perspectives.

References

Showing 1–10 of 76 references

Hierarchical Graph Representation Learning with Differentiable Pooling

DiffPool is proposed, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion.
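DiffPool's coarsening step is well documented and can be sketched roughly: a soft assignment matrix S (n nodes to k clusters, rows summing to 1) pools features and adjacency as X' = SᵀX and A' = SᵀAS. How S is learned (a separate GNN branch) is omitted here; the code below is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def diffpool_coarsen(A, X, S):
    """Pool adjacency and features with a soft cluster assignment S."""
    return S.T @ A @ S, S.T @ X     # A' = SᵀAS,  X' = SᵀX

rng = np.random.default_rng(3)
n, d, k = 8, 4, 3
A = rng.random((n, n))
A = (A + A.T) / 2                   # symmetric weighted adjacency
X = rng.standard_normal((n, d))

logits = rng.standard_normal((n, k))
S = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row softmax

A_c, X_c = diffpool_coarsen(A, X, S)
assert A_c.shape == (k, k) and X_c.shape == (k, d)
```

Because S is differentiable (a softmax over learned logits), the coarsening can be stacked and trained end-to-end, which is the point of the paper.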

Deep Neural Networks for Learning Graph Representations

A novel model for learning graph representations that generates a low-dimensional vector representation for each vertex by capturing graph structural information directly, and that outperforms other state-of-the-art models on such tasks.

Permutation Invariant Graph Generation via Score-Based Generative Modeling

A permutation-invariant approach to modeling graphs using the recent framework of score-based generative modeling, which designs a permutation-equivariant, multi-channel graph neural network to model the gradient of the data distribution at the input graph.
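The equivariance property this blurb relies on can be checked with a generic message-passing layer f(A, X) = relu(AXW) (a hedged stand-in, not the paper's multi-channel architecture): permuting the node order permutes the layer's output rows the same way.

```python
import numpy as np

def gnn_layer(adj, feats, W):
    """A single generic message-passing layer: relu(A X W)."""
    return np.maximum(adj @ feats @ W, 0.0)

rng = np.random.default_rng(1)
n, d = 4, 3
A = rng.integers(0, 2, (n, n)).astype(float)
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, d))

P = np.eye(n)[rng.permutation(n)]       # permutation matrix
out = gnn_layer(A, X, W)
out_perm = gnn_layer(P @ A @ P.T, P @ X, W)
assert np.allclose(P @ out, out_perm)   # equivariance: f(PAPt, PX) = P f(A, X)
```

Equivariance of the score network is what makes the induced distribution over graphs permutation-invariant, which is the paper's key argument.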

graph2vec: Learning Distributed Representations of Graphs

This work proposes a neural embedding framework named graph2vec to learn data-driven distributed representations of arbitrary sized graphs that achieves significant improvements in classification and clustering accuracies over substructure representation learning approaches and are competitive with state-of-the-art graph kernels.

Adversarially Regularized Graph Autoencoder for Graph Embedding

A novel adversarial graph embedding framework for graph data that encodes the topological structure and node content in a graph to a compact representation, on which a decoder is trained to reconstruct the graph structure.

Variational Graph Auto-Encoders

The variational graph auto-encoder (VGAE) is introduced, a framework for unsupervised learning on graph-structured data based on the variational autoencoder (VAE) that can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.
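VGAE's decoder is a simple inner product: edge probabilities are reconstructed as sigmoid(zᵢ·zⱼ) from latent node embeddings Z. The sketch below shows just that decoder; the GCN encoder producing the mean and log-variance of Z is omitted.

```python
import numpy as np

def inner_product_decoder(Z):
    """Reconstruct edge probabilities as sigmoid(Z Zᵀ)."""
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))   # elementwise sigmoid

rng = np.random.default_rng(2)
Z = rng.standard_normal((6, 2))            # 6 nodes, 2-dim latents
A_hat = inner_product_decoder(Z)

assert A_hat.shape == (6, 6)
assert np.allclose(A_hat, A_hat.T)         # reconstruction is symmetric
assert ((A_hat > 0) & (A_hat < 1)).all()   # valid edge probabilities
```

Note that this decoder reconstructs edges node-by-node, which is why VGAE is a node-level rather than graph-level representation learner, the gap the main paper addresses.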

GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders

This work proposes to sidestep the hurdles associated with linearization of discrete structures by having a decoder, formulated as a variational autoencoder, output a probabilistic fully-connected graph of a predefined maximum size directly, all at once.

Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity

UGRAPHEMB is a general framework that provides a novel means of performing graph-level embedding in a completely unsupervised and inductive manner, and achieves competitive accuracy in the tasks of graph classification, similarity ranking, and graph visualization.

Conditional Structure Generation through Graph Variational Generative Adversarial Nets

This work formulates the novel problem of conditional structure generation, and proposes a novel unified model of graph variational generative adversarial nets (CondGen) to handle the intrinsic challenges of flexible context-structure conditioning and permutation-invariant generation.

A Comprehensive Survey on Graph Neural Networks

This article provides a comprehensive overview of graph neural networks (GNNs) in the data mining and machine learning fields and proposes a new taxonomy that divides the state-of-the-art GNNs into four categories, namely recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial–temporal GNNs.
...