Corpus ID: 43924638

Constrained Graph Variational Autoencoders for Molecule Design

@inproceedings{Liu2018ConstrainedGV,
  title={Constrained Graph Variational Autoencoders for Molecule Design},
  author={Qi Liu and Miltiadis Allamanis and Marc Brockschmidt and Alexander L. Gaunt},
  booktitle={NeurIPS},
  year={2018}
}
Graphs are ubiquitous data structures for representing interactions between entities. […] Our decoder assumes a sequential ordering of graph extension steps, and we discuss and analyze design choices that mitigate the potential downsides of this linearization. Experiments compare our approach with a wide range of baselines on the molecule generation task and show that our method is more successful at matching the statistics of the original dataset on semantically important metrics. Furthermore, we…
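To make the sequential, constrained decoding concrete, below is a minimal sketch of graph extension with a valence mask. It is not the paper's decoder: the MAX_VALENCE table, the greedy loop, and the random edge chooser are illustrative stand-ins for the learned model.

import random

# Minimal sketch of sequential graph extension with valence masking.
# NOT the paper's exact decoder: MAX_VALENCE and the random edge
# chooser below are illustrative assumptions.
MAX_VALENCE = {"C": 4, "N": 3, "O": 2}  # bond budget per atom type

def grow_molecule(atom_types, num_steps=10, seed=0):
    """Add edges one at a time, masking chemically invalid choices."""
    rng = random.Random(seed)
    used = [0] * len(atom_types)   # bonds consumed per atom
    edges = []
    for _ in range(num_steps):
        # Candidate edges whose endpoints still have free valence.
        candidates = [(i, j)
                      for i in range(len(atom_types))
                      for j in range(i + 1, len(atom_types))
                      if (i, j) not in edges
                      and used[i] < MAX_VALENCE[atom_types[i]]
                      and used[j] < MAX_VALENCE[atom_types[j]]]
        if not candidates:         # a learned decoder emits a stop symbol here
            break
        i, j = rng.choice(candidates)  # stand-in for a learned edge scorer
        edges.append((i, j))
        used[i] += 1
        used[j] += 1
    return edges

print(grow_molecule(["C", "C", "O", "N"]))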


Conditional Constrained Graph Variational Autoencoders for Molecule Design
TLDR
This work presents the Conditional Constrained Graph Variational Autoencoder (CCGVAE), which implements this key idea on top of a state-of-the-art model and shows improved results on several evaluation metrics on two commonly adopted molecule-generation datasets.
NeVAE: A Deep Generative Model for Molecular Graphs
TLDR
A novel variational autoencoder for molecular graphs is proposed, whose encoder and decoder are specially designed to account for the above properties by means of several technical innovations.
3DMolNet: A Generative Network for Molecular Structures
TLDR
This work proposes a new approach to efficiently generate molecular structures that are not restricted to a fixed size or composition, based on a variational autoencoder that learns a translation-, rotation-, and permutation-invariant low-dimensional representation of molecules.
Learning Multimodal Graph-to-Graph Translation for Molecular Optimization
TLDR
Diverse output distributions in the model are explicitly realized by low-dimensional latent vectors that modulate the translation process; experiments show that the model outperforms previous state-of-the-art baselines.
Gravity-Inspired Graph Autoencoders for Directed Link Prediction
TLDR
This paper presents a new gravity-inspired decoder scheme that can effectively reconstruct directed graphs from a node embedding, and achieves competitive results on three real-world graphs, outperforming several popular baselines.
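As a rough sketch of the gravity-inspired idea (notation assumed here, not copied from the paper): each node i has an embedding z_i and a learned scalar "mass" \tilde{m}_i, and the decoder scores the directed edge i \to j as

\hat{A}_{ij} = \sigma\!\left(\tilde{m}_j - \lambda \log \lVert z_i - z_j \rVert_2^2\right),

so that \hat{A}_{ij} \neq \hat{A}_{ji} in general, which is what lets an embedding-based decoder reconstruct edge directions.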
Likelihood-Free Inference and Generation of Molecular Graphs
TLDR
The approach extends generative adversarial networks by including an adversarial cycle-consistency loss to implicitly enforce the reconstruction property and demonstrates that LF-MolGAN more accurately learns the distribution over the space of molecules than all baselines.
Physics-Constrained Predictive Molecular Latent Space Discovery with Graph Scattering Variational Autoencoder
TLDR
This work presents a quantitative assessment of the latent space in terms of its predictive ability for organic molecules in the QM9 dataset and considers a Bayesian formalism to account for the limited-size training dataset.
D-VAE: A Variational Autoencoder for Directed Acyclic Graphs
TLDR
This paper proposes an asynchronous message passing scheme that allows encoding the computations on DAGs, rather than using existing simultaneous message passing schemes to encode local graph structures, and proposes a novel DAG variational autoencoder (D-VAE).
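A minimal sketch of the asynchronous scheme, assuming node ids already follow a topological order; the tanh cell and random weights are stand-ins for D-VAE's learned update function.

import numpy as np

def encode_dag(num_nodes, edges, dim=8, seed=0):
    """Update nodes one at a time in topological order, each seeing the
    final states of its predecessors (assumes ids 0..n-1 are topologically
    sorted; the tanh update is a stand-in for a learned cell)."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((dim, dim)) * 0.1     # message transform
    x = rng.standard_normal((num_nodes, dim)) * 0.1  # initial node features
    h = np.zeros((num_nodes, dim))
    preds = {v: [u for u, w in edges if w == v] for v in range(num_nodes)}
    for v in range(num_nodes):            # asynchronous: one node at a time
        msg = sum((h[u] for u in preds[v]), np.zeros(dim))
        h[v] = np.tanh(x[v] + msg @ W_in)
    return h[-1]                          # sink state summarizes the DAG

print(encode_dag(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))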
Simple and Effective Graph Autoencoders with One-Hop Linear Models
TLDR
It is shown that GCN encoders are actually unnecessarily complex for many applications, and it is proposed to replace them with significantly simpler and more interpretable linear models w.r.t. the direct one-hop adjacency matrix of the graph, involving fewer operations, fewer parameters, and no activation function.
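The proposed encoder is simple enough to sketch directly: Z = ÂXW with a symmetrically normalized adjacency Â and an inner-product decoder. The W below is random (untrained), purely for illustration.

import numpy as np

def linear_gae(A, X, dim=2, seed=0):
    """One-hop linear encoder Z = A_norm @ X @ W with a sigmoid inner-product
    decoder; W is an untrained random stand-in for learned weights."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt     # D^-1/2 (A+I) D^-1/2
    W = np.random.default_rng(seed).standard_normal((X.shape[1], dim))
    Z = A_norm @ X @ W                           # single linear one-hop step
    A_rec = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))     # reconstructed adjacency
    return Z, A_rec

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
Z, A_rec = linear_gae(A, np.eye(3))
print(A_rec.round(2))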
…

References

SHOWING 1-10 OF 36 REFERENCES
Junction Tree Variational Autoencoder for Molecular Graph Generation
TLDR
The junction tree variational autoencoder generates molecular graphs in two phases, first generating a tree-structured scaffold over chemical substructures and then combining them into a molecule with a graph message passing network, which allows molecules to be expanded incrementally while maintaining chemical validity at every step.
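A toy sketch of the two-phase idea (sample a tree over substructure labels, then assemble them): the vocabulary, random tree sampler, and assembly stub below are hypothetical stand-ins for the learned components.

import random

VOCAB = ["benzene", "C-C", "C=O", "C-N"]  # hypothetical substructure vocabulary

def sample_scaffold_tree(n_nodes=4, seed=0):
    """Phase 1: a random tree over substructure labels (attach each new
    node to a random earlier node); stands in for the learned tree decoder."""
    rng = random.Random(seed)
    labels = [rng.choice(VOCAB) for _ in range(n_nodes)]
    parents = [None] + [rng.randrange(i) for i in range(1, n_nodes)]
    return labels, parents

def assemble(labels, parents):
    """Phase 2: the real model scores candidate attachments with a graph
    network and checks chemical validity; here we just list tree edges."""
    return [(labels[i], labels[p]) for i, p in enumerate(parents)
            if p is not None]

labels, parents = sample_scaffold_tree()
print(assemble(labels, parents))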
Designing Random Graph Models Using Variational Autoencoders With Applications to Chemical Design
TLDR
Experiments reveal that the proposed variational autoencoder for graphs is able to learn and mimic the generative process of several well-known random graph models, and can be used to create new molecules more effectively than several state-of-the-art methods.
Learning Deep Generative Models of Graphs
TLDR
This work is the first and most general approach for learning generative models over arbitrary graphs, and opens new directions for moving away from restrictions of vector- and sequence-like knowledge representations, toward more expressive and flexible relational data structures.
GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models
TLDR
The experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50 times larger than previous deep models.
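A minimal sketch of the autoregressive recipe: emit nodes one at a time, and for each new node sample its edges to all earlier nodes. The fixed Bernoulli probability below is a stand-in for the learned RNN's edge distribution.

import random

def generate_graph(max_nodes=6, edge_prob=0.5, seed=3):
    """Autoregressive sketch: each new node v samples an edge indicator for
    every earlier node u; stopping on an all-empty row is a simplification
    of GraphRNN's learned end-of-sequence behavior."""
    rng = random.Random(seed)
    edges = []
    for v in range(1, max_nodes):
        row = [rng.random() < edge_prob for _ in range(v)]  # edges to 0..v-1
        if not any(row):
            break
        edges += [(u, v) for u, bit in enumerate(row) if bit]
    return edges

print(generate_graph())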
Gated Graph Sequence Neural Networks
TLDR
This work studies feature learning techniques for graph-structured inputs and achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.
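One gated propagation step is easy to sketch: aggregate neighbor states through an edge transform, then apply a GRU-style gated update. The weights below are random and the reset gate is omitted; a real GGNN learns the weights and unrolls several steps.

import numpy as np

def ggnn_step(A, H, seed=0):
    """One simplified GGNN propagation step: message passing over adjacency
    A followed by a gated update (reset gate omitted; weights untrained)."""
    rng = np.random.default_rng(seed)
    d = H.shape[1]
    W_msg = rng.standard_normal((d, d)) * 0.1
    W_z, U_z = [rng.standard_normal((d, d)) * 0.1 for _ in range(2)]
    M = A @ H @ W_msg                               # sum messages from neighbors
    z = 1.0 / (1.0 + np.exp(-(M @ W_z + H @ U_z)))  # update gate
    H_cand = np.tanh(M)                             # candidate state
    return (1 - z) * H + z * H_cand                 # gated interpolation

A = np.array([[0, 1], [1, 0]], dtype=float)
print(ggnn_step(A, np.eye(2)).round(3))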
Learning Graphical State Transitions
TLDR
This work proposes the Gated Graph Transformer Neural Network (GGTNN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation and can learn to construct and modify graphs in sophisticated ways based on textual input, as well as to use those graphs to produce a variety of outputs.
Tackling Over-pruning in Variational Autoencoders
TLDR
The epitomic variational autoencoder (eVAE) is proposed, which makes efficient use of model capacity, generalizes better than the VAE, and helps prevent inactive units, since each group is pressured to explain the data.
Neural Message Passing for Quantum Chemistry
TLDR
Using MPNNs, state-of-the-art results are demonstrated on an important molecular property prediction benchmark, and it is believed that future work should focus on datasets with larger molecules or more accurate ground-truth labels.
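In the framework's standard notation, message passing runs for T steps and is followed by a readout:

m_v^{t+1} = \sum_{w \in N(v)} M_t\big(h_v^t, h_w^t, e_{vw}\big), \qquad
h_v^{t+1} = U_t\big(h_v^t, m_v^{t+1}\big), \qquad
\hat{y} = R\big(\{h_v^T : v \in G\}\big),

where M_t, U_t, and R are learned message, update, and readout functions; the architectures compared in the paper are instances of this template.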
Grammar Variational Autoencoder
TLDR
Surprisingly, it is shown that not only does the model more often generate valid outputs, it also learns a more coherent latent space in which nearby points decode to similar discrete outputs.
…