Corpus ID: 235294221

Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions

@article{OBray2021EvaluationMF,
  title={Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions},
  author={Leslie O’Bray and Max Horn and Bastian Alexander Rieck and Karsten M. Borgwardt},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.01098}
}
Graph generative models are a highly active branch of machine learning. Given the steady development of new models of ever-increasing complexity, it is necessary to provide a principled way to evaluate and compare them. In this paper, we enumerate the desirable criteria for such a comparison metric and provide an overview of the status quo of graph generative model comparison in use today, which predominantly relies on the maximum mean discrepancy (MMD). We perform a systematic evaluation of… 
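The status quo the abstract refers to — comparing a set of generated graphs against a reference set via maximum mean discrepancy (MMD) over distributions of graph statistics such as degree histograms — can be sketched as follows. This is a minimal illustration, not the paper's exact protocol; the kernel choice, bandwidth, and function names here are assumptions.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel between two feature vectors (e.g. degree histograms).
    # The bandwidth sigma is an arbitrary illustrative choice.
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def mmd_squared(X, Y, kernel=gaussian_kernel):
    # Biased estimator of squared MMD between two sample sets of vectors.
    n, m = len(X), len(Y)
    kxx = sum(kernel(a, b) for a in X for b in X) / (n * n)
    kyy = sum(kernel(a, b) for a in Y for b in Y) / (m * m)
    kxy = sum(kernel(a, b) for a in X for b in Y) / (n * m)
    return kxx + kyy - 2 * kxy

def degree_histogram(adj, bins=8):
    # Normalised degree histogram of one graph, given as adjacency lists;
    # degrees >= bins-1 are clipped into the last bin.
    hist = [0.0] * bins
    for nbrs in adj:
        hist[min(len(nbrs), bins - 1)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Toy usage: triangle vs. path graph, each as adjacency lists.
triangle = [[1, 2], [0, 2], [0, 1]]
path = [[1], [0, 2], [1]]
X = [degree_histogram(triangle)]
Y = [degree_histogram(path)]
print(mmd_squared(X, X))  # ≈ 0.0 (identical sets)
print(mmd_squared(X, Y))  # > 0: degree distributions differ
```

In practice the same recipe is repeated with other statistics (clustering coefficients, orbit counts), which is exactly where the paper's systematic evaluation probes for pitfalls such as sensitivity to kernel and bandwidth choices.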
Citations

On Evaluation Metrics for Graph Generative Models
TLDR: This work studies existing GGM metrics and neural-network-based metrics emerging from generative models of images that use embeddings extracted from a task-specific network, and introduces several metrics based on the features extracted by an untrained random GNN.
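The "untrained random GNN" idea mentioned above can be sketched as a toy message-passing network with randomly initialised, never-trained weights; graph-level embeddings from such a network can then be compared between generated and reference sets. All dimensions, the aggregation scheme, and function names below are illustrative assumptions, not the cited paper's architecture.

```python
import random

def random_gnn_features(adj, dim=8, layers=2, seed=0):
    # Untrained message passing on a graph given as adjacency lists:
    # random node features, sum-aggregation over neighbours,
    # a random linear map per layer, ReLU, then mean pooling.
    rng = random.Random(seed)
    n = len(adj)
    h = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    W = [[[rng.gauss(0, 1) / dim ** 0.5 for _ in range(dim)]
          for _ in range(dim)] for _ in range(layers)]
    for l in range(layers):
        # Aggregate: self feature plus sum of neighbour features.
        agg = [[h[v][k] + sum(h[u][k] for u in adj[v]) for k in range(dim)]
               for v in range(n)]
        # Random linear transform followed by ReLU.
        h = [[max(0.0, sum(agg[v][j] * W[l][j][k] for j in range(dim)))
              for k in range(dim)] for v in range(n)]
    # Graph-level embedding: mean over node embeddings.
    return [sum(h[v][k] for v in range(n)) / n for k in range(dim)]
```

Because the weights are fixed by the seed, the extractor is deterministic and needs no training data, which is precisely what makes it attractive as a task-agnostic feature source for evaluation.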
Evaluating Graph Generative Models with Contrastively Learned Features
TLDR: It is demonstrated that Graph Substructure Networks (GSNs), which combine both approaches, are better at distinguishing the distances between graph datasets.
Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations
TLDR: A novel score-based generative model for graphs with a continuous-time framework is proposed that is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule, demonstrating the effectiveness of the system of SDEs in modeling the node-edge relationships.
GraphDCA - a Framework for Node Distribution Comparison in Real and Synthetic Graphs
We argue that when comparing two graphs, the distribution of node structural features is more informative than global graph statistics, which are often used in practice, especially to evaluate graph…

References

Showing 1–10 of 55 references
On Evaluation Metrics for Graph Generative Models
TLDR: This work studies existing GGM metrics and neural-network-based metrics emerging from generative models of images that use embeddings extracted from a task-specific network, and introduces several metrics based on the features extracted by an untrained random GNN.
GraphGen: A Scalable Approach to Domain-agnostic Labeled Graph Generation
TLDR: Extensive experiments on million-sized, real graph datasets show GraphGen to be 4 times faster on average than state-of-the-art techniques while being significantly better in quality across a comprehensive set of 11 different metrics.
GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models
TLDR: The experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50 times larger than previous deep models.
Graphgen-redux: A Fast and Lightweight Recurrent Model for Labeled Graph Generation
  • Marco Podda, D. Bacciu
  • 2021 International Joint Conference on Neural Networks (IJCNN)
TLDR: A novel graph preprocessing approach is introduced that processes the labeling information of both nodes and edges jointly and improves upon the generative performance of GraphGen across a wide range of datasets of chemical and social graphs.
Efficient Graph Generation with Graph Recurrent Attention Networks
TLDR: A new family of efficient and expressive deep generative models of graphs, called Graph Recurrent Attention Networks (GRANs), which better captures the auto-regressive conditioning between the already-generated and to-be-generated parts of the graph using Graph Neural Networks (GNNs) with attention.
Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation
TLDR: This work derives the exact joint probability over the graph and the node ordering of the sequential generation process, approximately marginalizes out the node orderings, and computes a lower bound on the log-likelihood using variational inference.
Permutation Invariant Graph Generation via Score-Based Generative Modeling
TLDR: A permutation-invariant approach to modeling graphs using the recent framework of score-based generative modeling, which designs a permutation-equivariant, multi-channel graph neural network to model the gradient of the data distribution at the input graph.
A Test of Relative Similarity For Model Selection in Generative Models
TLDR: A statistical test of relative similarity is introduced, which is used to determine which of two models generates samples that are significantly closer to a real-world reference dataset of interest.
Scalable Deep Generative Modeling for Sparse Graphs
TLDR: This work develops a novel autoregressive model, named BiGG, that exploits graph sparsity to avoid generating the full adjacency matrix, importantly reducing the graph generation time complexity to $O((n + m)\log n)$.
TG-GAN: Continuous-time Temporal Graph Deep Generative Models with Time-Validity Constraints
TLDR: The "Temporal Graph Generative Adversarial Network" (TG-GAN) is proposed, which can jointly generate the time, node, and edge information for truncated temporal walks via a novel recurrent model and a valid-time decoder, and significantly outperforms five benchmark methods in terms of efficiency and effectiveness.