The Gromov-Wasserstein distance between networks and stable network invariants

@article{Chowdhury2018TheGD,
  title={The Gromov-Wasserstein distance between networks and stable network invariants},
  author={Samir Chowdhury and Facundo M{\'e}moli},
  journal={arXiv preprint arXiv:1808.04337},
  year={2018}
}
We define a metric---the network Gromov-Wasserstein distance---on weighted, directed networks that is sensitive to the presence of outliers. In addition to proving its theoretical properties, we supply network invariants based on optimal transport that approximate this distance by means of lower bounds. We test these methods on a range of simulated network datasets and on a dataset of real-world global bilateral migration. For our simulations, we define a network generative model based on the…
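The abstract mentions optimal-transport-based network invariants that lower-bound the network Gromov-Wasserstein distance. In that spirit, here is a minimal NumPy sketch of one classical invariant of this kind: compare the (out-)eccentricity distributions of two networks with a 1-Wasserstein distance. The function names are illustrative, not the paper's, and the quantile computation assumes the node measures are uniform.

```python
import numpy as np

def out_eccentricity(C, p):
    # Expected edge weight from each node to the rest of the network,
    # under the node measure p.
    return C @ p

def eccentricity_lower_bound(C1, p1, C2, p2, grid=1000):
    # 1-Wasserstein distance between the two eccentricity distributions,
    # computed by comparing quantile functions on a shared grid.
    # NOTE: np.quantile treats samples as equally weighted, so this is
    # only correct when p1 and p2 are (close to) uniform. This is a
    # cheap invariant-based proxy, not the GW distance itself.
    e1 = out_eccentricity(C1, p1)
    e2 = out_eccentricity(C2, p2)
    qs = (np.arange(grid) + 0.5) / grid
    return np.abs(np.quantile(e1, qs) - np.quantile(e2, qs)).mean()
```

Shifting every edge weight of a network by a constant shifts each eccentricity by that constant, so the bound between a network and its shifted copy equals the shift; comparing a network with itself gives 0.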

Citations

Gromov-Wasserstein Learning for Graph Matching and Node Embedding
A novel Gromov-Wasserstein learning framework is proposed to jointly match (align) graphs and learn embedding vectors for the associated graph nodes; applied to matching problems in real-world networks, it demonstrates superior performance compared to alternative approaches.
Learning Graphons via Structured Gromov-Wasserstein Barycenters
It is shown that the cut distance of graphons can be relaxed to the Gromov-Wasserstein distance of their step functions; the proposed approach overcomes drawbacks of prior state-of-the-art methods and outperforms them on both synthetic and real-world data.
Gromov-Wasserstein Averaging in a Riemannian Framework
  • Samir Chowdhury, Tom Needham
  • Mathematics, Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2020
A theoretical framework is introduced for performing statistical tasks on the space of (possibly asymmetric) matrices with arbitrary entries and sizes under the lens of the Gromov-Wasserstein distance, and the Riemannian framework of GW distances developed by Sturm is translated into practical, implementable tools for network data analysis.
Sliced Gromov-Wasserstein
A novel OT discrepancy is defined that can deal with large-scale distributions via a slicing approach; it is demonstrated to tackle problems similar to GW while being several orders of magnitude faster to compute.
Scalable Gromov-Wasserstein Learning for Graph Partitioning and Matching
This method is the first attempt to make the Gromov-Wasserstein discrepancy applicable to large-scale graph analysis and to unify graph partitioning and matching into the same framework; it outperforms state-of-the-art graph partitioning and matching methods, achieving a trade-off between accuracy and efficiency.
Multiplex Embedding of Biological Networks Using Topological Similarity of Different Layers
Experimental results in the context of drug repositioning and drug-target prediction show that the embeddings computed by the resulting algorithm, Hattusha, consistently improve predictive accuracy over algorithms that do not take into account the topological similarity of different networks.
Gromov-Wasserstein Factorization Models for Graph Clustering
  • H. Xu
  • Computer Science, Mathematics
  • AAAI
  • 2020
An effective approximate algorithm is designed for learning this Gromov-Wasserstein factorization (GWF) model, unrolling loopy computations as stacked modules and computing gradients with backpropagation; it obtains encouraging results on clustering graphs.
Mapper Comparison with Wasserstein Metrics
An optimal-transport-based metric, called the Network Augmented Wasserstein Distance, is developed for evaluating distances between Mapper graphs; its value for model drift analysis is demonstrated by transforming the model drift problem into an anomaly detection problem over dynamic graphs.
Distributional Sliced Embedding Discrepancy for Incomparable Distributions
A novel approach for comparing two incomparable distributions is proposed, called the distributional sliced embedding (DSE) discrepancy, which hinges on distributional slicing, embeddings, and computing the closed-form Wasserstein distance between the sliced distributions.
Partial Gromov-Wasserstein Learning for Partial Graph Matching
A partial Gromov-Wasserstein learning framework is proposed for partially matching two graphs; it fuses the partial Gromov-Wasserstein distance and the partial Wasserstein distance as the objective, and updates the partial transport map and the node embedding in an alternating fashion.

References

Showing 1-10 of 62 references
Distances and Isomorphism between Networks and the Stability of Network Invariants
The theoretical foundations are developed for a network distance that has recently been applied to various subfields of topological data analysis, namely persistent homology and hierarchical clustering, and easily computable lower bounds are given that are effective in distinguishing between networks.
Gromov-Wasserstein Averaging of Kernel and Distance Matrices
This paper presents a new technique for computing the barycenter of a set of distance or kernel matrices, which define the interrelationships between points sampled from individual domains, and provides a fast iterative algorithm for the resulting nonconvex optimization problem.
Optimal Transport for structured data with application on graphs
A new transportation distance is considered that minimizes a total cost of transporting probability masses and is consequently called Fused Gromov-Wasserstein (FGW); results are shown on a graph classification task, where the method outperforms both graph kernels and deep graph convolutional networks.
Optimal Transport for structured data
This paper proposes a new optimal transport distance, called the Fused Gromov-Wasserstein distance, capable of leveraging both structural and feature information by combining both views, and proves its metric properties over very general manifolds.
Persistent Path Homology of Directed Networks
Stability of PPH is proved by utilizing a separate theory of homotopy of digraphs that is compatible with path homology, and an algorithm is derived showing that, over field coefficients, computing PPH requires the same worst-case running time as standard persistent homology.
Gromov–Wasserstein Distances and the Metric Approach to Object Matching
  • F. Mémoli
  • Mathematics, Computer Science
  • Found. Comput. Math.
  • 2011
This paper discusses modifications of the ideas behind the Gromov–Hausdorff distance with the goal of modeling and tackling the practical problems of object matching and comparison, and proves explicit lower bounds for the proposed distance that involve many of the invariants previously reported by researchers.
Sinkhorn Distances: Lightspeed Computation of Optimal Transport
This work smooths the classic optimal transport problem with an entropic regularization term, and shows that the resulting optimum is also a distance that can be computed through Sinkhorn's matrix scaling algorithm at a speed several orders of magnitude faster than that of transport solvers.
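The matrix-scaling idea this entry describes fits in a few lines of NumPy. The sketch below is a minimal illustration, not the paper's implementation; the function name, default regularization, and iteration count are illustrative choices, and very small `reg` values will underflow the Gibbs kernel without log-domain stabilization.

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iters=500):
    # Entropically regularized OT between histograms a and b with cost
    # matrix M: alternately rescale the rows and columns of the Gibbs
    # kernel K = exp(-M / reg) until the coupling's marginals match.
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # regularized optimal coupling
    return P, float((P * M).sum())    # coupling and its transport cost
```

Each iteration costs one matrix-vector product per marginal, which is what makes the method so much faster than exact transport solvers on large problems.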
Edge Weight Prediction in Weighted Signed Networks
TLDR
This paper proposes two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a nodes captures how fair the node is in rating other nodes' likeability or trust level. Expand
Interpolating between Optimal Transport and MMD using Sinkhorn Divergences
This paper studies the Sinkhorn divergences, a family of geometric divergences that interpolates between MMD and OT, and provides theoretical guarantees for positivity, convexity, and metrization of convergence in law.
Community detection and stochastic block models: recent developments
  • E. Abbe
  • Mathematics, Computer Science
  • J. Mach. Learn. Res.
  • 2017
The recent developments that establish the fundamental limits for community detection in the stochastic block model are surveyed, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery.