Distance Metric Learning for Graph Structured Data

@article{Yoshida2021DistanceML,
  title={Distance Metric Learning for Graph Structured Data},
  author={Tomoki Yoshida and Ichiro Takeuchi and Masayuki Karasuyama},
  journal={Mach. Learn.},
  year={2021},
  volume={110},
  pages={1765-1811}
}
Graphs are versatile tools for representing structured data, and a variety of machine learning methods have been studied for graph data analysis. Although many of these learning methods depend on measuring differences between input graphs, defining an appropriate distance metric for graphs remains a controversial issue. We therefore propose a supervised distance metric learning method for the graph classification problem. Our method, named interpretable graph metric learning (IGML), …
Citations

VEDesc: vertex-edge constraint on local learned descriptors
TLDR: The core idea is to design a triplet loss function with a vertex-edge constraint (VEC), which takes the correlation between the two descriptors of a patch into account, together with an exponential scheme that reduces the influence of non-matching descriptors by narrowing the difference between the long and short sides.
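For reference, below is a minimal sketch of a standard margin-based triplet loss of the kind described above; the vertex-edge constraint and the exponential weighting specific to VEDesc are not reproduced, and the function names are illustrative assumptions.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard margin-based triplet loss (illustrative sketch only).
    # Pulls the anchor towards the matching (positive) descriptor and pushes
    # it away from the non-matching (negative) one by at least `margin`.
    d_pos = F.pairwise_distance(anchor, positive)   # distance to matching descriptor
    d_neg = F.pairwise_distance(anchor, negative)   # distance to non-matching descriptor
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()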

References

Showing 1-10 of 88 references
Learning Interpretable Metric between Graphs: Convex Formulation and Computation with Graph Mining
TLDR: This work proposes a novel supervised metric learning method for a subgraph-based distance, called interpretable graph metric learning (IGML), which optimizes the distance function in such a way that a small number of important subgraphs can be adaptively selected.
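As a rough illustration of the subgraph-based idea, one can picture each graph mapped to a vector of subgraph-occurrence features, with the metric given by sparse nonnegative weights over those features. The sketch below enumerates candidate patterns explicitly, which IGML avoids via graph mining; the function names and the indicator feature map are assumptions for illustration, not the authors' formulation.

import numpy as np
import networkx as nx
from networkx.algorithms import isomorphism

def subgraph_indicator_features(graph, subgraph_patterns):
    # Illustrative feature map: 1 if the pattern occurs as a subgraph, else 0.
    # IGML handles the (exponentially large) pattern space implicitly via
    # graph mining rather than enumerating patterns as done here.
    return np.array(
        [float(isomorphism.GraphMatcher(graph, p).subgraph_is_isomorphic())
         for p in subgraph_patterns])

def weighted_subgraph_distance(g1, g2, weights, subgraph_patterns):
    # Squared distance in subgraph-feature space with nonnegative weights.
    # A sparse weight vector means only a few subgraphs contribute, which is
    # what makes the learned metric interpretable.
    diff = (subgraph_indicator_features(g1, subgraph_patterns)
            - subgraph_indicator_features(g2, subgraph_patterns))
    return float(np.sum(weights * diff ** 2))
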
graph2vec: Learning Distributed Representations of Graphs
TLDR: This work proposes a neural embedding framework named graph2vec to learn data-driven distributed representations of arbitrary-sized graphs; the learned representations achieve significant improvements in classification and clustering accuracy over substructure representation learning approaches and are competitive with state-of-the-art graph kernels.
Optimal Transport for structured data with application on graphs
TLDR: A new transportation distance that minimizes a total cost of transporting probability masses, called the Fused Gromov-Wasserstein (FGW) distance, is introduced; on a graph classification task, the method outperforms both graph kernels and deep graph convolutional networks.
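For orientation, the FGW distance between two attributed graphs, with node features a_i and b_j, structure matrices C_1 and C_2, and node weights mu and nu, is commonly written as the optimal-transport problem below; this is the standard formulation from the FGW literature, restated here as a sketch rather than quoted from the paper.

\mathrm{FGW}_{q,\alpha}(G_1, G_2) =
\min_{\pi \in \Pi(\mu,\nu)}
\sum_{i,j,k,l}
\Bigl[ (1-\alpha)\, d(a_i, b_j)^q
      + \alpha\, \bigl| C_1(i,k) - C_2(j,l) \bigr|^q \Bigr]
\pi_{i,j}\, \pi_{k,l},

where \Pi(\mu,\nu) is the set of couplings with marginals \mu and \nu, d(\cdot,\cdot) compares node features, and \alpha trades off the feature (Wasserstein) cost against the structure (Gromov-Wasserstein) cost.
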
Graph Classification using Structural Attention
TLDR: This work presents a novel RNN model, called the Graph Attention Model (GAM), that processes only a portion of the graph by adaptively selecting a sequence of "informative" nodes, and shows that the proposed method is competitive with various well-known methods in graph classification.
Automatic learning of cost functions for graph edit distance
TLDR: A method is proposed to automatically learn cost functions from a labeled sample set of graphs: an Expectation-Maximization algorithm models graph variations as structural distortion operations and derives the desired cost functions.
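As a point of reference, graph edit distance is parameterized by per-operation costs. The sketch below plugs hand-specified substitution costs into networkx's graph_edit_distance; the EM-based learning of these costs described above is not shown, and the particular cost values are assumptions.

import networkx as nx

def node_subst_cost(n1_attrs, n2_attrs):
    # Cost of relabelling a node: free if labels agree, 1.0 otherwise.
    # In the referenced work such costs are learned from labeled graphs via EM
    # rather than fixed by hand as done here.
    return 0.0 if n1_attrs.get("label") == n2_attrs.get("label") else 1.0

def edge_subst_cost(e1_attrs, e2_attrs):
    # Cost of substituting an edge: free if edge labels agree, 0.5 otherwise.
    return 0.0 if e1_attrs.get("label") == e2_attrs.get("label") else 0.5

g1 = nx.Graph([(0, 1), (1, 2)])
g2 = nx.Graph([(0, 1), (1, 2), (2, 0)])
nx.set_node_attributes(g1, "C", "label")
nx.set_node_attributes(g2, "C", "label")

ged = nx.graph_edit_distance(
    g1, g2,
    node_subst_cost=node_subst_cost,
    edge_subst_cost=edge_subst_cost,
)
print(ged)
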
A Fast Kernel for Attributed Graphs
TLDR: A fast graph kernel, the descriptor matching (DM) kernel, is proposed for graphs with both categorical and numerical attributes, and shows promising performance in both accuracy and efficiency.
Hunt For The Unique, Stable, Sparse And Fast Feature Learning On Graphs
TLDR: Through extensive experiments, it is shown that a simple SVM-based classification algorithm, driven by the proposed family of graph spectral distances and the graph feature representations based on them, significantly outperforms more sophisticated state-of-the-art algorithms on unlabeled-node datasets in terms of both accuracy and speed.
Shortest-path kernels on graphs
K. Borgwardt, H. Kriegel. Fifth IEEE International Conference on Data Mining (ICDM'05), 2005.
TLDR: This work proposes graph kernels based on shortest paths, which are computable in polynomial time, retain expressivity and are still positive definite, and which show significantly higher classification accuracy than walk-based kernels.
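A minimal sketch of the shortest-path kernel idea for unlabeled graphs, using an equality (Dirac) kernel on path lengths; the full kernel in the paper also compares endpoint labels, which is omitted here.

import networkx as nx
from collections import Counter

def shortest_path_length_histogram(graph):
    # All-pairs shortest-path lengths, kept as a multiset of lengths
    # between distinct node pairs.
    lengths = Counter()
    for source, dists in nx.all_pairs_shortest_path_length(graph):
        for target, dist in dists.items():
            if source != target:
                lengths[dist] += 1
    return lengths

def shortest_path_kernel(g1, g2):
    # Dirac kernel on path lengths: count pairs of shortest paths, one from
    # each graph, that have exactly the same length.
    h1 = shortest_path_length_histogram(g1)
    h2 = shortest_path_length_histogram(g2)
    return sum(h1[length] * h2[length] for length in h1.keys() & h2.keys())

print(shortest_path_kernel(nx.path_graph(4), nx.cycle_graph(5)))
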
Efficient Dual Approach to Distance Metric Learning
TLDR: A significantly more efficient and scalable approach to the metric learning problem, based on the Lagrange dual formulation of the problem, which allows much larger Mahalanobis metric learning problems to be solved.
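For context, the object being learned here is a Mahalanobis metric, i.e. a positive semidefinite matrix M with d_M(x, y)^2 = (x - y)^T M (x - y). The sketch below only evaluates such a metric and shows a PSD projection step, a typical bottleneck that motivates more scalable formulations; the Lagrange-dual solver proposed in the paper is not reproduced.

import numpy as np

def mahalanobis_sq(x, y, M):
    # Squared Mahalanobis distance under a PSD matrix M.
    diff = x - y
    return float(diff @ M @ diff)

def project_to_psd(M):
    # Project a symmetric matrix onto the PSD cone by clipping
    # negative eigenvalues to zero.
    M = (M + M.T) / 2.0
    eigvals, eigvecs = np.linalg.eigh(M)
    return (eigvecs * np.clip(eigvals, 0.0, None)) @ eigvecs.T

M = project_to_psd(np.array([[2.0, 1.5], [1.5, 0.5]]))
print(mahalanobis_sq(np.array([1.0, 0.0]), np.array([0.0, 1.0]), M))
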
A Short Survey of Recent Advances in Graph Matching
TLDR: The aim is to provide a systematic and compact framework regarding recent developments and the current state of the art in graph matching.