word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings of Structured Data

@article{Grohe2020word2vecNG,
  title={word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings of Structured Data},
  author={Martin Grohe},
  journal={Proceedings of the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems},
  year={2020}
}
  • Published 27 March 2020
  • Computer Science
Vector representations of graphs and relational structures, whether hand-crafted feature vectors or learned representations, enable us to apply standard data analysis and machine learning techniques to the structures. A wide range of methods for generating such embeddings have been studied in the machine learning and knowledge representation literature. However, vector embeddings have received relatively little attention from a theoretical point of view. Starting with a survey of embedding… 

Citations

On the Surprising Behaviour of node2vec

This work focuses on node2vec, one of the most prominent graph embedding schemes, and analyses its embedding quality from multiple perspectives, indicating that embedding quality is unstable with respect to parameter choices.

ripple2vec: Node Embedding with Ripple Distance of Structures

Experimental results on real datasets show that the proposed node embedding method, named ripple2vec, outperforms state-of-the-art methods in node clustering and node classification, and is competitive with other methods in link prediction.

Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings

The experimental study confirms that the local algorithms, in both kernel and neural architectures, lead to vastly reduced computation times and prevent overfitting, and that the kernel version establishes a new state of the art for graph classification on a wide range of benchmark datasets.
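
For orientation, here is a minimal Python sketch of classic 1-dimensional Weisfeiler-Leman colour refinement, the primitive these kernels build on; the paper's local, sparse higher-order variants generalize this scheme and are not implemented here. The toy graph is made up.

```python
def wl_refine(adj, rounds=3):
    """Classic 1-WL colour refinement: repeatedly recolour each node by
    its own colour plus the multiset of its neighbours' colours."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        # Compress each distinct signature to a small integer colour.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# Toy example: a path on four vertices; endpoints get one colour,
# inner vertices another.
print(wl_refine({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
```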

Exploiting Class Labels to Boost Performance on Embedding-based Text Classification

A weighting scheme, Term Frequency-Category Ratio (TF-CR), is proposed, which weights high-frequency, category-exclusive words higher when computing word embeddings, leading to improved performance scores over the well-known weighting schemes TF-IDF and KLD, as well as over the absence of a weighting scheme, in most cases.

Creativity Embedding: A Vector to Characterise and Classify Plausible Triples in Deep Learning NLP Models

A creativity embedding of a text is defined, based on four self-assessment creativity metrics (diversity, novelty, serendipity, and magnitude), knowledge graphs, and neural networks.

Dynamic Database Embeddings with FoRWaRD

FoRWaRD is comparable and sometimes superior to state-of-the-art embeddings in the static (traditional) setting; in the dynamic setting, FoRWaRD outperforms the alternatives consistently and often considerably, and suffers only a mild reduction in quality even when the database consists of mostly newly inserted tuples.

TF-CR: Weighting Embeddings for Text Classification

A novel weighting scheme, Term Frequency-Category Ratio (TF-CR), is introduced, which weights high-frequency, category-exclusive words higher when computing word embeddings, leading to improved performance scores over existing weighting schemes, with a performance gap that grows with the size of the training data.
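
A rough illustration only (my reading of the one-sentence summary above, not necessarily the paper's exact formula): a TF-CR-style weight can multiply a word's term frequency within a category by its category ratio, the fraction of all occurrences of the word that fall inside that category. The helper name tf_cr and the toy documents are made up.

```python
from collections import Counter

def tf_cr(docs_by_category):
    """TF-CR-style weights: TF(w, c) * CR(w, c), where CR(w, c) is the
    share of w's total occurrences that appear in category c."""
    per_cat = {c: Counter(w for d in docs for w in d.split())
               for c, docs in docs_by_category.items()}
    total = Counter()
    for counts in per_cat.values():
        total.update(counts)
    weights = {}
    for c, counts in per_cat.items():
        cat_size = sum(counts.values())
        weights[c] = {w: (n / cat_size) * (n / total[w])
                      for w, n in counts.items()}
    return weights

weights = tf_cr({"sports": ["goal match goal", "match referee"],
                 "politics": ["vote match", "vote law"]})
# "goal" is frequent and exclusive to sports, so it scores highest there.
print(weights["sports"])
```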

Network representation learning based on social similarities

This paper investigates a novel social-similarity-based method for learning network representations that maintains both the structural similarity of nodes and their domain similarity, and outperforms state-of-the-art solutions.

Scaling up graph homomorphism for classification via sampling

This paper proposes a high-performance implementation of a simple sampling algorithm which computes additive approximations of homomorphism densities and shows in experiments on synthetic data that this algorithm scales to very large graphs when implemented with Bloom filters.

Graph Homomorphism Features: Why Not Sample?

This work-in-progress paper attempts to make this methodology scalable by obtaining additive approximations to graph homomorphism densities via a simple sampling algorithm, and shows in experiments that these approximate homomorphism densities perform as well as exact homomorphism numbers on standard graph classification datasets.
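
A minimal sketch of the sampling idea (not either paper's implementation, and without the Bloom-filter engineering): the homomorphism density t(F, G) is the probability that a uniformly random map V(F) -> V(G) sends every edge of F to an edge of G, so a Monte Carlo average gives an additive approximation with high probability. The toy graphs are made up.

```python
import random

def hom_density_estimate(F_edges, F_nodes, G_adj, samples=100_000):
    """Estimate t(F, G): the fraction of uniformly random maps
    V(F) -> V(G) that preserve every edge of F."""
    G_nodes = list(G_adj)
    hits = 0
    for _ in range(samples):
        phi = {v: random.choice(G_nodes) for v in F_nodes}
        if all(phi[v] in G_adj[phi[u]] for u, v in F_edges):
            hits += 1
    return hits / samples

# Toy check: the 4-cycle is bipartite, so its triangle density is 0.
triangle_edges, triangle_nodes = [(0, 1), (1, 2), (2, 0)], [0, 1, 2]
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(hom_density_estimate(triangle_edges, triangle_nodes, c4))
```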
...

References

Showing 1-10 of 117 references

Translating Embeddings for Modeling Multi-relational Data

TransE is proposed, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities, which proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases.
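
As a toy illustration of the translation idea (for a true triple (h, r, t), the model wants h + r ≈ t), not the paper's code: the entity names, dimension, and random vectors below are placeholders, and in the real method the embeddings are trained to minimize this margin loss over a whole knowledge base.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimension (illustrative choice)

# Placeholder entity and relation embeddings.
entities = {name: rng.normal(size=dim)
            for name in ["paris", "france", "berlin", "germany"]}
relations = {"capital_of": rng.normal(size=dim)}

def score(h, r, t):
    """TransE dissimilarity: smaller ||h + r - t|| means more plausible."""
    return np.linalg.norm(entities[h] + relations[r] - entities[t])

# Margin-based ranking loss for one true triple vs. one corrupted triple.
margin = 1.0
loss = max(0.0, margin + score("paris", "capital_of", "france")
                       - score("berlin", "capital_of", "france"))
```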

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
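
A minimal numpy sketch of one GraphSAGE-style layer with a mean aggregator, on a made-up toy graph; the real framework trains the weight matrix, stacks several layers, samples fixed-size neighbourhoods, and offers other aggregators.

```python
import numpy as np

rng = np.random.default_rng(0)
adj = {0: [1, 2], 1: [0], 2: [0, 1]}   # toy graph (assumed)
feats = rng.normal(size=(3, 4))        # node features, dimension 4
W = rng.normal(size=(8, 5)) * 0.1      # trained in the real model

def sage_layer(v):
    """Mean aggregator: concatenate a node's features with the mean of
    its neighbours' features, then apply a linear map and a ReLU."""
    neigh_mean = feats[adj[v]].mean(axis=0)
    h = np.concatenate([feats[v], neigh_mean]) @ W
    return np.maximum(h, 0)

embeddings = np.stack([sage_layer(v) for v in sorted(adj)])
```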

graph2vec: Learning Distributed Representations of Graphs

This work proposes a neural embedding framework named graph2vec to learn data-driven distributed representations of arbitrarily sized graphs, which achieves significant improvements in classification and clustering accuracies over substructure representation learning approaches and is competitive with state-of-the-art graph kernels.

node2vec: Scalable Feature Learning for Networks

In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
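
The second-order bias can be sketched in a few lines; this is a sketch under assumptions (toy graph, arbitrary p and q, walk seeded with an existing edge rather than node2vec's uniform first step), not the authors' implementation.

```python
import random

def node2vec_step(graph, prev, cur, p=1.0, q=2.0):
    """One step of node2vec's biased second-order walk.
    graph: dict mapping each node to its set of neighbours."""
    neighbors = list(graph[cur])
    weights = []
    for x in neighbors:
        if x == prev:              # returning to the previous node
            weights.append(1.0 / p)
        elif x in graph[prev]:     # staying near prev (BFS-like)
            weights.append(1.0)
        else:                      # moving outward (DFS-like)
            weights.append(1.0 / q)
    return random.choices(neighbors, weights=weights)[0]

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
walk = [0, 1]
for _ in range(8):
    walk.append(node2vec_step(graph, walk[-2], walk[-1]))
print(walk)  # one biased walk; such walks are then fed to skip-gram
```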

Deep Neural Networks for Learning Graph Representations

A novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph's structural information directly, and which outperforms other state-of-the-art models in such tasks.

GraRep: Learning Graph Representations with Global Structural Information

A novel model for learning vertex representations of weighted graphs that integrates global structural information of the graph into the learning process and significantly outperforms other state-of-the-art methods in such tasks.
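
A rough numpy sketch in the spirit of GraRep, not the paper's exact pipeline: build k-step transition matrices, take a shifted log matrix with negative entries clipped, factorize each with a truncated SVD, and concatenate the results. The toy adjacency matrix, dimensions, and clipping constant are assumptions.

```python
import numpy as np

def grarep_like(A, k=2, dim=2):
    """Concatenate per-step representations from SVDs of log transition matrices."""
    n = len(A)
    P = A / A.sum(axis=1, keepdims=True)   # one-step transition matrix
    reps, Pk = [], np.eye(n)
    for _ in range(k):
        Pk = Pk @ P                        # k-step transition probabilities
        M = np.log(np.maximum(Pk, 1e-9)) - np.log(1.0 / n)
        M = np.maximum(M, 0)               # clip negative entries
        U, S, _ = np.linalg.svd(M)
        reps.append(U[:, :dim] * np.sqrt(S[:dim]))
    return np.concatenate(reps, axis=1)

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0],
              [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = grarep_like(A)  # one row of global-structure features per vertex
```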

Representation Learning on Graphs: Methods and Applications

A conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk-based algorithms, and graph neural networks, is provided.

A Three-Way Model for Collective Learning on Multi-Relational Data

This work presents a novel approach to relational learning based on the factorization of a three-way tensor, which is able to perform collective learning via the latent components of the model, and provides an efficient algorithm to compute the factorization.
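
The three-way (RESCAL) factorization X_k ≈ A R_k A^T scores a triple (i, k, j) as a_i^T R_k a_j, with one embedding per entity and one mixing matrix per relation. A minimal sketch with random placeholder factors; the real model learns A and the R_k, e.g. by alternating least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 2, 3

# Placeholder factors; the real model learns these from the observed tensor.
A = rng.normal(size=(n_entities, rank))         # one latent vector per entity
R = rng.normal(size=(n_relations, rank, rank))  # one mixing matrix per relation

def rescal_score(i, k, j):
    """Predicted plausibility of the triple (entity i, relation k, entity j)."""
    return A[i] @ R[k] @ A[j]

print(rescal_score(0, 1, 3))
```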

Variational Graph Auto-Encoders

The variational graph auto-encoder (VGAE) is introduced, a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) that can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.
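
A minimal sketch of VGAE's latent step and inner-product decoder, with the GCN encoder elided; in the real model, mu and logvar come from two graph convolutions over the node features, and all the values below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 4, 2  # nodes and latent dimension (illustrative)

# Placeholders standing in for the GCN encoder's outputs.
mu = rng.normal(size=(n, dim))
logvar = rng.normal(size=(n, dim))

# Reparameterization trick: sample latent node vectors z.
z = mu + np.exp(0.5 * logvar) * rng.normal(size=(n, dim))

# Inner-product decoder: sigmoid(z z^T) gives edge probabilities.
A_hat = 1.0 / (1.0 + np.exp(-(z @ z.T)))
```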

DeepWalk: online learning of social representations

DeepWalk is an online learning algorithm which builds useful incremental results and is trivially parallelizable, which makes it suitable for a broad class of real-world applications such as network classification and anomaly detection.
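
A minimal sketch of the pipeline, assuming gensim is installed: generate uniform random walks from each node and feed them to skip-gram as if they were sentences. The toy graph and hyperparameters are arbitrary.

```python
import random
from gensim.models import Word2Vec  # assumes gensim >= 4

def random_walk(graph, start, length=10):
    """Uniform random walk; DeepWalk treats each walk as a 'sentence'."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(list(graph[walk[-1]])))
    return [str(v) for v in walk]

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
walks = [random_walk(graph, v) for v in graph for _ in range(20)]

# Skip-gram (sg=1) over the walks yields one vector per node.
model = Word2Vec(walks, vector_size=16, window=3, min_count=0, sg=1)
vec = model.wv["0"]  # learned embedding of node 0
```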
...