Boosting Graph Embedding on a Single GPU

@article{Aljundi2021BoostingGE,
  title={Boosting Graph Embedding on a Single GPU},
  author={Amro Alabsi Aljundi and Taha Atahan Akyildiz and Kamer Kaya},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.10049}
}
Graphs are ubiquitous, and they can model the unique characteristics and complex relations of real-life systems. Although using machine learning (ML) on graphs is promising, their raw representation is not suitable for ML algorithms. Graph embedding represents each node of a graph as a d-dimensional vector, a form more amenable to ML tasks. However, the embedding process is expensive, and CPU-based tools do not scale to real-world graphs. In this work, we present GOSH, a GPU-based tool for…
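To make the setting concrete, the sketch below shows the core of a typical embedding trainer of the kind the abstract describes: each node owns a d-dimensional vector, and SGD with negative sampling pulls connected nodes together while pushing random node pairs apart. This is an illustrative toy in NumPy, not GOSH's actual GPU implementation; the function name and hyperparameters here are our own.

```python
import numpy as np

def train_embeddings(edges, num_nodes, d=8, lr=0.05, neg=3, epochs=50, seed=0):
    """Toy SGD trainer with negative sampling: raises the dot product of
    embeddings for connected node pairs and lowers it for random pairs."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=1.0 / d, size=(num_nodes, d))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for _ in range(epochs):
        for u, v in edges:
            # One positive sample (a real neighbor) plus `neg` negatives.
            samples = [(v, 1.0)] + [(int(rng.integers(num_nodes)), 0.0)
                                    for _ in range(neg)]
            for w, label in samples:
                if w == u:
                    continue  # skip degenerate self pairs
                grad = lr * (label - sigmoid(emb[u] @ emb[w]))
                emb[u], emb[w] = emb[u] + grad * emb[w], emb[w] + grad * emb[u]
    return emb

# Example: embed a 4-node path graph; adjacent nodes end up more similar.
emb = train_embeddings([(0, 1), (1, 2), (2, 3)], num_nodes=4)
print(emb.shape)  # (4, 8)
```

Tools like GOSH and GraphVite parallelize exactly this kind of update across GPU threads, which is what makes the synchronization and scaling questions discussed in the references below nontrivial.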

References

Showing 1-10 of 30 references
GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding
TLDR: This paper proposes GraphVite, a high-performance CPU-GPU hybrid system for training node embeddings, by co-optimizing the algorithm and the system, and proposes an efficient collaboration strategy to further reduce the synchronization cost between CPUs and GPUs.
Understanding Coarsening for Embedding Large-Scale Graphs
TLDR: Experiments with a state-of-the-art, fast graph embedding tool show that there is an interplay between the coarsening decisions taken and the embedding quality, and that the cost of embedding decreases significantly when coarsening is employed.
EDGES: An Efficient Distributed Graph Embedding System on GPU Clusters
TLDR: This article develops an efficient distributed graph embedding system called EDGES, which can utilize GPU clusters to train large graph models with billions of nodes and trillions of edges using data and model parallelism, and proposes a novel dynamic partition architecture for training these models.
MILE: A Multi-Level Framework for Scalable Graph Embedding
TLDR: Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (by an order of magnitude) of graph embedding while generating embeddings of better quality for the task of node classification.
PyTorch-BigGraph: A Large-scale Graph Embedding System
TLDR: PyTorch-BigGraph (PBG), an embedding system that incorporates several modifications to traditional multi-relation embedding systems that allow it to scale to graphs with billions of nodes and trillions of edges, is presented.
Graph Embedding Techniques, Applications, and Performance: A Survey
TLDR: A comprehensive and structured analysis of various graph embedding techniques proposed in the literature, and the open-source Python library, named GEM (Graph Embedding Methods, available at https://github.com/palash1992/GEM), which provides all presented algorithms within a unified interface to foster and facilitate research on the topic.
ProNE: Fast and Scalable Network Representation Learning
TLDR: ProNE is a fast, scalable, and effective model whose single-thread version is 10–400× faster than efficient network embedding benchmarks with 20 threads, including LINE, DeepWalk, node2vec, GraRep, and HOPE.
VERSE: Versatile Graph Embeddings from Similarity Measures
TLDR: VERtex Similarity Embeddings (VERSE), a simple, versatile, and memory-efficient method that derives graph embeddings explicitly calibrated to preserve the distributions of a selected vertex-to-vertex similarity measure, is proposed.
Deep Neural Networks for Learning Graph Representations
TLDR: A novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph's structural information directly, and which outperforms other state-of-the-art models on such tasks.
HARP: Hierarchical Representation Learning for Networks
TLDR: HARP is a general meta-strategy to improve all of the state-of-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec, and it is demonstrated that applying HARP's hierarchical paradigm yields improved implementations for all three of these methods; a sketch of the shared coarsen-then-embed idea follows this list.
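Several of the references above (MILE, HARP, and the coarsening study) share a multi-level paradigm: repeatedly shrink the graph, embed the smallest version cheaply, then project the embedding back and refine it at each finer level. The sketch below shows one matching-based coarsening step under that paradigm; it is a minimal illustration with invented names, not the algorithm of any specific paper.

```python
def coarsen_once(adj):
    """One level of matching-based coarsening: greedily merge each
    unmatched node with one unmatched neighbor, producing a smaller
    graph plus a fine-to-coarse node mapping."""
    n = len(adj)
    match = [-1] * n  # match[u] = coarse (super-node) id assigned to u
    num_coarse = 0
    for u in range(n):
        if match[u] != -1:
            continue
        # Merge u with its first unmatched neighbor; if none, u stays alone.
        partner = next((v for v in adj[u] if match[v] == -1 and v != u), u)
        match[u] = match[partner] = num_coarse
        num_coarse += 1
    # Re-map every fine edge onto the coarse super-nodes.
    coarse_adj = [set() for _ in range(num_coarse)]
    for u in range(n):
        for v in adj[u]:
            if match[u] != match[v]:
                coarse_adj[match[u]].add(match[v])
    return [sorted(s) for s in coarse_adj], match

# Example: the 4-node path 0-1-2-3 collapses into two connected super-nodes.
coarse, mapping = coarsen_once([[1], [0, 2], [1, 3], [2]])
print(coarse, mapping)  # [[1], [0]] [0, 0, 1, 1]
```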