# Graph Traversal with Tensor Functionals: A Meta-Algorithm for Scalable Learning

```bibtex
@article{Markowitz2021GraphTW,
  title   = {Graph Traversal with Tensor Functionals: A Meta-Algorithm for Scalable Learning},
  author  = {Elan Markowitz and Keshav Balasubramanian and Mehrnoosh Mirtaheri and Sami Abu-El-Haija and Bryan Perozzi and Greg Ver Steeg and A. G. Galstyan},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2102.04350}
}
```

Graph Representation Learning (GRL) methods have impacted fields from chemistry to social science. However, their algorithmic implementations are specialized to specific use cases, e.g., message-passing methods are run differently from node-embedding ones. Despite their apparent differences, all of these methods utilize the graph structure, and therefore their learning can be approximated with stochastic graph traversals. We propose Graph Traversal via Tensor Functionals (GTTF), a unifying meta…
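As an illustration of the stochastic graph traversals the abstract refers to, a minimal uniform random walk over an adjacency list might look like the following sketch (the toy graph and function name are illustrative, not taken from the paper):

```python
import random

def random_walk(adj, start, length, rng=random.Random(0)):
    """Sample a walk of up to `length` steps by repeatedly
    picking a uniformly random neighbor of the current node."""
    walk = [start]
    for _ in range(length):
        neighbors = adj[walk[-1]]
        if not neighbors:  # dead end: stop the walk early
            break
        walk.append(rng.choice(neighbors))
    return walk

# toy undirected graph: edges 0-1, 0-2, 1-2, 2-3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walk = random_walk(adj, start=0, length=4)
```

Message-passing and node-embedding methods alike can be driven by such sampled traversals, which is the observation GTTF builds on.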

## 14 Citations

### Implicit SVD for Graph Representation Learning

- Computer Science, NeurIPS
- 2021

This paper designs a framework that computes the SVD of implicitly defined matrices and applies it to several GRL tasks, deriving a linear approximation of a state-of-the-art model that shows competitive empirical test performance on graphs such as article-citation and biological-interaction networks.

### GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings

- Computer Science, ICML
- 2021

GNNAutoScale (GAS), a framework for scaling arbitrary message-passing GNNs to large graphs, is presented; it is both fast and memory-efficient, learns expressive node representations, closely matches the performance of non-scaling counterparts, and reaches state-of-the-art performance on large-scale graphs.

### Fast Graph Learning with Unique Optimal Solutions

- Computer Science, ArXiv
- 2021

This work proposes efficient GRL methods that optimize convexified objectives with known closed-form solutions, achieving competitive or state-of-the-art performance on popular GRL tasks while providing orders-of-magnitude speedups.

### DIGRAC: Digraph Clustering with Flow Imbalance

- Computer Science, ArXiv
- 2021

A graph neural network framework with a novel scalable Directed Mixed Path Aggregation (DIMPA) scheme to obtain node embeddings for directed networks in a self-supervised manner, including a novel probabilistic imbalance loss.

### SSSNET: Semi-Supervised Signed Network Clustering

- Computer Science, SDM
- 2022

A novel probabilistic balanced normalized cut loss for training nodes in a GNN framework for semi-supervised signed network clustering, called SSSNET, which has node clustering as main focus, with an emphasis on polarization effects arising in networks.

### DIGRAC: Digraph Clustering Based on Flow Imbalance

- Computer Science
- 2021

DIGRAC optimizes directed flow imbalance for clustering without requiring the label supervision that existing GNN methods need, and can naturally incorporate node features, unlike existing spectral methods.

### TF-GNN: Graph Neural Networks in TensorFlow

- Computer Science, ArXiv
- 2022

The TF-GNN data model, its Keras modeling API, and relevant capabilities such as graph sampling, distributed training, and accelerator support are described.

### StATIK: Structure and Text for Inductive Knowledge Graph Completion

- Computer Science, NAACL-HLT
- 2022

StATIK uses language models to extract semantic information from text descriptions and message-passing neural networks to capture structural information, achieving state-of-the-art results on three challenging inductive benchmarks.

### EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression

- Computer Science
- 2022

This work proposes a memory-efficient framework called "EXACT", an optimized GPU implementation that supports training GNNs with compressed activations, implemented as an extension for PyTorch Geometric and PyTorch.

### FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks

- Computer Science, ArXiv
- 2021

FedGraphNN is an open research federated learning system and benchmark to facilitate GNN-based FL research; built on a unified formulation of federated GNNs, it supports commonly used datasets, GNN models, FL algorithms, and flexible APIs.

## References

Showing 1–10 of 25 references

### Watch Your Step: Learning Node Embeddings via Graph Attention

- Computer Science, NeurIPS
- 2018

This paper proposes a novel attention model on the power series of the transition matrix, which guides the random walk to optimize an upstream objective and improves state-of-the-art results on a comprehensive suite of real-world graph datasets including social, collaboration, and biological networks.
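The attention model described above weights successive powers of the random-walk transition matrix. A minimal sketch of that combination, with fixed, assumed attention weights standing in for learned ones, could look like:

```python
import numpy as np

def attention_walk_context(T, q):
    """Combine powers of the transition matrix T with weights q:
    context = sum_k q[k] * T^(k+1). In Watch Your Step the q would be
    learned; here they are fixed for illustration."""
    context = np.zeros_like(T)
    Tk = np.eye(T.shape[0])
    for qk in q:
        Tk = Tk @ T          # next power of the transition matrix
        context += qk * Tk
    return context

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # toy triangle graph
T = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
q = np.array([0.5, 0.3, 0.2])          # assumed attention weights (sum to 1)
C = attention_walk_context(T, q)
```

Since each power of a row-stochastic matrix is itself row-stochastic, the rows of the weighted combination still sum to one when the weights do, which makes the result interpretable as an expected visit distribution.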

### Inductive Representation Learning on Large Graphs

- Computer Science, NIPS
- 2017

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
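GraphSAGE's core loop samples a fixed number of neighbors per node and aggregates their features; the layer below is a simplified sketch using its mean aggregator, with illustrative weight matrices and a toy graph (not the paper's actual configuration):

```python
import numpy as np

def sage_mean_layer(adj, H, W_self, W_neigh, sample=2,
                    rng=np.random.default_rng(0)):
    """One simplified GraphSAGE-style layer: sample up to `sample`
    neighbors per node, mean-aggregate their features, and combine
    with the node's own features through separate linear maps."""
    out = []
    for v in range(H.shape[0]):
        nbrs = adj[v]
        picked = rng.choice(nbrs, size=min(sample, len(nbrs)), replace=False)
        agg = H[picked].mean(axis=0)          # mean aggregator
        out.append(H[v] @ W_self + agg @ W_neigh)
    return np.maximum(np.array(out), 0.0)     # ReLU nonlinearity

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
H = np.eye(4)                                 # one-hot input features
W_self = np.ones((4, 3))
W_neigh = np.ones((4, 3))
Z = sage_mean_layer(adj, H, W_self, W_neigh)
```

Sampling a fixed neighbor budget per node is what makes the layer's cost independent of the full graph size, which is the key to GraphSAGE's inductive, large-graph setting.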

### Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks

- Computer Science, KDD
- 2019

Cluster-GCN is proposed, a novel GCN algorithm suitable for SGD-based training that exploits the graph clustering structure and allows much deeper GCNs to be trained without much time and memory overhead, leading to improved prediction accuracy.

### Deep Graph Infomax

- Computer Science, ICLR
- 2019

Deep Graph Infomax (DGI) is presented, a general approach for learning node representations within graph-structured data in an unsupervised manner that is readily applicable to both transductive and inductive learning setups.

### Graph Attention Networks

- Computer Science, ICLR
- 2018

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior…

### FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling

- Computer Science, ICLR
- 2018

Enhanced with importance sampling, FastGCN is not only efficient for training but also generalizes well for inference; it is orders of magnitude faster while predictions remain comparably accurate.

### node2vec: Scalable Feature Learning for Networks

- Computer Science, KDD
- 2016

In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
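The biased walk in node2vec interpolates between breadth-first and depth-first exploration via a return parameter p and an in-out parameter q. A minimal sketch of the second-order step rule (toy graph and parameter values chosen for illustration) is:

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=random.Random(0)):
    """Biased second-order walk: weight 1/p for returning to the
    previous node, 1 for a neighbor shared with it, and 1/q for
    moving further away (after node2vec's p/q scheme)."""
    walk = [start, rng.choice(sorted(adj[start]))]
    while len(walk) < length:
        prev, cur = walk[-2], walk[-1]
        nbrs = sorted(adj[cur])
        weights = [1 / p if n == prev
                   else 1.0 if n in adj[prev]
                   else 1 / q
                   for n in nbrs]
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk

# toy undirected graph stored as neighbor sets
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
walk_out = node2vec_walk(adj, start=0, length=5, p=4.0, q=0.5)
```

With a large p and small q as above, the walk is discouraged from backtracking and encouraged to move outward, approximating depth-first exploration; the opposite settings approximate breadth-first search.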

### Fast Graph Representation Learning with PyTorch Geometric

- Computer Science, ArXiv
- 2019

PyTorch Geometric is introduced, a library for deep learning on irregularly structured input data such as graphs, point clouds and manifolds, built upon PyTorch, and a comprehensive comparative study of the implemented methods in homogeneous evaluation scenarios is performed.

### Simple and Deep Graph Convolutional Networks

- Computer Science, ICML
- 2020

GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, *initial residual* and *identity mapping*, that effectively relieve the problem of over-smoothing.
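A rough sketch of how the two techniques combine in one layer, assuming a precomputed normalized adjacency matrix P and fixed mixing hyperparameters (the actual GCNII schedules these per layer):

```python
import numpy as np

def gcnii_layer(P, H, H0, W, alpha=0.1, beta=0.5):
    """One simplified GCNII-style layer: the initial residual mixes
    the input features H0 back in, and the identity mapping shrinks
    the weight matrix toward the identity."""
    support = (1 - alpha) * (P @ H) + alpha * H0            # initial residual
    mixed_W = (1 - beta) * np.eye(W.shape[0]) + beta * W     # identity mapping
    return np.maximum(support @ mixed_W, 0.0)                # ReLU

rng = np.random.default_rng(0)
P = np.full((3, 3), 1 / 3)          # toy normalized adjacency (assumed given)
H0 = rng.standard_normal((3, 4))    # input features
W = rng.standard_normal((4, 4))
H1 = gcnii_layer(P, H0, H0, W)      # first layer: H = H0
```

Keeping part of H0 in every layer and part of the identity in every weight matrix prevents repeated propagation from washing node features out, which is the over-smoothing the summary refers to.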

### Open Graph Benchmark: Datasets for Machine Learning on Graphs

- Computer Science, NeurIPS
- 2020

The OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains, ranging from social and information networks to biological networks, molecular graphs, source code ASTs, and knowledge graphs, indicating fruitful opportunities for future research.