• Corpus ID: 231847056

# Graph Traversal with Tensor Functionals: A Meta-Algorithm for Scalable Learning

@article{Markowitz2021GraphTW,
title={Graph Traversal with Tensor Functionals: A Meta-Algorithm for Scalable Learning},
author={Elan Markowitz and Keshav Balasubramanian and Mehrnoosh Mirtaheri and Sami Abu-El-Haija and Bryan Perozzi and Greg Ver Steeg and A. G. Galstyan},
journal={ArXiv},
year={2021},
volume={abs/2102.04350}
}
• Published 8 February 2021
• Computer Science
• ArXiv
Graph Representation Learning (GRL) methods have impacted fields from chemistry to social science. However, their algorithmic implementations are specialized to specific use-cases; e.g., message-passing methods are run differently from node-embedding ones. Despite their apparent differences, all these methods utilize the graph structure, and therefore their learning can be approximated with stochastic graph traversals. We propose Graph Traversal via Tensor Functionals (GTTF), a unifying meta…
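The core idea of the abstract, approximating learning with stochastic graph traversals driven by a user-supplied accumulation functional, can be illustrated with a toy sketch. The graph, function names, and recursion scheme below are illustrative assumptions, not GTTF's actual API:

```python
import random

# Toy adjacency list; structure is illustrative only.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

def traverse(node, depth, fanout, accumulate):
    """Stochastically traverse the graph: at each node, sample up to
    `fanout` neighbors and apply a user-supplied accumulation functional."""
    if depth == 0:
        return
    neighbors = graph[node]
    sampled = random.sample(neighbors, min(fanout, len(neighbors)))
    for nbr in sampled:
        accumulate(node, nbr)  # e.g. aggregate a message or record a walk step
        traverse(nbr, depth - 1, fanout, accumulate)

walk_edges = []
traverse(0, depth=2, fanout=2, accumulate=lambda u, v: walk_edges.append((u, v)))
print(walk_edges)
```

Different choices of the `accumulate` functional would recover different GRL behaviors (message passing, random-walk sampling) on top of the same traversal skeleton, which is the unification the paper pursues.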

## Figures and Tables from this paper

(figures and tables were not extracted from the page)

## Citations

• NeurIPS 2021 (Computer Science): This paper designs a framework that computes the SVD of implicitly defined matrices, applies it to several GRL tasks, and derives a linear approximation of a SOTA model that shows competitive empirical test performance on graphs such as article-citation and biological-interaction networks.
• ICML 2021 (Computer Science): GNNAutoScale (GAS), a framework for scaling arbitrary message-passing GNNs to large graphs, is presented; it is both fast and memory-efficient, learns expressive node representations, closely matches the performance of its non-scaling counterparts, and reaches state-of-the-art performance on large-scale graphs.
• ArXiv 2021 (Computer Science): This work proposes efficient GRL methods that optimize convexified objectives with known closed-form solutions, achieving competitive or state-of-the-art performance on popular GRL tasks while providing orders-of-magnitude speedups.
• ArXiv 2021 (Computer Science): A graph neural network framework is proposed with a novel scalable Directed Mixed Path Aggregation (DIMPA) scheme to obtain node embeddings for directed networks in a self-supervised manner, including a novel probabilistic imbalance loss.
• SDM 2022 (Computer Science): SSSNET, a GNN framework for semi-supervised signed network clustering, is introduced with a novel probabilistic balanced normalized cut loss for training; it has node clustering as its main focus, with an emphasis on polarization effects arising in networks.
• 2021 (Computer Science): DIGRAC optimizes directed flow imbalance for clustering without requiring the label supervision that existing GNN methods need, and can naturally incorporate node features, unlike existing spectral methods.
• ArXiv 2022 (Computer Science): The TF-GNN data model, its Keras modeling API, and relevant capabilities such as graph sampling, distributed training, and accelerator support are described.
• NAACL-HLT 2022 (Computer Science): StATIK uses language models to extract semantic information from text descriptions, while using message passing neural networks to capture structural information, and achieves state-of-the-art results on three challenging inductive benchmarks.
• 2022 (Computer Science): This work proposes a memory-efficient framework called "EXACT", an optimized GPU implementation that supports training GNNs with compressed activations, and implements EXACT as an extension for PyTorch Geometric and PyTorch.
• ArXiv 2021 (Computer Science): FedGraphNN is an open research federated learning system and benchmark to facilitate GNN-based FL research; it is built on a unified formulation of federated GNNs and supports commonly used datasets, GNN models, FL algorithms, and flexible APIs.

## References

Showing 1-10 of 25 references

• NeurIPS 2018 (Computer Science): This paper proposes a novel attention model on the power series of the transition matrix, which guides the random walk to optimize an upstream objective and improves state-of-the-art results on a comprehensive suite of real-world graph datasets including social, collaboration, and biological networks.
• NIPS 2017 (Computer Science): GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
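GraphSAGE's sample-and-aggregate step can be sketched minimally as follows. Toy features and a plain mean aggregator are assumed; the real model additionally learns weight matrices and nonlinearities, so this is an illustration of the sampling scheme, not the authors' implementation:

```python
import random

# Toy graph and node features; values are illustrative only.
graph = {0: [1, 2], 1: [0], 2: [0, 1]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}

def sage_layer(node, fanout=2):
    """Concatenate a node's features with the mean of sampled neighbor features,
    mirroring GraphSAGE's sample-then-aggregate update."""
    nbrs = random.sample(graph[node], min(fanout, len(graph[node])))
    mean = [sum(feats[n][d] for n in nbrs) / len(nbrs)
            for d in range(len(feats[node]))]
    return feats[node] + mean  # list concatenation = feature concatenation

print(sage_layer(0))
```

Because only a fixed number of neighbors is sampled per node, the cost of one update is bounded regardless of graph size, which is what makes the framework inductive and scalable.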
• KDD 2019 (Computer Science): Cluster-GCN is proposed, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure and allows much deeper GCNs to be trained without much time and memory overhead, which leads to improved prediction accuracy.
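The clustering-based minibatch idea can be sketched as below. The contiguous split stands in for the METIS graph clustering that Cluster-GCN actually uses, so this is an illustration of the batching scheme only:

```python
# Toy chain graph; adjacency as a dict of neighbor lists.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def partition(nodes, k):
    """Split node ids into k contiguous chunks (stand-in for METIS clustering)."""
    nodes = sorted(nodes)
    size = (len(nodes) + k - 1) // k
    return [nodes[i:i + size] for i in range(0, len(nodes), size)]

def induced_subgraph(cluster):
    """Keep only edges whose endpoints both fall inside the cluster."""
    keep = set(cluster)
    return {u: [v for v in graph[u] if v in keep] for u in cluster}

# Each cluster's induced subgraph would back one SGD step in Cluster-GCN.
for batch in partition(graph, 2):
    print(batch, induced_subgraph(batch))
```

Restricting each step to a cluster's induced subgraph keeps the memory footprint proportional to the cluster size rather than the full graph, at the cost of dropping between-cluster edges.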
• ICLR 2019 (Computer Science): Deep Graph Infomax (DGI) is presented, a general approach for learning node representations within graph-structured data in an unsupervised manner that is readily applicable to both transductive and inductive learning setups.
• ICLR 2018 (Computer Science): We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
• Jie Chen et al., ICLR 2018 (Computer Science): Enhanced with importance sampling, FastGCN not only is efficient for training but also generalizes well for inference, and is orders of magnitude more efficient while predictions remain comparably accurate.
• KDD 2016 (Computer Science): In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
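The biased second-order walk can be sketched as follows. The parameter names p and q follow the paper's return and in-out parameters, but the graph is a toy and the code is an illustration, not the reference implementation:

```python
import random

# Toy graph; adjacency as a dict of neighbor lists.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

def biased_walk(start, length, p=1.0, q=1.0):
    """node2vec-style walk: the next step is biased by where the walk came from."""
    walk = [start]
    prev = None
    for _ in range(length - 1):
        cur = walk[-1]
        nbrs = graph[cur]
        if prev is None:
            nxt = random.choice(nbrs)
        else:
            # Unnormalized second-order weights:
            # 1/p to return to prev, 1 if nbr is adjacent to prev, 1/q otherwise.
            weights = [1 / p if n == prev else
                       1.0 if n in graph[prev] else
                       1 / q for n in nbrs]
            nxt = random.choices(nbrs, weights=weights, k=1)[0]
        prev, walk = cur, walk + [nxt]
    return walk

print(biased_walk(0, length=5, p=2.0, q=0.5))
```

Large p discourages immediately revisiting the previous node (BFS-like exploration), while small q encourages moving outward (DFS-like exploration); the resulting walks feed a skip-gram objective in the full method.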
• ArXiv 2019 (Computer Science): PyTorch Geometric is introduced, a library for deep learning on irregularly structured input data such as graphs, point clouds, and manifolds, built upon PyTorch, and a comprehensive comparative study of the implemented methods in homogeneous evaluation scenarios is performed.
• ICML 2020 (Computer Science): GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, initial residual and identity mapping, that effectively relieve the problem of over-smoothing.
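The two techniques combine in a single layer update of the form H' = relu(((1-a) P H + a H0)((1-b) I + b W)), where a mixes in the initial features H0 (initial residual) and b mixes identity into the weight matrix W (identity mapping). Below is a pure-Python sketch with toy matrices and fixed a, b; it illustrates the update shape, not the authors' code:

```python
def matmul(A, B):
    """Plain nested-list matrix multiply for the toy example."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def gcnii_layer(P, H, H0, W, alpha=0.1, beta=0.5):
    n, d = len(H), len(H[0])
    PH = matmul(P, H)
    # Initial residual: blend propagated features with the original H0.
    support = [[(1 - alpha) * PH[i][j] + alpha * H0[i][j]
                for j in range(d)] for i in range(n)]
    # Identity mapping: blend the identity matrix into the layer weights.
    mix = [[(1 - beta) * (1.0 if i == j else 0.0) + beta * W[i][j]
            for j in range(d)] for i in range(d)]
    out = matmul(support, mix)
    return [[max(x, 0.0) for x in row] for row in out]  # ReLU

# Toy 2-node graph: normalized adjacency P, identity features and weights.
P = [[0.5, 0.5], [0.5, 0.5]]
H0 = [[1.0, 0.0], [0.0, 1.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
print(gcnii_layer(P, H0, H0, W))
```

Because every layer keeps a fraction of H0 and of the identity map, repeated propagation can no longer wash node features out to a common value, which is how the construction counteracts over-smoothing in deep stacks.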
• NeurIPS 2020 (Computer Science): The OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains, from social and information networks to biological networks, molecular graphs, source-code ASTs, and knowledge graphs, indicating fruitful opportunities for future research.