Corpus ID: 220496607

Distributed Graph Convolutional Networks

@article{Scardapane2020DistributedGC,
  title={Distributed Graph Convolutional Networks},
  author={Simone Scardapane and Indro Spinelli and Paolo Di Lorenzo},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.06281}
}
The aim of this work is to develop a fully-distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to make inference in a distributed scenario where the underlying data graph is split among different agents. Then… 
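The building block of the centralized training problem described above is the graph-convolutional layer, which the paper then distributes across agents. The sketch below is an illustrative, standard GCN layer (Kipf–Welling-style symmetric normalization) written with NumPy; the names `A`, `X`, and `W` are assumptions for illustration and do not come from the authors' code.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolutional layer: H = ReLU(A_hat @ X @ W),
    where A_hat is the symmetrically normalized adjacency with self-loops."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    return np.maximum(A_hat @ X @ W, 0.0)       # ReLU activation

# Toy example: a 3-node path graph, 2 input features, 4 hidden units.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.randn(3, 2)
W = np.random.randn(2, 4)
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 4)
```

In the distributed scenario the paper considers, each agent holds only a subgraph, so the product `A_hat @ X` cannot be formed centrally; conceptually, it is assembled from local computations plus exchanges of boundary-node features over the sparse communication topology.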

Figures and Tables from this paper

Citations

Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks
TLDR
A communication-efficient distributed GNN training technique named Learn Locally, Correct Globally (LLCG) is presented, which significantly improves efficiency without hurting performance; the convergence of distributed methods with periodic model averaging for training GNNs is also rigorously analyzed.
FedGCN: Convergence and Communication Tradeoffs in Federated Training of Graph Convolutional Networks
TLDR
Federated Graph Convolutional Network (FedGCN) is introduced, which uses federated learning to train GCN models for semi-supervised node classification on large graphs with optimized convergence rate and communication cost.
On the Equivalence of Decoupled Graph Convolution Network and Label Propagation
TLDR
A new label propagation method named Propagation then Training Adaptively (PTA) is proposed, which overcomes the flaws of the decoupled GCN with a dynamic and adaptive weighting strategy.
Federated Graph Learning - A Position Paper
TLDR
Considering how graph data are distributed among clients, a categorization of four types of FGL is proposed: inter-graph FL, intra-graph FL, and graph-structured FL, where intra-graph FL is further divided into horizontal and vertical FGL.
FedGraph: Federated Graph Learning With Intelligent Sampling
TLDR
FedGraph is proposed, an intelligent graph sampling algorithm based on deep reinforcement learning, which can automatically converge to the optimal sampling policies that balance training speed and accuracy and significantly outperforms existing work by enabling faster convergence to higher accuracy.

References

SHOWING 1-10 OF 53 REFERENCES
A Framework for Parallel and Distributed Training of Neural Networks
How Powerful are Graph Neural Networks?
TLDR
This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.
Online Distributed Learning Over Graphs With Multitask Graph-Filter Models
TLDR
A preconditioned graph diffusion LMS algorithm for adaptive and distributed estimation of graph filters from streaming data is introduced, together with an unsupervised clustering method for splitting the global estimation problem into local ones.
A Comprehensive Survey on Graph Neural Networks
TLDR
This article provides a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields and proposes a new taxonomy to divide the state-of-the-art GNNs into four categories, namely, recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial–temporal GNNs.
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
The Graph Neural Network Model
TLDR
A new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods for processing data represented in graph domains, and implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space.
Adaptation, Learning, and Optimization over Networks
  • A. Sayed
  • Computer Science
    Found. Trends Mach. Learn.
  • 2014
TLDR
The limits of performance of distributed solutions are examined, and procedures that help bring forth their potential more fully are discussed; a useful statistical framework is adopted, and performance results that elucidate the mean-square stability, convergence, and steady-state behavior of the learning networks are derived.
AliGraph: A Comprehensive Graph Neural Network Platform
TLDR
This paper presents a comprehensive graph neural network system, namely AliGraph, which consists of distributed graph storage, optimized sampling operators and runtime to efficiently support not only existing popular GNNs but also a series of in-house developed ones for different scenarios.
Spectral Networks and Locally Connected Networks on Graphs
TLDR
This paper considers possible generalizations of CNNs to signals defined on more general domains without the action of a translation group, and proposes two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian.
...