Corpus ID: 235359140

SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks

@article{He2021SpreadGNNSM,
  title={SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks},
  author={Chaoyang He and Emir Ceyani and Keshav Balasubramanian and Murali Annavaram and Salman Avestimehr},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.02743}
}
Graph Neural Networks (GNNs) are the first-choice methods for graph machine learning problems thanks to their ability to learn state-of-the-art representations from graph-structured data. However, centralizing a massive amount of real-world graph data for GNN training is prohibitive due to user-side privacy concerns, regulation restrictions, and commercial competition. Federated Learning is the de-facto standard for collaborative training of machine learning models over many distributed… 
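
To make the serverless setting concrete: instead of uploading updates to a central server, each client mixes parameters directly with its topology neighbors. The sketch below shows plain decentralized periodic averaging under an assumed doubly stochastic mixing matrix W; the function names and toy local step are illustrative assumptions, and the paper's own optimizer (Decentralized Periodic Averaging SGD) additionally handles the multi-task objective.

import numpy as np

def decentralized_periodic_averaging(params, W, local_step, rounds, avg_period):
    # params: (n_clients, dim) per-client model parameters.
    # W: (n_clients, n_clients) doubly stochastic mixing matrix encoding
    #    the serverless communication topology.
    # local_step: callable (client_id, theta) -> theta after local training.
    for t in range(rounds):
        # Local training on each client's private graph data.
        params = np.stack([local_step(i, params[i]) for i in range(len(params))])
        # Periodic neighbor averaging; no central server involved.
        if (t + 1) % avg_period == 0:
            params = W @ params
    return params

# Toy usage: 4 clients on a ring topology with a dummy local update.
n, dim = 4, 3
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
theta = np.random.default_rng(0).normal(size=(n, dim))
theta = decentralized_periodic_averaging(theta, W, lambda i, th: 0.9 * th, rounds=6, avg_period=2)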

Citations

FedGCN: Convergence and Communication Tradeoffs in Federated Training of Graph Convolutional Networks
TLDR
Federated Graph Convolutional Network (FedGCN) is introduced, which uses federated learning to train GCN models for semi-supervised node classification on large graphs with optimized convergence rate and communication cost.
Federated Graph Neural Networks: Overview, Techniques and Challenges
TLDR
A unique 3-tiered taxonomy of the FedGNN literature is proposed to provide a clear view of how GNNs work in the context of Federated Learning (FL) and to put existing works into perspective.
Federated Graph Machine Learning: A Survey of Concepts, Techniques, and Applications
TLDR
This survey conducts a comprehensive review of the literature in Federated Graph Machine Learning and provides a new taxonomy to divide the existing problems in FGML into two settings, namely, FL with structured data and structured FL.
Federated Graph Contrastive Learning
TLDR
This paper investigates how to implement differential privacy on graph edges, observes the resulting performance drops in experiments, and proposes to leverage graph contrastive learning to alleviate the degradation caused by differential privacy.
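
One standard way to realize differential privacy on graph edges is randomized response on adjacency bits; the minimal sketch below is an illustrative mechanism under that assumption, not necessarily the paper's exact construction, and it shows why accuracy drops as epsilon shrinks (more bits get flipped).

import numpy as np

def randomized_response_edges(adj, epsilon, seed=0):
    # Keep each adjacency bit with probability e^eps / (e^eps + 1),
    # flip it otherwise; each released edge bit is epsilon-differentially private.
    rng = np.random.default_rng(seed)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(adj.shape) > p_keep          # flip w.p. 1 - p_keep
    noisy = np.where(flip, 1 - adj, adj)
    upper = np.triu(noisy, 1)                      # symmetrize, drop self-loops
    return upper + upper.T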
Personalized Subgraph Federated Learning
TLDR
This work introduces a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNN models rather than learning a single global GNN model, and proposes a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it.
More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks
TLDR
Two types of backdoor attacks in Federated GNNs are conducted, centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA), and the robustness of DBA and CBA against two state-of-the-art defenses is explored.
From Distributed Machine Learning to Federated Learning: A Survey
TLDR
This paper proposes a functional architecture of federated learning systems and a taxonomy of related techniques, and presents four widely used federated systems based on this functional architecture.
Trustworthy Graph Neural Networks: Aspects, Methods and Trends
TLDR
A comprehensive roadmap for building trustworthy GNNs is proposed from the view of the various computing technologies involved, covering robustness, explainability, privacy, fairness, accountability, and environmental well-being.
LightSecAgg: Rethinking Secure Aggregation in Federated Learning
TLDR
It is shown that LightSecAgg achieves the same privacy and dropout-resiliency guarantees as the state-of-the-art protocols while significantly reducing the overhead for resiliency against dropped users and can be applied to secure aggregation in the asynchronous FL setting.
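
For intuition about why masked updates still aggregate correctly, here is a toy pairwise additive-masking scheme in the spirit of classic secure aggregation. This is an illustrative assumption, not the LightSecAgg protocol itself, which replaces pairwise masks with locally generated masks the server can reconstruct in one shot, precisely to cut the dropout-recovery overhead the summary mentions.

import numpy as np

def pairwise_masked_updates(updates, seed=0):
    # Each pair of clients (i, j) shares a random mask that i adds and
    # j subtracts: individual updates are hidden, but all masks cancel
    # in the sum, so the server still recovers the exact aggregate.
    rng = np.random.default_rng(seed)
    n, dim = updates.shape
    masked = updates.astype(float)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=dim)
            masked[i] = masked[i] + m
            masked[j] = masked[j] - m
    return masked

updates = np.arange(6.0).reshape(3, 2)
masked = pairwise_masked_updates(updates)
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))   # sums agree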
Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis
TLDR
A taxonomy of parallelism in GNNs is designed, considering data parallelism, model parallelism, and different forms of pipelining; the outcomes are synthesized into a set of insights that help maximize GNN performance, together with a comprehensive list of challenges and opportunities for further research into GNN computations.
...

References

SHOWING 1-10 OF 77 REFERENCES
FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks
TLDR
FedGraphNN is an open research federated learning system and benchmark that facilitates GNN-based FL research; it is built on a unified formulation of federated GNNs and supports commonly used datasets, GNN models, FL algorithms, and flexible APIs.
GraphFL: A Federated Learning Framework for Semi-Supervised Node Classification on Graphs
TLDR
This work proposes GraphFL, the first FL framework for semi-supervised node classification on graphs, along with two GraphFL methods that respectively address the non-IID issue in graph data and handle tasks with new label domains, and designs a self-training method to leverage unlabeled graph data.
Privacy-Preserving Graph Neural Network for Node Classification
TLDR
This paper proposes a Privacy-Preserving GNN (PPGNN) learning paradigm for the node classification task, which can be generalized to existing GNN models, and demonstrates that PPGNN significantly outperforms the GNN models trained on the isolated data and has comparable performance with the traditional GNN trained on the mixed plaintext data.
Cross-Node Federated Graph Neural Network for Spatio-Temporal Data Modeling
TLDR
A federated spatio-temporal model, the Cross-Node Federated Graph Neural Network (CNFGNN), is proposed, which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning: data in a network of nodes is generated locally on each node and remains decentralized.
Federated Multi-Task Learning
TLDR
This work shows that multi-task learning is naturally suited to handle the statistical challenges of this setting, and proposes a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues.
SGNN: A Graph Neural Network Based Federated Learning Approach by Hiding Structure
TLDR
A similarity-based graph neural network model, SGNN, is proposed, which captures the structure information of nodes precisely in node classification tasks and leverages federated learning to hide the original information from different data sources, protecting users' privacy.
Federated Learning: Strategies for Improving Communication Efficiency
TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
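
Two of the listed ingredients, random subsampling and quantization, are easy to demonstrate; the sketch below combines them (random rotations are omitted). Parameter names and defaults are illustrative assumptions, not the paper's exact pipeline.

import numpy as np

def sketch_update(update, keep_frac=0.1, levels=256, seed=0):
    # Client side: subsample a random fraction of coordinates, then
    # uniformly quantize the kept values to `levels` buckets. The
    # upload is (indices, uint8 codes, dequantization range).
    rng = np.random.default_rng(seed)
    k = max(1, int(keep_frac * update.size))
    idx = rng.choice(update.size, size=k, replace=False)
    vals = update[idx]
    lo, hi = float(vals.min()), float(vals.max())
    codes = np.round((vals - lo) / max(hi - lo, 1e-12) * (levels - 1)).astype(np.uint8)
    return idx, codes, (lo, hi)

def unsketch(idx, codes, bounds, dim, levels=256):
    # Server side: dequantize and scatter back into a dense vector
    # (the 1/keep_frac unbiasedness rescaling is omitted for clarity).
    lo, hi = bounds
    out = np.zeros(dim)
    out[idx] = lo + codes.astype(float) / (levels - 1) * (hi - lo)
    return out

update = np.random.default_rng(1).normal(size=1000)
idx, codes, bounds = sketch_update(update)
approx = unsketch(idx, codes, bounds, update.size)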
Adaptive Federated Optimization
TLDR
This work proposes federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyzes their convergence in the presence of heterogeneous data for general nonconvex settings to highlight the interplay between client heterogeneity and communication efficiency.
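
The mechanism is compact: the server treats the average client delta as a pseudo-gradient and feeds it to a standard adaptive optimizer. A minimal FedAdam-style sketch, with assumed (untuned) hyperparameters:

import numpy as np

class FedAdamServer:
    # Server-side Adam over averaged client deltas; the class name and
    # hyperparameter defaults are illustrative assumptions.
    def __init__(self, dim, lr=0.1, b1=0.9, b2=0.99, tau=1e-3):
        self.theta = np.zeros(dim)   # global model
        self.m = np.zeros(dim)       # first moment of the pseudo-gradient
        self.v = np.zeros(dim)       # second moment of the pseudo-gradient
        self.lr, self.b1, self.b2, self.tau = lr, b1, b2, tau

    def step(self, client_deltas):
        # Pseudo-gradient: the average of (client_model - global_model).
        d = np.mean(client_deltas, axis=0)
        self.m = self.b1 * self.m + (1 - self.b1) * d
        self.v = self.b2 * self.v + (1 - self.b2) * d ** 2
        # Adam-style update; tau controls the degree of adaptivity.
        self.theta = self.theta + self.lr * self.m / (np.sqrt(self.v) + self.tau)
        return self.theta

server = FedAdamServer(dim=4)
new_global = server.step(np.ones((3, 4)) * 0.5)   # three clients' deltas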
Advances and Open Problems in Federated Learning
TLDR
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Federated Optimization: Distributed Optimization Beyond the Datacenter
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of nodes…
...