Corpus ID: 216553597

SplitFed: When Federated Learning Meets Split Learning

@article{Thapa2020SplitFedWF,
  title={SplitFed: When Federated Learning Meets Split Learning},
  author={Chandra Thapa and Pathum Chamikara Mahawaga Arachchige and Seyit Ahmet Çamtepe},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.12088}
}
Federated learning (FL) and split learning (SL) are two recent distributed machine learning (ML) approaches that have gained attention due to their inherent privacy-preserving capabilities. Both approaches follow a model-to-data scenario, in that an ML model is sent to clients for network training and testing. However, FL and SL show contrasting strengths and weaknesses. For example, while FL performs faster than SL due to its parallel client-side model generation strategy, SL provides better… 
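The core idea the abstract describes can be sketched in a few lines: each client holds only the client-side portion of the network, the server holds the rest, and the client-side weights are averaged FedAvg-style after local updates. The split point, layer sizes, and equal-weight averaging below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (not the paper's exact setup): split a small network into a
# client-side part and a server-side part, train both over the exchanged
# activations, then federated-average the client-side weights across clients.
import copy
import torch
import torch.nn as nn

client_nets = [nn.Sequential(nn.Linear(784, 128), nn.ReLU()) for _ in range(3)]
server_net = nn.Sequential(nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

def local_round(client_net, x, y):
    """One client-side + server-side update on a single batch."""
    c_opt = torch.optim.SGD(client_net.parameters(), lr=0.1)
    s_opt = torch.optim.SGD(server_net.parameters(), lr=0.1)
    smashed = client_net(x)                 # "smashed data" sent to the server
    loss = loss_fn(server_net(smashed), y)
    c_opt.zero_grad(); s_opt.zero_grad()
    loss.backward()                         # gradient flows back into the client part
    c_opt.step(); s_opt.step()

def fedavg(nets):
    """Average client-side weights (plain FedAvg, equal client weighting assumed)."""
    avg = copy.deepcopy(nets[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([n.state_dict()[key] for n in nets]).mean(dim=0)
    for n in nets:
        n.load_state_dict(avg)

# One illustrative global round over random data standing in for client datasets.
for net in client_nets:
    local_round(net, torch.randn(32, 784), torch.randint(0, 10, (32,)))
fedavg(client_nets)
```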
Advancements of federated learning towards privacy preservation: from federated learning to split learning
TLDR
This chapter is designed to provide extensive coverage in SL and its variants, including fundamentals, existing findings, integration with privacy measures such as differential privacy, open problems, and code implementation.
End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things
TLDR
This work is the first attempt to provide empirical comparisons of FL and SplitNN in real-world IoT settings in terms of learning performance and device implementation overhead, and demonstrates that neither FL nor SplitNN can be applied to a heavy model, e.g., one with several million parameters, on resource-constrained IoT devices because the training cost would be too expensive for such devices.
Vulnerability Due to Training Order in Split Learning
TLDR
It is demonstrated that the model trained on the data of all clients performs poorly on the data of the client considered earliest in a training round, and that the SplitFedv3 algorithm mitigates this problem while still retaining the privacy benefits of split learning.
Decentralised Learning in Federated Deployment Environments
TLDR
This survey provides a detailed and up-to-date overview of the most recent contributions available in the state-of-the-art decentralised learning literature, including solutions for privacy, communication efficiency, non-IIDness, device heterogeneity, and poisoning defense.
Comparison of Privacy-Preserving Distributed Deep Learning Methods in Healthcare
TLDR
This paper uses federated learning, split learning, and SplitFed to develop binary classification models for detecting tuberculosis from chest X-rays and compares them in terms of classification performance, communication and computational costs, and training time.
PyVertical: A Vertical Federated Learning Framework for Multi-headed SplitNN
TLDR
PyVertical, a framework supporting vertical federated learning using split neural networks, is introduced and the training of a simple dual-headed split neural network for a MNIST classification task is presented.
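As a rough illustration of the dual-headed split network mentioned above (this is not PyVertical's actual API), two parties each encode their vertical slice of the features and a shared server head classifies the concatenated embeddings; shapes, optimizer, and data are placeholders.

```python
# Toy dual-headed split network (illustrative only, not the PyVertical API):
# two parties hold different feature columns of the same samples; each party's
# head encodes its slice, and the server classifies the concatenated embeddings.
import torch
import torch.nn as nn

head_a = nn.Sequential(nn.Linear(392, 64), nn.ReLU())   # party A: first half of pixels
head_b = nn.Sequential(nn.Linear(392, 64), nn.ReLU())   # party B: second half of pixels
server = nn.Sequential(nn.Linear(128, 10))

params = list(head_a.parameters()) + list(head_b.parameters()) + list(server.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)                  # stand-in for a batch of MNIST images
y = torch.randint(0, 10, (32,))
emb = torch.cat([head_a(x[:, :392]), head_b(x[:, 392:])], dim=1)
loss = loss_fn(server(emb), y)
opt.zero_grad(); loss.backward(); opt.step()
```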
Federated Learning-based Active Authentication on Mobile Devices
TLDR
This work proposes a novel method that is able to tackle heterogeneous/non-IID data distributions in federated active authentication (FAA) and shows that such an approach performs better than state-of-the-art one-class-based FAA methods and also outperforms traditional FL/SL methods.
NoPeek: Information leakage reduction to share activations in distributed deep learning
TLDR
This work demonstrates how minimizing distance correlation between raw data and intermediary representations reduces leakage of sensitive raw data patterns across client communications while maintaining model accuracy in distributed deep learning services.
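One common way to realize the idea summarized above is to add a distance-correlation term to the task loss so the activations sent off-device carry less information about the raw inputs; the sketch below computes the sample (squared) distance correlation on a batch, with the network and the weighting factor chosen arbitrarily rather than taken from the paper.

```python
# Sketch of a distance-correlation penalty in the spirit of NoPeek (weighting
# factor and network are illustrative). The penalty discourages the activations
# sent off-device from being statistically dependent on the raw inputs.
import torch
import torch.nn as nn

def _centered_dist(x):
    d = torch.cdist(x, x)                          # pairwise Euclidean distances
    return d - d.mean(0, keepdim=True) - d.mean(1, keepdim=True) + d.mean()

def distance_correlation_sq(x, y, eps=1e-9):
    """Squared sample distance correlation between two batches."""
    a, b = _centered_dist(x), _centered_dist(y)
    dcov2 = (a * b).mean()
    dvar_x, dvar_y = (a * a).mean(), (b * b).mean()
    return dcov2 / (torch.sqrt(dvar_x * dvar_y) + eps)

client = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
server = nn.Sequential(nn.Linear(128, 10))
alpha = 0.5                                        # leakage-vs-accuracy trade-off (arbitrary)

x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
act = client(x)
loss = nn.functional.cross_entropy(server(act), y) + alpha * distance_correlation_sq(x, act)
loss.backward()
```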
AdaSplit: Adaptive Trade-offs for Resource-constrained Distributed Deep Learning
TLDR
AdaSplit is introduced, which enables SL to scale efficiently to low-resource scenarios by reducing bandwidth consumption and improving performance across heterogeneous clients; C3-Score, a metric to evaluate performance under resource budgets, is also introduced.
Accelerating Federated Learning with Split Learning on Locally Generated Losses
Federated learning (FL) operates based on model exchanges between the server and the clients, and suffers from significant communication as well as client-side computation burden. While emerging…

References

Showing 1–10 of 46 references
End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things
TLDR
This work is the first attempt to provide empirical comparisons of FL and SplitNN in real-world IoT settings in terms of learning performance and device implementation overhead, and demonstrates that neither FL nor SplitNN can be applied to a heavy model, e.g., one with several million parameters, on resource-constrained IoT devices because the training cost would be too expensive for such devices.
Detailed comparison of communication efficiency of split learning and federated learning
TLDR
This work considers various practical distributed learning setups, juxtaposes the two methods under various real-life scenarios, and identifies useful settings under which each method outperforms the other in terms of communication efficiency.
Federated Optimization: Distributed Optimization Beyond the Datacenter
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large…
Split learning for health: Distributed deep learning without sharing raw patient data
TLDR
This paper compares the performance and resource-efficiency trade-offs of splitNN with other distributed deep learning methods such as federated learning and large-batch synchronous stochastic gradient descent, and shows highly encouraging results for splitNN.
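The splitNN handoff that allows training without sharing raw patient data can be sketched as follows; the cut point, shapes, and optimizers are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the split-learning handoff (illustrative shapes/optimizers): the
# client sends only cut-layer activations, the server returns only the gradient
# at the cut layer, so raw patient data never leaves the client.
import torch
import torch.nn as nn

client = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
server = nn.Sequential(nn.Linear(128, 10))
c_opt = torch.optim.SGD(client.parameters(), lr=0.1)
s_opt = torch.optim.SGD(server.parameters(), lr=0.1)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))

# Client-side forward up to the cut layer; detach simulates the network boundary.
act = client(x)
sent = act.detach().requires_grad_()

# Server-side forward/backward; only the cut-layer gradient goes back.
s_opt.zero_grad()
loss = nn.functional.cross_entropy(server(sent), y)
loss.backward()
s_opt.step()

# Client finishes backpropagation with the gradient received from the server.
c_opt.zero_grad()
act.backward(sent.grad)
c_opt.step()
```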
Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?
TLDR
It is still unclear whether split learning can be applied to other deep learning models, in particular 1D CNNs, but it is believed to be a promising approach for protecting the client's raw data.
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
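The iterative model averaging at the heart of this method can be sketched as a dataset-size-weighted average of client models; the model shape, client count, and example counts below are placeholders.

```python
# Minimal sketch of iterative model averaging (FedAvg-style): the server combines
# client models weighted by how many local examples each client trained on.
import copy
import torch
import torch.nn as nn

def weighted_average(models, num_examples):
    """Return a state dict averaging the models, weighted by local dataset size."""
    total = float(sum(num_examples))
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = sum(
            m.state_dict()[key] * (n / total) for m, n in zip(models, num_examples)
        )
    return avg

clients = [nn.Linear(10, 2) for _ in range(3)]      # stand-ins for locally trained models
global_model = nn.Linear(10, 2)
global_model.load_state_dict(weighted_average(clients, num_examples=[100, 250, 50]))
```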
Towards Federated Learning at Scale: System Design
TLDR
A scalable production system for federated learning on mobile devices, based on TensorFlow, is built; the resulting high-level design is described and some of the challenges and their solutions are sketched.
Distributed learning of deep neural network over multiple agents
TLDR
This work proposes a new technique to train deep neural networks over several data sources in a distributed fashion, which paves the way for distributed training of deep neural networks in data-sensitive applications where raw data may not be shared directly.
DIANNE: a modular framework for designing, training and deploying deep neural networks on heterogeneous distributed infrastructure
TLDR
The DIANNE framework is proposed as an all-in-one solution for deep learning, enabling data and model parallelism through a modular design, offloading to local compute power, and the ability to abstract between simulation and the real environment.
Local Differential Privacy for Deep Learning
TLDR
A new local differentially private (LDP) algorithm named LATENT is proposed that redesigns the training process and enables a data owner to add a randomization layer before data leave the data owners’ devices and reach a potentially untrusted machine learning service.
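As a generic stand-in for the randomization layer described above (not LATENT's actual mechanism), the sketch below perturbs on-device representations with Laplace noise before they leave the device; the noise scale is arbitrary and not calibrated to any privacy budget.

```python
# Generic stand-in for a local randomization layer (NOT the LATENT mechanism):
# representations are perturbed with Laplace noise on-device before being sent
# to a potentially untrusted training service. The noise scale here is arbitrary.
import torch
import torch.nn as nn

class LaplaceNoise(nn.Module):
    def __init__(self, scale=0.5):
        super().__init__()
        self.scale = scale

    def forward(self, x):
        noise = torch.distributions.Laplace(0.0, self.scale).sample(x.shape)
        return x + noise

on_device = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), LaplaceNoise(scale=0.5))
with torch.no_grad():
    x = torch.randn(8, 784)
    released = on_device(x)    # only the randomized representation leaves the device
```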