Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning

@article{Qu2022RethinkingAD,
  title={Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning},
  author={Liangqiong Qu and Yuyin Zhou and Paul Pu Liang and Yingda Xia and Feifei Wang and Li Fei-Fei and Ehsan Adeli and Daniel L. Rubin},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={10051-10061}
}
  • Liangqiong Qu, Yuyin Zhou, D. Rubin
  • Published 10 June 2021
  • Computer Science
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Federated learning is an emerging research paradigm enabling collaborative training of machine learning models among different organizations while keeping data private at each institution. Despite recent progress, there remain fundamental challenges such as the lack of convergence and the potential for catastrophic forgetting across real-world heterogeneous devices. In this paper, we demonstrate that self-attention-based architectures (e.g., Transformers) are more robust to distribution shifts… 
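
The paper's evaluation pairs Transformer backbones with standard FedAvg-style aggregation. As a point of reference, here is a minimal sketch of size-weighted FedAvg averaging in PyTorch; the function and variable names are illustrative, not the authors' code.

```python
import copy
from typing import List

import torch.nn as nn

def fedavg(global_model: nn.Module, client_models: List[nn.Module],
           client_sizes: List[int]) -> nn.Module:
    """Average client weights, weighted by local dataset size."""
    total = sum(client_sizes)
    avg_state = copy.deepcopy(global_model.state_dict())
    client_states = [m.state_dict() for m in client_models]
    for key in avg_state:
        avg_state[key] = sum(
            (n / total) * s[key].float()
            for s, n in zip(client_states, client_sizes)
        )
    global_model.load_state_dict(avg_state)
    return global_model
```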

FedTune: A Deep Dive into Efficient Federated Fine-Tuning with Pre-trained Transformers

It is demonstrated that fine-tuned Transformers achieve strong performance in FL, and that the lightweight tuning method yields a fast convergence rate and low communication costs.
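
As a rough illustration of what "lightweight tuning" can mean, the sketch below freezes a pre-trained ViT backbone and trains (and communicates) only the classification head. This is one plausible reading, not necessarily FedTune's exact method, and the torchvision model choice is an assumption.

```python
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

def make_client_model(num_classes: int) -> nn.Module:
    """Frozen pre-trained ViT backbone + trainable classification head."""
    model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                       # backbone stays frozen
    model.heads = nn.Linear(model.hidden_dim, num_classes)  # trained + shared
    return model

def trainable_state(model: nn.Module) -> dict:
    """Only the head's few parameters travel to the server each round."""
    return {k: v for k, v in model.state_dict().items()
            if k.startswith("heads")}
```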

Applications of Federated Learning; Taxonomy, Challenges, and Research Trends

The areas of medical AI, IoT, edge systems, and the autonomous industry can adopt FL in many of their sub-domains; however, the challenges these domains can encounter include statistical heterogeneity, system heterogeneity, data imbalance, resource allocation, and privacy.

Federated Learning from Pre-Trained Models: A Contrastive Learning Approach

A Federated Prototype-wise Contrastive Learning (FedPCL) approach is proposed, which shares knowledge across clients through their class prototypes and builds client-specific representations in a prototype-wise contrastive manner; its ability to fuse various pre-trained models is measured on popular FL datasets.
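
A simplified sketch of the prototype-wise idea, assuming every class appears in the batch and omitting the paper's multi-backbone fusion: embeddings are pulled toward their own class prototype and pushed away from the others.

```python
import torch
import torch.nn.functional as F

def class_prototypes(feats, labels, num_classes):
    """Mean L2-normalized embedding per class, shape (num_classes, dim).
    Assumes every class is present in `feats`."""
    feats = F.normalize(feats, dim=1)
    protos = torch.stack([feats[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return F.normalize(protos, dim=1)

def proto_contrastive_loss(feats, labels, protos, temperature=0.07):
    """Cross-entropy over cosine similarities to the shared prototypes."""
    logits = F.normalize(feats, dim=1) @ protos.t() / temperature
    return F.cross_entropy(logits, labels)
```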

FedTP: Federated Learning by Transformer Personalization

FedTP, a novel Transformer-based federated learning framework that learns personalized self-attention for each client while aggregating the other parameters among clients, is proposed; a learn-to-personalize mechanism is developed to further encourage cooperation among clients and to increase the scalability and generalization of FedTP.
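
A minimal sketch of the parameter split this describes: attention weights stay on the client, everything else is aggregated. The name-matching rule is an assumption about the model's naming scheme, and the paper's learn-to-personalize hypernetwork is omitted.

```python
import torch.nn as nn

ATTN_MARKERS = ("self_attention", "attn")  # assumed naming; adjust per model

def split_state(model: nn.Module):
    """Partition a state dict into personal (attention) and shared parts."""
    personal, shared = {}, {}
    for name, tensor in model.state_dict().items():
        bucket = personal if any(m in name for m in ATTN_MARKERS) else shared
        bucket[name] = tensor
    return personal, shared
```

The server averages only `shared`; each client reloads its own `personal` dict after receiving the aggregated weights.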

On the Importance and Applicability of Pre-Training for Federated Learning

It is found that pre-training enables the learned global models under different clients’ data conditions to converge to the same loss basin, and makes global aggregation in FL more stable.

Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging

This paper presents a robust and label-efficient self-supervised FL framework for medical image analysis, introducing a novel distributed self-supervised pre-training paradigm into the existing FL pipeline; the resulting federated models generalize well to out-of-distribution data and learn more effectively in limited-label scenarios.

Federated Adversarial Training with Transformers

This paper investigates the feasibility of federated adversarial training with different federated model aggregation methods and different vision transformer models, using various tokenization and classification-head techniques, and proposes an extension to the FedAvg aggregation method, called FedWAvg, which improves the robust accuracy of models on not independent and identically distributed (non-IID) data while preserving privacy.
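
The paper's exact FedWAvg weighting rule is not reproduced here; as a sketch of the local side of federated adversarial training, the step below trains on FGSM-perturbed inputs. Epsilon and the single-step attack are illustrative choices.

```python
import torch
import torch.nn.functional as F

def fgsm_train_step(model, x, y, optimizer, eps=2 / 255):
    """One local step of adversarial training on FGSM-perturbed inputs.
    Assumes image inputs scaled to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```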

On Pre-Training for Federated Learning

It is found that pre-training does largely close the gap between FedAvg and centralized learning under non-IID data, but this does not come from alleviating the well-known model drifting problem in FedAvg's local training.

PromptFL: Let Federated Participants Cooperatively Learn Prompts Instead of Models -- Federated Learning in Age of Foundation Model

A brand-new FL framework, PromptFL, is proposed, which replaces federated model training with federated prompt training, i.e., it lets federated participants train prompts instead of a shared model, simultaneously achieving efficient global aggregation and local training on insufficient data by exploiting the power of foundation models (FM) in a distributed way.
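
A sketch of the prompt-training idea under an assumed interface (an encoder that consumes token embeddings): the foundation model is frozen, and only a small prompt tensor is trained locally and averaged by the server.

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Frozen foundation-model encoder with learnable prompt tokens."""
    def __init__(self, frozen_encoder: nn.Module, embed_dim: int,
                 prompt_len: int = 16):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False
        # The only trainable (and communicated) parameters:
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq, dim); prepend the shared prompt tokens
        b = token_embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(b, -1, -1)
        return self.encoder(torch.cat([prompts, token_embeds], dim=1))
```

Server-side aggregation then runs FedAvg over `model.prompt` alone, a few kilobytes per round instead of a full model.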

References

SHOWING 1-10 OF 75 REFERENCES

Federated Learning with Personalization Layers

FedPer, a base + personalization layers approach for the federated training of deep feedforward neural networks, is proposed, which can combat the ill effects of statistical heterogeneity.
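
A minimal sketch of the base + personalization split, with an illustrative toy architecture: only the base weights are sent for server-side averaging.

```python
import torch.nn as nn

class FedPerNet(nn.Module):
    """Toy base + personalization split: `base` is averaged by the server,
    `personal` never leaves the client."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.base = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.personal = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.personal(self.base(x))

def shared_state(model: FedPerNet) -> dict:
    """Only base-layer weights are communicated for averaging."""
    return model.base.state_dict()
```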

LEAF: A Benchmark for Federated Settings

LEAF is proposed, a modular benchmarking framework for learning in federated settings that includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.

FedScale: Benchmarking Model and System Performance of Federated Learning

FedScale is a federated learning benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research and highlight potential opportunities for heterogeneity-aware co-optimizations in FL.

Federated Learning with Matched Averaging

This work proposes the Federated Matched Averaging (FedMA) algorithm, designed for federated learning of modern neural network architectures, e.g., convolutional neural networks (CNNs) and LSTMs, and indicates that FedMA outperforms popular state-of-the-art federated learning algorithms on deep CNN and LSTM architectures trained on real-world datasets while improving communication efficiency.
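
A greatly simplified flavor of matched averaging for one layer of two clients, using Hungarian assignment on cosine similarity; FedMA's actual Bayesian-nonparametric matching is more involved, so treat this only as a sketch of the core idea.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def match_and_average(w_a: torch.Tensor, w_b: torch.Tensor) -> torch.Tensor:
    """w_a, w_b: (out_features, in_features) weights of the same hidden
    layer from two clients. Permute B's neurons to best match A's, then
    average."""
    cost = -(F.normalize(w_a, dim=1) @ F.normalize(w_b, dim=1).t())
    _, col = linear_sum_assignment(cost.numpy())  # match B's rows to A's
    perm = torch.as_tensor(col)
    return 0.5 * (w_a + w_b[perm])
```

Note that after permuting a hidden layer's output neurons, the next layer's input dimensions must be permuted consistently; FedMA handles this bookkeeping layer by layer.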

Federated Learning with Non-IID Data

This work presents a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices, and shows that accuracy can be increased by 30% for the CIFAR-10 dataset with only 5% globally shared data.
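
The mechanics are simple enough to sketch directly: each client trains on its local data concatenated with the small globally shared subset. The dataset objects are placeholders.

```python
from torch.utils.data import ConcatDataset, DataLoader

def make_client_loader(local_dataset, shared_subset, batch_size=32):
    """Train on local non-IID data mixed with the small globally shared
    IID subset distributed to every client (~5% in the paper's CIFAR-10
    experiment)."""
    mixed = ConcatDataset([local_dataset, shared_subset])
    return DataLoader(mixed, batch_size=batch_size, shuffle=True)
```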

Federated Optimization in Heterogeneous Networks

This work introduces a framework, FedProx, to tackle heterogeneity in federated networks, and provides convergence guarantees for this framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
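
FedProx's local objective is the standard task loss plus a proximal term (mu/2) * ||w - w_global||^2 that keeps local updates anchored to the global model; a minimal sketch:

```python
import torch.nn.functional as F

def fedprox_loss(model, global_params, x, y, mu=0.01):
    """Task loss plus the FedProx proximal term (mu/2) * ||w - w_global||^2."""
    task_loss = F.cross_entropy(model(x), y)
    prox = sum((w - w0).pow(2).sum()
               for w, w0 in zip(model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox
```

Here `global_params` is a list of detached copies of the server model's parameters, e.g. `[p.detach().clone() for p in global_model.parameters()]`.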

Advances and Open Problems in Federated Learning

Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.

Federated Learning for Non-IID Data via Unified Feature Learning and Optimization Objective Alignment

A Unified Feature learning and Optimization objectives alignment method (FedUFO) is proposed to enable more reasonable and balanced model performance among different clients; it can outperform state-of-the-art approaches, including the competitive data-sharing method.

Ensemble Distillation for Robust Model Fusion in Federated Learning

This work proposes ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data using the outputs of the clients' models, which allows flexible aggregation over heterogeneous client models that can differ, e.g., in size, numerical precision, or structure.
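
A sketch of the server-side distillation step, assuming an unlabeled transfer set is available; the temperature and loss scaling follow standard distillation practice rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distill_step(server_model, client_models, x_unlabeled, optimizer, T=2.0):
    """Distill the averaged client logits into the server model. Only logits
    are averaged, so client models may differ in architecture."""
    with torch.no_grad():
        teacher = torch.stack([m(x_unlabeled) for m in client_models]).mean(0)
    student = server_model(x_unlabeled)
    loss = F.kl_div(F.log_softmax(student / T, dim=1),
                    F.softmax(teacher / T, dim=1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```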

Think Locally, Act Globally: Federated Learning with Local and Global Representations

A new federated learning algorithm is proposed that jointly learns compact local representations on each device and a global model across all devices, which helps to keep device data private and enable communication-efficient training while retaining performance.
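
A sketch of the resulting split, which is roughly the inverse of FedPer's: the compact local encoder never leaves the device, and only the global head is averaged across devices. Module names are illustrative.

```python
import torch.nn as nn

class LGNet(nn.Module):
    """Local representation + global model split: `local_encoder` stays
    private to the device, `global_head` is aggregated by the server."""
    def __init__(self, num_classes: int, feat_dim: int = 64):
        super().__init__()
        self.local_encoder = nn.Sequential(   # never leaves the device
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.global_head = nn.Linear(feat_dim, num_classes)  # aggregated

    def forward(self, x):
        return self.global_head(self.local_encoder(x))
```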
...