PersA-FL: Personalized Asynchronous Federated Learning

@article{Toghani2022PersAFLPA,
  title={PersA-FL: Personalized Asynchronous Federated Learning},
  author={Taha Toghani and Soomin Lee and C{\'e}sar A. Uribe},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.01176}
}
We study the personalized federated learning problem under asynchronous updates. In this problem, each client seeks to obtain a personalized model that simultaneously outperforms local and global models. We consider two optimization-based frameworks for personalization: (i) Model-Agnostic Meta-Learning (MAML) and (ii) Moreau Envelope (ME). MAML involves learning a joint model adapted for each client through fine-tuning, whereas ME requires a bi-level optimization problem with implicit gra…
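For orientation, the two objectives can be written in their standard forms; the fine-tuning step size \alpha, the regularization weight \lambda, and the per-client losses f_i are assumed notation here, since the truncated abstract does not fix symbols:

\min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i\bigl(w - \alpha \nabla f_i(w)\bigr)
% (i) MAML: optimize the shared model as evaluated after one local fine-tuning step

\min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} F_i(w),
\qquad
F_i(w) = \min_{\theta_i \in \mathbb{R}^d} \Bigl\{ f_i(\theta_i) + \tfrac{\lambda}{2} \lVert \theta_i - w \rVert^2 \Bigr\}
% (ii) ME: each client minimizes the Moreau envelope of its local loss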


References

Showing 1–10 of 73 references

Personalized Federated Learning with Moreau Envelopes

This work proposes an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients' regularized loss functions, which helps decouple personalized-model optimization from global-model learning in a bi-level problem tailored to personalized FL.
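As a minimal NumPy sketch of the bi-level update this entry describes, assuming a gradient oracle grad_fi for client i's loss; the function name and hyperparameter values are illustrative, not the paper's exact pseudocode:

import numpy as np

def pfedme_local_round(w, grad_fi, lam=15.0, eta=0.005, inner_steps=5, inner_lr=0.01):
    # inner problem: approximately minimize h(theta) = f_i(theta) + (lam/2)*||theta - w||^2
    theta = w.copy()
    for _ in range(inner_steps):
        theta = theta - inner_lr * (grad_fi(theta) + lam * (theta - w))
    # outer step: the gradient of the Moreau envelope is lam * (w - theta_hat)
    w_new = w - eta * lam * (w - theta)
    return w_new, theta  # local model and personalized model

# example with a quadratic local loss f_i(theta) = 0.5*||theta - 1||^2
w1, theta1 = pfedme_local_round(np.zeros(3), grad_fi=lambda th: th - 1.0)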

Straggler-Resilient Personalized Federated Learning

Experimental results support the superiority of the method over alternative personalized federated schemes in system- and data-heterogeneous environments; the method mitigates the effect of stragglers by adaptively selecting clients based on their computational characteristics and statistical significance.

Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach

A personalized variant of the well-known Federated Averaging algorithm is studied, and its performance is characterized in terms of the closeness of the underlying distributions of user data, measured by distribution distances such as Total Variation and the 1-Wasserstein metric.
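A first-order sketch of the client step in this MAML-style approach, assuming grad_fi returns a stochastic gradient of client i's loss; dropping the Hessian term of the full meta-gradient is the usual first-order approximation, and all names here are illustrative:

def perfedavg_fo_step(w, grad_fi, alpha=0.01, beta=0.001):
    # fine-tune once, then take the meta-gradient at the adapted point
    w_adapted = w - alpha * grad_fi(w)
    return w - beta * grad_fi(w_adapted)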

Adaptive Personalized Federated Learning

Information-theoretically, it is proved that a mixture of local and global models can reduce the generalization error, and a communication-reduced bi-level optimization method is proposed that reduces the number of communication rounds to $O(\sqrt{T})$ and achieves a convergence rate of $O(1/T)$ with some residual error.
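The mixture behind this result is the per-client interpolation, in the usual APFL notation (symbols assumed here):

\bar{v}_i = \alpha_i\, v_i + (1 - \alpha_i)\, w
% v_i: client i's local model, w: global model, \alpha_i: adaptively learned mixing weight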

Personalized Federated Learning with First Order Model Optimization

This work efficiently calculates optimal weighted model combinations for each client, based on how much a client can benefit from another client's model, to achieve personalization in FL.
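One plausible reading of the weighting rule, sketched in NumPy: client i weights each downloaded model by the validation-loss improvement it offers, normalized by parameter distance. The function name, the val_loss oracle, and the exact normalization are assumptions for illustration:

import numpy as np

def fomo_weights(theta_self, thetas_other, val_loss):
    base = val_loss(theta_self)
    # positive part of the loss improvement, scaled down for distant models
    raw = np.array([
        max(base - val_loss(t), 0.0) / (np.linalg.norm(t - theta_self) + 1e-12)
        for t in thetas_other
    ])
    total = raw.sum()
    return raw / total if total > 0 else raw  # all zeros: keep own model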

FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning

A new framework, FLIX, is introduced that takes into account the unique challenges brought by federated learning and enables practitioners to tap into the immense wealth of existing (potentially non-local) methods for distributed optimization.
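In the FLIX formulation (symbols assumed here), each client precomputes its local optimum once, and the framework then solves an ordinary single-level problem over the global model, which is what lets off-the-shelf distributed methods apply:

\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i\bigl(\alpha_i x_i^{\star} + (1 - \alpha_i)\, x\bigr),
\qquad
x_i^{\star} = \operatorname*{arg\,min}_{y} f_i(y)
% \alpha_i \in [0,1] is a fixed per-client personalization weight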

Personalized Federated Learning using Hypernetworks

Since hypernetworks share information across clients, it is shown that pFedHN can generalize better to new clients whose distributions differ from those of any client observed during training, and that it decouples the communication cost from the trainable model size.
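A minimal PyTorch sketch of the idea, with a tiny linear target network; the class name, sizes, and layers are illustrative rather than the paper's architecture:

import torch
import torch.nn as nn

class HyperNet(nn.Module):
    # maps a learned per-client embedding to the weight vector of a small target model
    def __init__(self, n_clients, embed_dim=32, target_params=7850):  # 784*10 + 10
        super().__init__()
        self.embeddings = nn.Embedding(n_clients, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, target_params),
        )

    def forward(self, client_id):
        # only the embedding table and MLP are trained centrally; the output
        # vector is reshaped into client_id's personal model
        return self.mlp(self.embeddings(client_id))

weights = HyperNet(n_clients=50)(torch.tensor([3]))  # weights for client 3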

Federated Learning with Buffered Asynchronous Aggregation

This work proposes a model aggregation scheme, FedBuff, that combines the best properties of synchronous and asynchronous FL, and shows that FedBuff is robust to different staleness distributions and more scalable than synchronous FL.
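A server-side sketch of buffered asynchronous aggregation, assuming clients send pseudo-gradient deltas (old minus new local model) and a tunable buffer size K; names and the sign convention are illustrative:

class BufferedAggregator:
    def __init__(self, w0, buffer_size=10, server_lr=1.0):
        self.w, self.K, self.lr = w0, buffer_size, server_lr
        self.buffer = []

    def receive(self, delta):
        # updates arrive asynchronously, in any order and with any staleness
        self.buffer.append(delta)
        if len(self.buffer) >= self.K:
            avg = sum(self.buffer) / len(self.buffer)
            self.w = self.w - self.lr * avg  # one server step per full buffer
            self.buffer.clear()
        return self.w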

SCAFFOLD: Stochastic Controlled Averaging for Federated Learning

This work obtains tight convergence rates for FedAvg and proves that it suffers from "client drift" when the data is heterogeneous (non-iid), resulting in unstable and slow convergence, and proposes a new algorithm (SCAFFOLD) that uses control variates (variance reduction) to correct for client drift in its local updates.
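A NumPy sketch of the corrected local update, assuming a gradient oracle grad_fi; the control-variate refresh below follows the paper's "Option II":

def scaffold_local_steps(w, grad_fi, c_global, c_local, lr=0.1, steps=10):
    y = w.copy()
    for _ in range(steps):
        # each local gradient is corrected by (c - c_i) to counter client drift
        y = y - lr * (grad_fi(y) - c_local + c_global)
    # Option II: c_i <- c_i - c + (x - y_i) / (K * lr)
    c_local_new = c_local - c_global + (w - y) / (steps * lr)
    return y, c_local_new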

Decentralized personalized federated learning: Lower bounds and optimal algorithm for all personalization modes

...