Corpus ID: 235899367

Decentralized and Personalized Federated Learning

@inproceedings{Sadiev2021DecentralizedAP,
  title={Decentralized and Personalized Federated Learning},
  author={Abdurakhmon Sadiev and Darina Dvinskikh and Aleksandr Beznosikov and Alexander V. Gasnikov},
  year={2021}
}
In this paper, we consider the personalized federated learning problem of minimizing the average of strongly convex functions. We propose an approach that solves this problem over a decentralized network by introducing a penalty function built upon the communication matrix of the network and by applying the Sliding algorithm [10]. The practical efficiency of the proposed approach is supported by numerical experiments.
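The abstract does not spell out the penalized reformulation; as a hedged sketch under the usual setup for penalty-based decentralized methods (the exact scaling of the penalty is an assumption here), it presumably has the following shape, where X = (x_1, …, x_n) stacks the local models, W is a communication (gossip/Laplacian-type) matrix of the network with √W X = 0 exactly on consensual X, and λ > 0 is a penalty parameter:

```latex
% Hedged sketch, not the paper's exact statement: penalized
% decentralized reformulation of personalized federated learning.
% f_i are the strongly convex local losses; the penalty is built
% upon the communication matrix W of the network.
\min_{X=(x_1,\dots,x_n)} \;
  \frac{1}{n}\sum_{i=1}^{n} f_i(x_i)
  \;+\; \frac{\lambda}{2}\,\bigl\|\sqrt{W}\,X\bigr\|^{2}
```

Under this reading, the Sliding algorithm [10] can treat the penalty as the smooth component whose gradient, λWX, requires only one round of communication with neighbors, while the f_i are handled by local steps.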
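For a concrete toy illustration of the penalty construction (a minimal sketch, not the paper's Sliding-based method; the quadratic losses, ring topology, and all parameter values below are assumptions for illustration), plain gradient descent on the penalized objective already exhibits the decentralized structure, since the penalty gradient λWX only mixes neighboring models:

```python
import numpy as np

# Toy sketch (an assumption-laden illustration, not the paper's method):
# gradient descent on  F(X) = (1/n) sum_i f_i(x_i) + (lam/2) <W X, X>
# with hypothetical quadratic local losses f_i(x) = 0.5 * ||x - b_i||^2
# over a ring network. W is the ring's graph Laplacian, so W @ X mixes
# each x_i only with its two neighbours: one communication round per step.

n, d = 8, 3                        # number of nodes, model dimension
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))        # local optima of the f_i

# Laplacian of a ring: degree 2 on the diagonal, -1 to both neighbours.
W = 2.0 * np.eye(n)
for i in range(n):
    W[i, (i - 1) % n] -= 1.0
    W[i, (i + 1) % n] -= 1.0

lam = 1.0                          # penalty / personalization parameter
step = 0.1                         # safe for L <= 1/n + lam * lambda_max(W)
X = np.zeros((n, d))               # stacked local models x_1, ..., x_n

for _ in range(500):
    grad_local = (X - b) / n       # gradients of the (1/n) f_i terms
    grad_penalty = lam * (W @ X)   # decentralized part: neighbours only
    X -= step * (grad_local + grad_penalty)

# Larger lam drives the x_i towards consensus; lam -> 0 leaves x_i near b_i.
print(np.round(X, 3))
```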

References

Showing 10 of 15 references.
Decentralized Personalized Federated Min-Max Problems
The first study of PFL for saddle-point problems, which cover a broader class of optimization tasks and are thus of more relevance for applications than minimization problems.
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
Establishes the first lower bounds for this formulation of personalized federated learning, for both the communication complexity and the local oracle complexity, and designs several optimal methods matching these lower bounds in almost all regimes.
Survey of Personalization Techniques for Federated Learning
Highlights the need for personalization and surveys recent research on the techniques proposed to personalize global models so that they work better for individual clients.
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes.
Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques
Proposes a general personalized objective capable of recovering essentially any existing personalized FL objective as a special case, and develops a universal optimization theory applicable to all convex personalized FL models in the literature.
Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks
Verifies the efficiency of MSDA against state-of-the-art methods on two problems: least-squares regression and classification by logistic regression.
Accelerated meta-algorithm for convex optimization
Proposes a meta-algorithm more general than those in the literature, yielding better convergence rates and practical performance in several settings, as well as nearly optimal methods for minimizing smooth functions with Lipschitz derivatives of arbitrary order.
Communication-Efficient Learning of Deep Networks from Decentralized Data
Presents a practical method for the federated learning of deep networks based on iterative model averaging, together with an extensive empirical evaluation considering five different model architectures and four datasets.
Universal gradient descent
In this book we collect many different and useful facts around the gradient descent method. First of all, we consider gradient descent with an inexact oracle. We build a general model of the optimized function …
Gradient sliding for composite optimization
Shows that if the smooth component of the composite function is strongly convex, the developed gradient sliding algorithms can significantly reduce the number of gradient and subgradient evaluations for the smooth and nonsmooth components to O(log(1/ε)) and O(1/ε), respectively.