Corpus ID: 235422538

Decentralized Personalized Federated Min-Max Problems

@article{Beznosikov2021DecentralizedPF,
  title={Decentralized Personalized Federated Min-Max Problems},
  author={Aleksandr Beznosikov and Vadim Sushko and Abdurakhmon Sadiev and Alexander V. Gasnikov},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.07289}
}
Personalized Federated Learning (PFL) has recently seen tremendous progress, allowing the design of novel machine learning applications that preserve the privacy of the training data. Existing theoretical results in this field mainly focus on distributed optimization for minimization problems. This paper is the first to study PFL for saddle point problems (which cover a broader class of optimization problems), allowing for a richer class of applications requiring more than just solving…
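To fix notation, a common penalty-based way to formalize personalization, extended here from the minimization setting to saddle points as an illustration (not necessarily the exact objective of this paper), gives each of the $M$ devices its own pair of variables and penalizes deviation from the averages:

$$\min_{x_1,\dots,x_M}\ \max_{y_1,\dots,y_M}\ \frac{1}{M}\sum_{m=1}^{M} f_m(x_m,y_m)\;+\;\frac{\lambda}{2M}\sum_{m=1}^{M}\|x_m-\bar{x}\|^2\;-\;\frac{\lambda}{2M}\sum_{m=1}^{M}\|y_m-\bar{y}\|^2,$$

where $\bar{x}=\frac{1}{M}\sum_m x_m$ and $\bar{y}=\frac{1}{M}\sum_m y_m$. The parameter $\lambda\ge 0$ controls the degree of personalization: $\lambda=0$ decouples the devices into purely local problems, while large $\lambda$ forces the local models toward a single global one; the sign of the $y$-penalty is chosen so the objective stays convex in $x$ and concave in $y$.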


Decentralized and Personalized Federated Learning
In this paper, we consider the personalized federated learning problem of minimizing the average of strongly convex functions. We propose an approach that allows solving the problem on a decentralized…

References

Showing 1–10 of 30 references
Stochastic Variance Reduction for Variational Inequality Methods
Stochastic variance-reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions are proposed; they either match or improve the best-known complexities for solving structured min-max problems.
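As a concrete illustration of the idea (a minimal sketch on a toy bilinear problem with assumed names, not the algorithms proposed in this reference), a loopless-SVRG-style variance-reduced estimate can be plugged into an extragradient update for a finite-sum monotone operator:

```python
# A minimal sketch, assuming a toy finite-sum bilinear saddle point
#   min_x max_y (1/n) * sum_i x^T A_i y,
# whose monotone operator is F(x, y) = ((1/n) sum_i A_i y, -(1/n) sum_i A_i^T x).
# Names (A, step, p_full, vr_extragradient) are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.standard_normal((n, d, d))       # one d x d matrix per finite-sum component

def full_op(x, y):
    Abar = A.mean(axis=0)
    return Abar @ y, -Abar.T @ x

def comp_op(i, x, y):
    return A[i] @ y, -A[i].T @ x

def vr_estimate(i, x, y, wx, wy, gwx, gwy):
    # SVRG-style estimator: component operator at the point, corrected by the snapshot
    gx, gy = comp_op(i, x, y)
    rx, ry = comp_op(i, wx, wy)
    return gx - rx + gwx, gy - ry + gwy

def vr_extragradient(T=5000, step=0.05, p_full=0.1):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    wx, wy = x.copy(), y.copy()           # reference ("snapshot") point
    gwx, gwy = full_op(wx, wy)            # full operator at the snapshot
    for _ in range(T):
        i = rng.integers(n)
        vx, vy = vr_estimate(i, x, y, wx, wy, gwx, gwy)
        xh, yh = x - step * vx, y - step * vy          # extrapolation step
        vx, vy = vr_estimate(i, xh, yh, wx, wy, gwx, gwy)
        x, y = x - step * vx, y - step * vy            # update step
        if rng.random() < p_full:                      # loopless snapshot refresh
            wx, wy = x.copy(), y.copy()
            gwx, gwy = full_op(wx, wy)
    return x, y

x, y = vr_extragradient()
print(np.linalg.norm(x), np.linalg.norm(y))  # the toy problem's saddle point is the origin
```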
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
This work establishes the first lower bounds for this formulation of personalized federated learning, for both the communication complexity and the local oracle complexity, and designs several optimal methods matching these lower bounds in almost all regimes.
Solving variational inequalities with Stochastic Mirror-Prox algorithm
In this paper we consider iterative methods for stochastic variational inequalities (s.v.i.) with monotone operators. Our basic assumption is that the operator possesses both smooth and nonsmooth…
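For orientation, the deterministic prototype behind this method (stated here in its standard textbook form rather than the stochastic variant of the reference) makes two prox-mapping steps per iteration. With a Bregman divergence $V(z,u)$ induced by the mirror map, a stepsize $\gamma>0$, and the monotone operator $F$,

$$w_k=\arg\min_{u\in Z}\big\{\gamma\langle F(z_k),u\rangle+V(z_k,u)\big\},\qquad z_{k+1}=\arg\min_{u\in Z}\big\{\gamma\langle F(w_k),u\rangle+V(z_k,u)\big\}.$$

With the Euclidean choice $V(z,u)=\tfrac12\|u-z\|^2$ this reduces to the projected extragradient method of Korpelevich cited below; the stochastic version replaces $F$ with unbiased samples of the operator.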
G. M. Korpelevich. The extragradient method for finding saddle points and other problems. 1976.
Federated Learning of a Mixture of Global and Local Models
This work proposes a new optimization formulation for training federated learning models that seeks an explicit trade-off between the traditional global model and the local models, which can be learned by each device from its own private data without any communication.
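The trade-off described here is usually written as a penalized objective over local models $x_1,\dots,x_n$ (reproduced here as an illustrative template, so treat the exact constants as assumptions):

$$\min_{x_1,\dots,x_n}\ \frac{1}{n}\sum_{i=1}^{n} f_i(x_i)\;+\;\frac{\lambda}{2n}\sum_{i=1}^{n}\|x_i-\bar{x}\|^2,\qquad \bar{x}=\frac{1}{n}\sum_{i=1}^{n} x_i,$$

where $f_i$ is the local loss of device $i$; $\lambda=0$ yields purely local training without communication, and $\lambda\to\infty$ recovers the usual single global model.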
Distributed Stochastic Multi-Task Learning with Graph Regularization
It is shown how simply skewing the averaging weights or controlling the stepsize allows learning different, but related, tasks on different machines.
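A generic form of such graph regularization (an illustrative template, not necessarily the exact objective of this reference) couples per-machine models along the edges of a task-similarity graph $G=(V,E)$:

$$\min_{x_1,\dots,x_n}\ \sum_{i=1}^{n} f_i(x_i)\;+\;\frac{\lambda}{2}\sum_{(i,j)\in E} w_{ij}\,\|x_i-x_j\|^2,$$

so that larger edge weights $w_{ij}$ pull the corresponding tasks' models closer together.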
Federated Multi-Task Learning
This work shows that multi-task learning is naturally suited to handle the statistical challenges of this setting, and proposes a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues.
Advances and Open Problems in Federated Learning
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Decentralized Distributed Optimization for Saddle Point Problems
This work considers distributed convex-concave saddle point problems over arbitrary connected undirected networks, proposes a decentralized distributed algorithm for their solution, and proves non-asymptotic convergence rate estimates with explicit dependence on the network characteristics.
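To illustrate the structure of such methods, here is a minimal sketch that combines gossip averaging over a mixing matrix with local extragradient steps on a toy bilinear problem; the topology, losses, and names are assumptions, and this simple combination is not the provably convergent scheme of the reference:

```python
# A minimal sketch, assuming a ring of nodes with bilinear local losses
# f_i(x, y) = x^T A_i y; the mixing matrix W encodes the communication graph.
# This toy combination is illustrative only, not the cited paper's method.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, d = 6, 4
A = rng.standard_normal((n_nodes, d, d))

# Symmetric doubly stochastic mixing matrix for a ring: 1/2 self, 1/4 per neighbor.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] += 0.25
    W[i, (i + 1) % n_nodes] += 0.25

def local_op(X, Y):
    # stacked monotone operators (grad_x f_i, -grad_y f_i) for all nodes at once
    GX = np.einsum('ijk,ik->ij', A, Y)
    GY = -np.einsum('ikj,ik->ij', A, X)
    return GX, GY

def decentralized_extragradient(T=3000, step=0.05):
    X = rng.standard_normal((n_nodes, d))
    Y = rng.standard_normal((n_nodes, d))
    for _ in range(T):
        X, Y = W @ X, W @ Y                      # communication: gossip averaging
        GX, GY = local_op(X, Y)
        Xh, Yh = X - step * GX, Y - step * GY    # local extrapolation step
        GX, GY = local_op(Xh, Yh)
        X, Y = X - step * GX, Y - step * GY      # local update step
    return X, Y

X, Y = decentralized_extragradient()
print("consensus gap:", np.linalg.norm(X - X.mean(axis=0)))
```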
Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
This article presents two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters; they obtain near-optimal communication complexity and optimal gradient computation complexity for non-smooth distributed optimization.
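The penalty framework named in the title can be summarized as follows (a standard reformulation, given here for context rather than as the article's exact scheme). Stacking the local copies $x_1,\dots,x_n$ into $\mathbf{x}$ and letting $W$ be a gossip/Laplacian-type matrix whose kernel is the consensus subspace, the constrained problem $\min_{\mathbf{x}}\sum_i f_i(x_i)$ subject to $W^{1/2}\mathbf{x}=0$ is relaxed to

$$\min_{\mathbf{x}}\ \sum_{i=1}^{n} f_i(x_i)\;+\;\frac{\beta_k}{2}\,\|W^{1/2}\mathbf{x}\|^2,$$

where the penalty parameter $\beta_k$ increases over the iterations so that the penalized solutions approach the consensus-constrained one; accelerated gradient iterations are then run on the penalized objective.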