Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm for All Personalization Modes

@article{Sadiev2021DecentralizedPF,
  title={Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm for All Personalization Modes},
  author={Abdurakhmon Sadiev and Ekaterina Borodich and Aleksandr Beznosikov and Darina Dvinskikh and Saveliy Chezhegov and Rachael Tappenden and Martin Tak{\'a}{\v c} and Alexander V. Gasnikov},
  journal={EURO Journal on Computational Optimization},
  year={2021}
}

A Damped Newton Method Achieves Global $O\left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate

The first stepsize schedule for the Newton method is presented that yields fast global and local convergence guarantees, and a local quadratic rate is proved, matching the best-known local rate of second-order methods.
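For intuition, the damped Newton iteration the title refers to has the general form below; the specific damping schedule $\alpha_k$ is the paper's contribution and is not reproduced here (this is a generic sketch with assumed notation $f$, $x_k$, $\alpha_k$):

\[
x_{k+1} \;=\; x_k \;-\; \alpha_k \,\bigl[\nabla^2 f(x_k)\bigr]^{-1} \nabla f(x_k), \qquad \alpha_k \in (0,1],
\]

where $\alpha_k = 1$ recovers the classical Newton step; the damping is what allows a global $O\left(\frac{1}{k^2}\right)$ guarantee while retaining the local quadratic rate.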

PersA-FL: Personalized Asynchronous Federated Learning

This work focuses on improving the scalability of personalized federated learning by removing the synchronous communication assumption and extends the studied function class by removing boundedness assumptions on the gradient norm.

References


Lower Bounds and Optimal Algorithms for Personalized Federated Learning

This work establishes the first lower bounds for this formulation of personalized federated learning, for both the communication complexity and the local oracle complexity, and designs several optimal methods matching these lower bounds in almost all regimes.

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems

The ASVRCD method can handle problems with a non-separable, non-smooth regularizer while accessing only a random block of partial derivatives in each iteration, and it incorporates Nesterov's momentum, which offers favorable iteration complexity guarantees over both SEGA and SVRCD.

Accelerated meta-algorithm for convex optimization

The proposed meta-algorithm is more general than those in the literature; it yields better convergence rates and practical performance in several settings, as well as nearly optimal methods for minimizing smooth functions with Lipschitz derivatives of arbitrary order.

Federated Learning of a Mixture of Global and Local Models

This work proposes a new optimization formulation for training federated learning models that seeks an explicit trade-off between the traditional global model and the local models, which each device can learn from its own private data without any communication.
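A commonly cited form of this mixture objective (a sketch with assumed notation: $n$ devices, local losses $f_i$, local models $x_i$, their average $\bar{x}$, and penalty parameter $\lambda$) is

\[
\min_{x_1,\dots,x_n \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} f_i(x_i) \;+\; \frac{\lambda}{2n}\sum_{i=1}^{n} \bigl\| x_i - \bar{x} \bigr\|^2, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\]

so that $\lambda = 0$ yields purely local models, while letting $\lambda \to \infty$ recovers a single traditional global model.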

LIBSVM: A library for support vector machines

Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
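As a minimal usage sketch (assuming a Python environment with scikit-learn, whose SVC class wraps LIBSVM; the dataset and hyperparameters here are purely illustrative):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC  # scikit-learn's SVC is built on top of LIBSVM

# Illustrative data: a small multiclass problem.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF kernel with probability estimates enabled (Platt-style scaling, as in LIBSVM).
clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
print("class probabilities for one sample:", clf.predict_proba(X_test[:1]))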

A Field Guide to Federated Optimization

This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms through concrete examples and practical implementation, with a focus on conducting effective simulations to infer real-world performance.

Decentralized Personalized Federated Min-Max Problems

This paper is the first to study PFL for saddle-point problems (which cover a broader class of optimization problems), allowing for a richer class of applications that require more than solving minimization problems.
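One natural way to write such a personalized saddle-point objective, by analogy with the mixture formulation above (an assumed illustrative form with $M$ devices, local payoffs $f_m$, and penalty parameters $\lambda_x$, $\lambda_y$; not necessarily the paper's exact objective), is

\[
\min_{x_1,\dots,x_M} \; \max_{y_1,\dots,y_M} \; \frac{1}{M}\sum_{m=1}^{M} f_m(x_m, y_m) \;+\; \frac{\lambda_x}{2M}\sum_{m=1}^{M}\|x_m - \bar{x}\|^2 \;-\; \frac{\lambda_y}{2M}\sum_{m=1}^{M}\|y_m - \bar{y}\|^2,
\]

where the quadratic terms pull each device's min and max variables toward the respective network averages $\bar{x}$ and $\bar{y}$.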

On Accelerated Methods for Saddle-Point Problems with Composite Structure

This work considers strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables, and proposes a variance-reduction algorithm with complexity estimates superior to the existing bounds in the literature.

Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques

A general personalized objective capable of recovering essentially any existing personalized FL objective as a special case is proposed, and a universal optimization theory applicable to all convex personalized FL models in the literature is developed.

Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters

This article presents two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters, which obtain near-optimal communication complexity and optimal gradient computation complexity for nonsmooth distributed optimization.
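The penalty framework referred to here can be sketched as follows (assumed notation: local objectives $f_i$, stacked variable $x = (x_1,\dots,x_n)$, gossip matrix $W$ encoding the network, and an increasing penalty parameter $\beta_k$; this is a generic sketch, not necessarily the paper's exact formulation). The consensus-constrained problem $\min \sum_i f_i(x_i)$ subject to $x_1 = \dots = x_n$ is replaced by the unconstrained penalized problem

\[
\min_{x \in \mathbb{R}^{nd}} \; \sum_{i=1}^{n} f_i(x_i) \;+\; \frac{\beta_k}{2}\, x^\top \bigl((I - W)\otimes I_d\bigr)\, x,
\]

which is solved by an accelerated gradient method while $\beta_k$ grows across stages, so that the consensus violation is driven to zero.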