Fairness in Federated Learning via Core-Stability

Bhaskar Ray Chaudhury, Linyi Li, Mintong Kang, Bo Li, Ruta Mehta
Federated learning provides an effective paradigm to jointly optimize a model that benefits from rich distributed data while protecting data privacy. Nonetheless, the heterogeneous nature of distributed data, especially in the non-IID setting, makes it challenging to define and ensure fairness among local agents. For instance, it is intuitively “unfair” for agents with high-quality data to sacrifice their performance due to other agents with low-quality data. Currently popular egalitarian and…




Models of fairness in federated learning

For egalitarian fairness, a tight multiplicative bound on how widely error rates can diverge between agents federating together is obtained and it is shown that sub-proportional error is guaranteed for any individually rational federating coalition.

Improving Fairness via Federated Learning

A new theoretical framework is proposed, with which it is demonstrated that federated learning can strictly boost model fairness compared with non-federated algorithms, and it is shown that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.

Optimality and Stability in Federated Learning: A Game-theoretic Approach

This work motivates and proves a notion of optimality given by the average error rates among federating agents (players), and gives the first constant-factor bound on the performance gap between stability and optimality.

Federating for Learning Group Fair Models

This work studies minimax group fairness in paradigms where different participating entities may only have access to a subset of the population groups during the training phase, and provides an optimization algorithm for solving the proposed problem that provably enjoys the performance guarantees of centralized learning algorithms.

Fairness-aware Agnostic Federated Learning

This paper develops a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of an unknown testing distribution, and is presented as the first work to achieve fairness in federated learning.

Fairness and accuracy in horizontal federated learning

An Efficiency-Boosting Client Selection Scheme for Federated Learning With Fairness Guarantee

An estimation of the model-exchange time between each client and the server is proposed, based on which a fairness-guaranteed client selection algorithm, termed RBCS-F, is designed.

Towards Building a Robust and Fair Federated Learning System

This work proposes a novel Robust and Fair Federated Learning (RFFL) framework which utilizes reputation scores to address both issues, ensuring that high-contributing participants are rewarded with high-performing models while low- or non-contributing participants can be detected and removed.

Fair Resource Allocation in Federated Learning

This work proposes q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.
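As a reading aid (not code from the listed paper), a minimal NumPy sketch of the q-FFL idea: each device's loss F_k is raised to the power q+1, so q = 0 recovers the usual weighted-average objective while larger q up-weights high-loss devices and pushes toward a more uniform accuracy distribution. The loss and weight values below are illustrative.

```python
import numpy as np

def qffl_objective(losses, weights, q):
    """q-FFL objective: sum_k p_k * F_k(w)^(q+1) / (q+1).

    q = 0 reduces to the standard weighted-average (FedAvg-style)
    objective; larger q penalizes high-loss devices more heavily.
    """
    losses = np.asarray(losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * losses ** (q + 1) / (q + 1)))

def qffl_client_scalings(losses, weights, q):
    """Per-device gradient scaling implied by the objective:
    d/dw [p_k F_k^(q+1)/(q+1)] = p_k * F_k^q * dF_k/dw,
    so a device's gradient is up-weighted by F_k^q."""
    losses = np.asarray(losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights * losses ** q
```

With q = 0 both devices below contribute equally; with q = 2 the high-loss device dominates the update, which is the fairness mechanism the summary describes.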

Agnostic Federated Learning

This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
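To make the agnostic-FL notion concrete, here is a hedged sketch (illustrative, not the paper's implementation) of the inner maximization in the minimax problem min_w max_{λ ∈ simplex} Σ_k λ_k L_k(w): one exponentiated-gradient ascent step on the mixture weights λ, which shifts mass toward the currently worst-off clients.

```python
import numpy as np

def afl_weight_step(client_losses, lam, eta=0.5):
    """One multiplicative-weights ascent step on the mixture
    weights lambda over client distributions: lambda_k is scaled
    by exp(eta * L_k) and renormalized onto the simplex, so
    clients with higher current loss receive more weight."""
    losses = np.asarray(client_losses, dtype=float)
    lam = np.asarray(lam, dtype=float)
    new = lam * np.exp(eta * losses)
    return new / new.sum()
```

Starting from uniform weights, a client with loss 2.0 ends up with more than its uniform share after one step, which is what yields the fairness interpretation: the model is optimized for the hardest mixture of client distributions.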