• Corpus ID: 240288729

Improving Fairness via Federated Learning

  • Yuchen Zeng, Hongxu Chen, Kangwook Lee
Many algorithms have recently been proposed for learning a fair classifier from centralized data. However, how to privately train a fair classifier on decentralized data has not been fully studied. In this work, we first propose a new theoretical framework with which we analyze the value of federated learning in improving fairness. Our analysis reveals that federated learning can strictly boost model fairness compared with all non-federated algorithms. We then theoretically and… 
Models of fairness in federated learning
For egalitarian fairness, a tight multiplicative bound is obtained on how widely error rates can diverge between agents federating together, and sub-proportional error is shown to be guaranteed for any individually rational federating coalition.
Federated Gaussian Process: Convergence, Automatic Personalization and Multi-fidelity Modeling
Through extensive case studies, it is shown that FGPR excels in a wide range of applications and is a promising approach for privacy-preserving multi-fidelity data modeling.
The Internet of Federated Things (IoFT): A Vision for the Future and In-depth Survey of Data-driven Approaches for Federated Learning
The defining characteristics of IoFT are introduced, along with data-driven FL approaches, opportunities, and challenges that enable decentralized inference along three dimensions: a global model that maximizes utility across all IoT devices; a personalized model that borrows strength across all devices yet retains its own parameters; and a meta-learning model that quickly adapts to new devices or learning tasks.
Minimax Demographic Group Fairness in Federated Learning
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models. In this work, we study minimax group fairness in federated learning.


Fairness-aware Agnostic Federated Learning
This paper develops a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of an unknown testing distribution, and is the first work to achieve fairness in federated learning under this setting.
Collaborative Fairness in Federated Learning
  • L. Lyu, Xinyi Xu, Qian Wang
  • Computer Science, Mathematics
    Federated Learning
  • 2020
This work investigates collaborative fairness in FL and proposes a novel Collaborative Fair Federated Learning (CFFL) framework, which uses reputation to induce participants to converge to different models, thereby achieving fairness without compromising predictive performance.
Fair Resource Allocation in Federated Learning
This work proposes q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.
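The q-FFL objective reweights each device's empirical loss F_k(w) by raising it to the power q+1, so that larger q places more emphasis on devices with higher loss. A minimal sketch of the objective value follows; the function name and the uniform-weight default are our own choices, not the paper's:

```python
import numpy as np

def qffl_objective(client_losses, q, weights=None):
    """q-FFL-style objective: sum_k p_k * F_k(w)^(q+1) / (q+1).

    Larger q up-weights clients with higher loss, encouraging a more
    uniform accuracy distribution; q = 0 recovers the standard
    weighted-average (FedAvg-style) objective.
    """
    losses = np.asarray(client_losses, dtype=float)
    if weights is None:
        weights = np.full(len(losses), 1.0 / len(losses))
    return float(np.sum(weights * losses ** (q + 1) / (q + 1)))
```

With q = 0 this is just the weighted mean of client losses; increasing q makes a single badly-served client dominate the objective.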
Agnostic Federated Learning
This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
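The agnostic formulation is a minimax problem: minimize over model parameters w the worst case, over mixture weights λ in the simplex, of the mixture loss Σ_k λ_k L_k(w). A toy sketch on scalar quadratic client losses, pairing gradient descent on the model with exponentiated-gradient ascent on the mixture; the quadratic setup and all hyperparameters are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def agnostic_minimax(client_opts, steps=500, lr_w=0.1, lr_lam=0.1):
    """Toy minimax training for an agnostic-FL-style objective.

    Each client k has quadratic loss L_k(w) = (w - c_k)^2. We alternate
    gradient descent on w with exponentiated-gradient ascent on the
    mixture weights lambda, approximating
        min_w  max_{lambda in simplex}  sum_k lambda_k * L_k(w).
    """
    c = np.asarray(client_opts, dtype=float)
    w = 0.0
    lam = np.full(len(c), 1.0 / len(c))
    for _ in range(steps):
        losses = (w - c) ** 2
        # ascent on lambda; the multiplicative update keeps it in the simplex
        lam = lam * np.exp(lr_lam * losses)
        lam /= lam.sum()
        # descent on w under the current adversarial mixture
        grad_w = np.sum(lam * 2.0 * (w - c))
        w -= lr_w * grad_w
    return w, lam
```

With two clients whose optima sit at 0 and 1, the minimax solution equalizes the two losses at w = 0.5, which is also where the paper's fairness interpretation comes from: no client's distribution is sacrificed for another's.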
Enforcing fairness in private federated learning via the modified method of differential multipliers
An algorithm to enforce group fairness in private federated learning, where users’ data does not leave their devices, is introduced; the proposed algorithm, FPFL, is tested on a federated version of the Adult dataset and an “unfair” version of the FEMNIST dataset.
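The underlying differential-multiplier idea can be sketched as gradient descent on the model paired with gradient ascent on a non-negative Lagrange multiplier for the fairness constraint. FPFL's modified method adds damping terms not shown here, and every name below is our own; this is a generic sketch of the constrained-optimization pattern, not the paper's algorithm:

```python
def constrained_descent(loss_grad, cons, cons_grad, x0, eps=0.0,
                        steps=2000, lr=0.01):
    """Descent-ascent on the Lagrangian L(x, lam) = f(x) + lam*(g(x) - eps).

    Minimizes f(x) subject to g(x) <= eps: gradient descent on the
    parameter x, gradient ascent on the multiplier lam, which is
    clamped at zero because the constraint is an inequality.
    """
    x, lam = float(x0), 0.0
    for _ in range(steps):
        x -= lr * (loss_grad(x) + lam * cons_grad(x))
        lam = max(0.0, lam + lr * (cons(x) - eps))
    return x, lam
```

For example, minimizing f(x) = x² subject to 1 − x ≤ 0 drives x toward the constraint boundary at x = 1 with multiplier λ = 2; in the fairness setting, g would instead measure the demographic disparity of the model.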
Hierarchically Fair Federated Learning
A novel hierarchically fair federated learning (HFFL) framework is proposed, under which agents are rewarded in proportion to their pre-negotiated contribution levels; the framework is further extended to incorporate heterogeneous models.
FedFair: Training Fair Models In Cross-Silo Federated Learning
FedFair, a federated learning framework that trains a fair, high-performance model without infringing data privacy, is developed; extensive experiments on three real-world datasets demonstrate its excellent fair-model training performance.
FairFL: A Fair Federated Learning Approach to Reducing Demographic Bias in Privacy-Sensitive Classification Models
  • D. Zhang, Ziyi Kou, Dong Wang
  • Computer Science
    2020 IEEE International Conference on Big Data (Big Data)
  • 2020
FairFL, a fair federated learning framework dedicated to reducing bias in privacy-sensitive ML applications, is developed; it consists of a principled deep multi-agent reinforcement learning framework and a secure information-aggregation protocol, and it optimizes both the accuracy and the fairness of the learned model while respecting the clients' strict privacy constraints.
GIFAIR-FL: An Approach for Group and Individual Fairness in Federated Learning
This paper proposes GIFAIR-FL, an approach that imposes group and individual fairness on federated learning settings by adding a regularization term, and shows improved fairness while achieving superior or similar prediction accuracy.
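A hedged sketch of such a regularized objective, assuming the penalty is the pairwise spread between per-group losses; the exact regularization term in GIFAIR-FL may differ, and the function name is ours:

```python
def gifair_objective(group_losses, lam):
    """Average loss plus a spread-penalizing fairness regularizer.

    The regularizer sums the absolute differences between every pair of
    per-group losses, so it is zero exactly when all groups incur equal
    loss; lam trades average accuracy against this fairness penalty.
    """
    n = len(group_losses)
    avg = sum(group_losses) / n
    spread = sum(abs(a - b)
                 for i, a in enumerate(group_losses)
                 for b in group_losses[i + 1:])
    return avg + lam * spread
```

With lam = 0 the objective reduces to the plain average loss; raising lam pushes the optimizer toward solutions whose group losses are equalized.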
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals are treated similarly).