Corpus ID: 240288729

Improving Fairness via Federated Learning

@article{Zeng2021ImprovingFV,
  title={Improving Fairness via Federated Learning},
  author={Yuchen Zeng and Hongxu Chen and Kangwook Lee},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.15545}
}
Recently, many algorithms have been proposed for learning a fair classifier from decentralized data. However, several theoretical and algorithmic questions remain open. First, is federated learning necessary, i.e., can we simply train locally fair classifiers and aggregate them? In this work, we first propose a new theoretical framework, with which we demonstrate that federated learning can strictly boost model fairness compared with such non-federated algorithms. We then theoretically and…
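
To make the comparison concrete, here is a minimal sketch (not the paper's algorithm; the penalty form, step sizes, and function names are illustrative assumptions) contrasting the two strategies the abstract asks about: training locally fair classifiers and averaging them once, versus FedAvg-style rounds on the same fairness-regularized objective.

# Minimal sketch, not the paper's algorithm: contrast "train locally fair
# classifiers and average them once" with FedAvg-style rounds on the same
# fairness-regularized logistic-regression objective. All names and the
# penalty form are illustrative assumptions; each client is assumed to
# hold examples from both protected groups.
import numpy as np

def fair_grad(w, X, y, a, lam=1.0):
    """Gradient of the logistic loss plus a squared demographic-parity-style gap penalty."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad_loss = X.T @ (p - y) / len(y)
    gap = p[a == 1].mean() - p[a == 0].mean()          # group score gap
    s = p * (1.0 - p)                                  # d p_i / d (x_i @ w)
    grad_gap = (X[a == 1] * s[a == 1, None]).mean(0) - (X[a == 0] * s[a == 0, None]).mean(0)
    return grad_loss + lam * 2.0 * gap * grad_gap

def local_train(w, client, steps, lr=0.1):
    X, y, a = client
    for _ in range(steps):
        w = w - lr * fair_grad(w, X, y, a)
    return w

def one_shot_average(clients, d):
    # Non-federated baseline: each client trains a fair model alone, then average once.
    return np.mean([local_train(np.zeros(d), c, steps=100) for c in clients], axis=0)

def fedavg_fair(clients, d, rounds=20):
    # Federated alternative: repeated local updates from the shared model, then averaging.
    w = np.zeros(d)
    for _ in range(rounds):
        w = np.mean([local_train(w.copy(), c, steps=5) for c in clients], axis=0)
    return w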

Provably Fair Federated Learning via Bounded Group Loss

This work provides a new definition for group fairness in federated learning based on the notion of Bounded Group Loss (BGL), which can be easily applied to common federated learning objectives, and proposes a scalable algorithm that optimizes the empirical risk under global fairness constraints.

FAIR-FATE: Fair Federated Learning with Momentum

Experimental results on four real-world datasets demonstrate that FAIR-FATE outperforms state-of-the-art fair Federated Learning algorithms under different levels of data heterogeneity.

Fair Federated Learning via Bounded Group Loss

This work explores and extends the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness and proposes a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints.
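
For reference, the Bounded Group Loss notion behind these two entries is usually stated as the constrained problem below; the symbols (loss ell, bound zeta, group attribute A) follow the standard formulation rather than either paper's exact notation.

% Bounded Group Loss (BGL): minimize overall risk while keeping the
% expected loss on every protected group a below a common bound \zeta.
\min_{h \in \mathcal{H}} \ \mathbb{E}\big[\ell(h(X), Y)\big]
\quad \text{s.t.} \quad
\mathbb{E}\big[\ell(h(X), Y) \mid A = a\big] \le \zeta
\quad \text{for every group } a.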

Fairness in Federated Learning via Core-Stability

This work models the task of learning a shared predictor in the federated setting as a fair public decision-making problem, proposes an efficient federated learning protocol, CoreFed, to optimize a core-stable predictor, and empirically validates the analysis on two real-world datasets.

Proportional Fair Clustered Federated Learning

A family of iterative algorithms is proposed that balances learning performance and proportional fairness by making cluster assignments randomized functions of the learning losses, and the trade-off these algorithms induce between the accuracy of cluster estimation and the level of randomization is characterized.
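
As a rough illustration of "cluster assignments as randomized functions of the learning losses", one natural rule (an assumption for illustration, not the paper's exact mechanism) is a softmax over negative losses, with a temperature controlling the randomization level.

# Illustrative sketch only: a client joins cluster k with probability
# decreasing in its loss under cluster k's model; temperature tau
# controls how random the assignment is.
import numpy as np

def cluster_assignment_probs(losses, tau=1.0):
    """losses: this client's losses under each cluster model."""
    logits = -np.asarray(losses, dtype=float) / tau
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits)
    return p / p.sum()

probs = cluster_assignment_probs([0.9, 0.4, 0.7], tau=0.5)
cluster = np.random.choice(len(probs), p=probs)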

Models of fairness in federated learning

For egalitarian fairness, a tight multiplicative bound on how widely error rates can diverge between agents federating together is obtained, and it is shown that sub-proportional error is guaranteed for any individually rational federating coalition.

FairFed: Enabling Group Fairness in Federated Learning

This work proposes FairFed, a novel algorithm for fairness-aware aggregation to enhance group fairness in federated learning; it is server-side and agnostic to the applied local debiasing, thus allowing for different local debiasing methods across clients.
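
A hedged sketch of the aggregation idea (the paper's exact update differs; the deviation penalty beta and the choice of fairness metric are placeholders): clients whose local fairness metric deviates more from the global one receive smaller aggregation weight.

# Sketch in the spirit of fairness-aware aggregation, not FairFed's exact
# update: start from data-size weights and down-weight clients whose local
# fairness gap deviates from the global gap.
import numpy as np

def fairness_aware_weights(base_weights, local_gaps, global_gap, beta=1.0):
    base_weights = np.asarray(base_weights, dtype=float)
    deviation = np.abs(np.asarray(local_gaps, dtype=float) - global_gap)
    w = np.clip(base_weights - beta * deviation, 0.0, None)
    return w / w.sum()

# Example: 3 clients with data-size weights and equal-opportunity gaps.
print(fairness_aware_weights([0.5, 0.3, 0.2], [0.05, 0.20, 0.10], 0.08))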

An Auditing Framework for Analyzing Fairness of Spatial-Temporal Federated Learning Applications

A set of metrics for defining individual fairness with spatial-temporal data is proposed, approaches for measuring these metrics in distributed settings are introduced, and a framework is built to monitor the fairness of FL models dynamically.

Fairness-aware Federated Matrix Factorization

Empirical results show that federated learning may naturally improve user group fairness and the proposed framework can effectively control this fairness with low communication overheads.

Aggregation Techniques in Federated Learning: Comprehensive Survey, Challenges and Opportunities

  • Mukund Prasad Sah, Amritpal Singh
  • Computer Science
    2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE)
  • 2022
Instead of taking the data to the model, the model is sent to the data; that is the core concept of the Federated Learning paradigm. In this way the authors can preserve user privacy and also train the model on rich, personalized data.

References


Agnostic Federated Learning

This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
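
The agnostic objective is typically written as a minimax over mixtures of the client distributions; the notation below is the standard statement, with the mixture weights restricted to a subset Lambda of the simplex.

% Agnostic federated learning: optimize the model for the worst-case
% mixture \lambda of the p client distributions.
\min_{w} \ \max_{\lambda \in \Lambda \subseteq \Delta_p} \ \sum_{k=1}^{p} \lambda_k \, L_k(w),
\qquad
L_k(w) = \mathbb{E}_{(x,y)\sim \mathcal{D}_k}\big[\ell(h_w(x), y)\big].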

Collaborative Fairness in Federated Learning

This work investigates collaborative fairness in FL and proposes a novel Collaborative Fair Federated Learning (CFFL) framework, which uses reputation to make participants converge to different models, thus achieving fairness without compromising predictive performance.

Fairness-aware Agnostic Federated Learning

This paper develops a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of an unknown testing distribution and is the first work to achieve fairness in federated learning.

Ditto: Fair and Robust Federated Learning Through Personalization

This work identifies that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks and proposes a simple, general framework, Ditto, that can inherently provide fairness and robustness benefits.
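
Ditto's personalization objective, as commonly stated, regularizes each device's personal model toward the global solution, with lambda trading off personalization against fairness and robustness.

% Ditto: device k fits a personal model v_k kept close to the global
% optimum w^* of the standard federated objective.
\min_{v_k} \ F_k(v_k) + \frac{\lambda}{2}\,\lVert v_k - w^{*} \rVert^2,
\qquad
w^{*} \in \arg\min_{w} \ \sum_{k} p_k F_k(w).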

Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning

This paper proposes an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients (data sources) from a constrained multiobjective optimization perspective, in which a model satisfying fairness constraints on all clients with consistent performance is learned.

Fair Resource Allocation in Federated Learning

This work proposes q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.
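
The q-FFL objective reweights devices by raising their local losses to the power q+1; q = 0 recovers the standard federated objective, while larger q emphasizes devices with higher loss and pushes toward a more uniform accuracy distribution.

% q-Fair Federated Learning (q-FFL) objective over m devices with weights p_k.
\min_{w} \ f_q(w) \;=\; \sum_{k=1}^{m} \frac{p_k}{q+1}\, F_k(w)^{\,q+1}.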

FedFair: Training Fair Models In Cross-Silo Federated Learning

FedFair, a federated learning framework that can train a fair model with high performance without any data privacy infringement, is developed; extensive experiments on three real-world data sets demonstrate its strong fair-model-training performance.

FairFed: Enabling Group Fairness in Federated Learning

This work proposes FairFed, a novel algorithm for fairness-aware aggregation to enhance group fairness in federated learning; it is server-side and agnostic to the applied local debiasing, thus allowing for different local debiasing methods across clients.

Enforcing fairness in private federated learning via the modified method of differential multipliers

The paper extends the modified method of differential multipliers to empirical risk minimization with fairness constraints, providing an algorithm to enforce fairness in the central setting; this algorithm is then extended to the private federated learning setting.
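
A minimal sketch of a modified-method-of-differential-multipliers-style step for an inequality constraint g(w) <= 0 (the damping coefficient c, the step sizes, and the hinge on the violation are illustrative choices, not the paper's private or federated variant):

# One MMDM-style step for: minimize f(w) subject to g(w) <= 0.
# The damping term c * violation strengthens the pull back toward feasibility.
import numpy as np

def mmdm_step(w, lam, grad_f, g, grad_g, lr=1e-2, lr_lam=1e-2, c=1.0):
    violation = max(g(w), 0.0)                       # constraint violation, if any
    w = w - lr * (grad_f(w) + (lam + c * violation) * grad_g(w))
    lam = max(lam + lr_lam * violation, 0.0)         # ascent on the multiplier
    return w, lam

# Toy usage: minimize ||w - w0||^2 subject to ||w||^2 - 1 <= 0.
w0 = np.array([2.0, 0.0])
w, lam = np.zeros(2), 0.0
for _ in range(2000):
    w, lam = mmdm_step(
        w, lam,
        grad_f=lambda w: 2.0 * (w - w0),
        g=lambda w: w @ w - 1.0,
        grad_g=lambda w: 2.0 * w,
    )
print(w)   # approaches the boundary point [1, 0]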

Hierarchically Fair Federated Learning

A novel hierarchically fair federated learning (HFFL) framework is proposed, under which agents are rewarded in proportion to their pre-negotiated contribution levels; the framework is further extended to incorporate heterogeneous models.