Certifying Some Distributional Fairness with Subpopulation Decomposition

@article{Kang2022CertifyingSD,
  title={Certifying Some Distributional Fairness with Subpopulation Decomposition},
  author={Mintong Kang and Linyi Li and Maurice Weber and Yang Liu and Ce Zhang and Bo Li},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.15494}
}
Extensive efforts have been made to understand and improve the fairness of machine learning models based on different fairness measurement metrics, especially in high-stakes domains such as medical insurance, education, and hiring decisions. However, there is a lack of certified fairness guarantees on the end-to-end performance of an ML model. In this paper, we first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem based on the model performance…
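
For intuition, the kind of optimization problem the abstract describes can be sketched as a distributionally constrained worst-case loss; the distance D, budget ρ, and fairness tolerance γ below are schematic placeholders, not the paper's exact program:

```latex
% Schematic only: worst-case end-to-end loss of a model f trained on P,
% over nearby distributions Q that satisfy a fairness constraint.
\max_{Q} \;\; \mathbb{E}_{(x,y) \sim Q}\bigl[\ell(f(x), y)\bigr]
\quad \text{s.t.} \quad
D(P, Q) \le \rho,
\qquad
\mathrm{unfairness}(f, Q) \le \gamma
```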

FARE: Provably Fair Representation Learning

  • Computer Science
  • 2022
This work proposes Fairness with Restricted Encoders (FARE), the first FRL method with provable fairness guarantees, and develops and applies a practical statistical procedure that computes a high-confidence upper bound on the unfairness of any downstream classifier.
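
For intuition only, a minimal sketch of one way such a high-confidence upper bound can be computed from finite samples, using an empirical demographic-parity gap plus Hoeffding confidence terms; this is an illustration under simplified assumptions, not FARE's actual restricted-encoder procedure:

```python
import numpy as np

def dp_gap_upper_bound(y_pred, group, delta=0.05):
    """Empirical demographic-parity gap plus Hoeffding confidence
    terms: an upper bound on the true gap that holds with
    probability >= 1 - delta (illustrative, not FARE's procedure)."""
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rates, slacks = [], []
    for g in (0, 1):
        mask = group == g
        n_g = int(mask.sum())
        rates.append(y_pred[mask].mean())
        # Each group rate deviates by more than this slack with
        # probability at most delta / 2 (Hoeffding's inequality).
        slacks.append(np.sqrt(np.log(4.0 / delta) / (2.0 * n_g)))
    return abs(rates[0] - rates[1]) + slacks[0] + slacks[1]

# Example: random binary predictions over two groups of 500 each.
rng = np.random.default_rng(0)
y_hat = (rng.random(1000) < 0.5).astype(int)
a = np.repeat([0, 1], 500)
print(dp_gap_upper_bound(y_hat, a))
```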

SoK: Certified Robustness for Deep Neural Networks

This paper provides a taxonomy of robustness verification and training approaches, along with an open-sourced unified platform for evaluating 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.

References


Group Fairness by Probabilistic Modeling with Latent Fair Decisions

This paper studies learning fair probability distributions from biased data by explicitly modeling a latent variable that represents a hidden, unbiased label and aims to achieve demographic parity by enforcing certain independencies in the learned model.

Verifying Individual Fairness in Machine Learning Models

The objective is to construct verifiers for proving individual fairness of a given model; this work constructs such verifiers, which are sound but not complete, for linear classifiers and kernelized polynomial/radial basis function classifiers.
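
A minimal sketch of why sound verification is tractable for linear classifiers: by Cauchy-Schwarz, if the margin at x exceeds ε·‖w‖, no point within L2 distance ε of x can receive a different label. The L2 similarity metric and radius ε here are illustrative assumptions, not the paper's fairness metrics:

```python
import numpy as np

def certify_individual_fairness(w, b, x, eps):
    """Sound certificate for a linear classifier f(x) = sign(w @ x + b):
    by Cauchy-Schwarz, |w @ (x' - x)| <= ||w|| * ||x' - x||, so if the
    margin at x exceeds eps * ||w||, every x' with ||x' - x||_2 <= eps
    receives the same label. Returns True when x is certified."""
    margin = abs(np.dot(w, x) + b)
    return margin >= eps * np.linalg.norm(w)

# Toy example with an illustrative similarity radius eps = 0.1.
w = np.array([1.0, -2.0])
b = 0.5
print(certify_individual_fairness(w, b, np.array([3.0, 0.0]), eps=0.1))  # True
```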

Fairness Transferability Subject to Bounded Distribution Shift

A framework for bounding violations of statistical fairness subject to distribution shift is developed, formulating a generic upper bound for transferred fairness violations and showing that these fairness violation bounds can be estimated in practice.
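
As a much cruder cousin of such bounds: any fairness-violation measure expressible as the expectation of a [0,1]-valued statistic can shift by at most the total-variation distance between the two distributions, which already gives a generic (if loose) transfer guarantee:

```latex
% Generic, loose transfer bound (not the paper's): for g with values
% in [0,1] and TV(P, Q) = \sup_A |P(A) - Q(A)|,
\bigl| \mathbb{E}_{Q}[g] - \mathbb{E}_{P}[g] \bigr| \;\le\; \mathrm{TV}(P, Q)
```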

Learning Fair and Transferable Representations with Theoretical Guarantees

This work argues that the goal of imposing demographic parity can be substantially facilitated within a multi-task learning setting and derives learning bounds establishing that the learned representation transfers well to novel tasks both in terms of prediction performance and fairness metrics.

Does enforcing fairness mitigate biases caused by subpopulation shift?

This paper derives necessary and sufficient conditions under which enforcing algorithmic fairness leads to the Bayes model in the target domain and illustrates the practical implications of the theoretical results in simulations and on real data.

Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness

The definition of a minimal metric is introduced and the behavior of models in terms of minimal metrics is characterized, showing that adapting the minimal metrics of linear models to more complicated neural networks can lead to meaningful and interpretable fairness guarantees at little cost to utility.

Learning Controllable Fair Representations

Exploiting duality, this work introduces a method that optimizes the model parameters as well as the expressiveness-fairness trade-off and achieves higher expressiveness at a lower computational cost.
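
A generic picture of the duality being exploited (placeholder losses, not the paper's information-theoretic objective): the fairness requirement enters as a constraint whose Lagrange multiplier is optimized jointly with the model, which is how the expressiveness-fairness trade-off itself becomes a tunable quantity.

```latex
% Generic constrained form and its Lagrangian (placeholders):
\min_{\theta} \; \mathcal{L}_{\text{task}}(\theta)
\quad \text{s.t.} \quad \mathcal{L}_{\text{fair}}(\theta) \le \epsilon
\;\;\Longrightarrow\;\;
\min_{\theta} \max_{\lambda \ge 0} \;
\mathcal{L}_{\text{task}}(\theta)
+ \lambda \bigl( \mathcal{L}_{\text{fair}}(\theta) - \epsilon \bigr)
```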

Learning Certified Individually Fair Representations

This work introduces the first method which generalizes individual fairness to rich similarity notions via logical constraints while also enabling data consumers to obtain fairness certificates for their models through representation learning.

Sample Selection for Fair and Robust Training

This work formulates a combinatorial optimization problem for the unbiased selection of samples in the presence of data corruption and proposes a greedy algorithm that is efficient and effective in practice.
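
A toy sketch of the flavor of greedy selection; the per-group budget and the use of loss as a corruption proxy are illustrative assumptions, not the paper's actual combinatorial formulation:

```python
import numpy as np

def greedy_fair_selection(losses, group, per_group_budget):
    """Toy greedy selection: within each group, keep the
    per_group_budget lowest-loss samples, treating high loss as a
    crude proxy for corruption. Illustrative only; the paper's
    combinatorial formulation differs. Returns selected indices."""
    losses = np.asarray(losses)
    group = np.asarray(group)
    selected = []
    for g in np.unique(group):
        idx = np.where(group == g)[0]
        # Greedily take the cleanest-looking samples in this group,
        # which keeps the selected set balanced across groups.
        order = idx[np.argsort(losses[idx])]
        selected.extend(order[:per_group_budget].tolist())
    return np.sort(np.array(selected))

rng = np.random.default_rng(1)
print(greedy_fair_selection(rng.random(10), np.repeat([0, 1], 5), 3))
```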

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
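
In symbols, the two criteria named in that abstract, with Ŷ the decision, A protected-group membership, and d a task-specific similarity metric (notation assumed here, not taken from the paper):

```latex
% Group fairness (demographic parity) and individual fairness:
\Pr(\hat{Y} = 1 \mid A = 1) \;=\; \Pr(\hat{Y} = 1)
\qquad \text{and} \qquad
d(x, x') \text{ small} \;\Rightarrow\; \hat{Y}(x) \approx \hat{Y}(x')
```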