• Corpus ID: 245836858

Information-Theoretic Bias Reduction via Causal View of Spurious Correlation

@article{Seo2022InformationTheoreticBR,
  title={Information-Theoretic Bias Reduction via Causal View of Spurious Correlation},
  author={Seonguk Seo and Joon-Young Lee and Bohyung Han},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.03121}
}
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation, which is effective for identifying feature-level algorithmic bias by taking advantage of conditional mutual information. Although several bias measurement methods have been proposed and widely investigated to achieve algorithmic fairness in various tasks such as face recognition, their accuracy- or logit-based metrics are susceptible to leading to trivial prediction score… 
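The excerpt does not include the paper's estimator, but the quantity at the center of the abstract is a conditional mutual information between a learned feature and a bias attribute given the target label. A minimal plug-in sketch for discretized variables (the variable names, the estimator, and the toy data are illustrative assumptions, not the paper's method):

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(z, b, y):
    """Plug-in estimate of I(Z; B | Y) for discrete 1-D arrays z, b, y.

    I(Z; B | Y) = sum_{z,b,y} p(z,b,y) * log( p(z,b,y) p(y) / (p(z,y) p(b,y)) )
    """
    n = len(z)
    count_zby = Counter(zip(z, b, y))
    count_zy = Counter(zip(z, y))
    count_by = Counter(zip(b, y))
    count_y = Counter(y)

    cmi = 0.0
    for (zi, bi, yi), c in count_zby.items():
        p_joint = c / n
        ratio = (c * count_y[yi]) / (count_zy[(zi, yi)] * count_by[(bi, yi)])
        cmi += p_joint * np.log(ratio)
    return cmi

# Toy example: a feature z that copies a bias attribute b most of the time,
# independently of the task label y (a spurious, label-unexplained dependence).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)
b = rng.integers(0, 2, size=5000)
z = np.where(rng.random(5000) < 0.8, b, 1 - b)
print("I(Z; B | Y) plug-in estimate (nats):",
      round(conditional_mutual_information(z, b, y), 3))
```

A nonzero value indicates that the feature carries information about the bias attribute beyond what the label explains, which is the spurious-correlation signature the abstract refers to.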


Unsupervised Learning of Debiased Representations with Pseudo-Attributes
TLDR
This work first identifies pseudo-attributes based on the results of clustering performed in the feature embedding space, even without explicit bias-attribute supervision, and then employs a novel cluster-wise reweighting scheme to learn debiased representations.
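A rough sketch of the clustering-plus-reweighting idea summarized above, assuming K-means pseudo-attributes and weights proportional to each cluster's average loss (the cluster count, weighting rule, and function names are placeholders, not the cited paper's exact scheme):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_reweighting(features, losses, n_clusters=8):
    """Assign pseudo-attributes by K-means and upweight hard clusters.

    features: (N, D) embeddings from the current model.
    losses:   (N,) per-example task losses.
    Returns per-example weights emphasizing clusters with high average loss,
    used here as a proxy for bias-conflicting groups.
    """
    pseudo_attr = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=0).fit_predict(features)
    weights = np.ones(len(features))
    for c in range(n_clusters):
        mask = pseudo_attr == c
        if mask.any():
            # clusters with larger average loss receive proportionally larger weight
            weights[mask] = losses[mask].mean()
    weights /= max(weights.mean(), 1e-12)   # keep the overall loss scale unchanged
    return pseudo_attr, weights

# Usage inside a training loop (illustrative):
# attr, w = cluster_reweighting(embeddings, per_example_loss.detach().cpu().numpy())
# loss = (torch.as_tensor(w, device=logits.device) * per_example_loss).mean()
```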

References

SHOWING 1-10 OF 52 REFERENCES
Learning Unbiased Representations via Mutual Information Backpropagation
TLDR
A novel end-to-end optimization strategy is proposed, which simultaneously estimates and minimizes the mutual information between the learned representation and the data attributes, and is applicable to the problem of "algorithmic fairness", with competitive results.
Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation
TLDR
This work provides a simple but surprisingly effective visual recognition benchmark for studying bias mitigation, proposes a similarly effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and designs a domain-independent training technique that outperforms all other methods.
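The domain-independent training mentioned above can be pictured as a shared feature extractor with one classification head per bias "domain"; the per-example head selection during training and the sum-over-domains inference below are assumptions based on this summary, not the paper's exact recipe:

```python
import torch
import torch.nn as nn

class DomainIndependentClassifier(nn.Module):
    """Shared features, one classification head per bias domain."""
    def __init__(self, feat_dim=128, n_classes=10, n_domains=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(feat_dim, n_classes)
                                   for _ in range(n_domains))

    def forward(self, x, domain=None):
        z = self.backbone(x.flatten(1))
        logits = torch.stack([head(z) for head in self.heads], dim=1)  # (B, D, C)
        if domain is None:                       # inference: no domain label needed
            return logits.sum(dim=1)             # aggregate over domains
        return logits[torch.arange(len(x)), domain]   # training: per-example head

model = DomainIndependentClassifier()
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
d = torch.randint(0, 2, (8,))
train_loss = nn.functional.cross_entropy(model(x, d), y)   # uses the domain label
test_pred = model(x).argmax(dim=1)                         # domain label not required
```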
Fairness-Aware Classifier with Prejudice Remover Regularizer
TLDR
A regularization approach is proposed that is applicable to any prediction algorithm with probabilistic discriminative models; it is applied to logistic regression, and its effectiveness and efficiency are shown empirically.
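A hedged sketch of a prejudice-remover-style regularizer: logistic regression penalized by an estimate of the mutual information between the prediction and the sensitive attribute (the estimator, the regularization weight, and the toy data are illustrative, not the paper's exact formulation):

```python
import torch
import torch.nn as nn

def prejudice_index(probs, s):
    """Approximate I(Y_hat; S) from predicted probabilities.

    probs: (N,) predicted P(y=1 | x) from a probabilistic classifier.
    s:     (N,) binary sensitive attribute.
    Estimates mean_x sum_y P(y|x) * log( P(y|s(x)) / P(y) ) with group and
    marginal probabilities taken from the batch.
    """
    eps = 1e-8
    p = torch.stack([probs, 1 - probs], dim=1)                        # (N, 2)
    p_marg = p.mean(dim=0, keepdim=True)                              # P(y)
    p_group = torch.stack([p[s == g].mean(dim=0) for g in (0, 1)])    # P(y | s)
    log_ratio = torch.log(p_group[s] + eps) - torch.log(p_marg + eps)
    return (p * log_ratio).sum(dim=1).mean()

# Logistic regression with the fairness regularizer (weight 1.0 is illustrative).
X = torch.randn(512, 5)
s = torch.randint(0, 2, (512,))
y = ((X[:, 0] + 0.8 * s - 0.2 * torch.randn(512)) > 0).float()   # biased labels
model = nn.Linear(5, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    p = torch.sigmoid(model(X)).squeeze(1)
    loss = nn.functional.binary_cross_entropy(p, y) + 1.0 * prejudice_index(p, s)
    loss.backward()
    opt.step()
```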
Data-Efficient Mutual Information Neural Estimator
TLDR
This work proposes a Data-Efficient MINE Estimator (DEMINE) by developing a relaxed predictive MI lower bound that can be estimated with orders-of-magnitude higher data efficiency, and introduces a meta-learning approach using task augmentation, Meta-DEMINE, to improve generalization of the network and further boost estimation accuracy empirically.
REPAIR: Removing Representation Bias by Dataset Resampling
  • Yi Li, N. Vasconcelos
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
TLDR
Experiments with synthetic and action recognition data show that dataset REPAIR can significantly reduce representation bias and lead to improved generalization of models trained on REPAIRed datasets.
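A drastically simplified sketch of the dataset-reweighting idea: example weights are trained adversarially against a linear probe that predicts labels from a fixed, bias-prone feature (the parameterization, optimizer settings, and toy data are assumptions; the paper's objective and normalization differ):

```python
import torch
import torch.nn as nn

# Toy setup: a fixed, bias-prone feature phi(x) from which the labels are
# largely predictable. The reweighting looks for example weights under which
# this feature stops being predictive.
torch.manual_seed(0)
phi = torch.randn(1000, 4)
y = (phi[:, 0] + 0.3 * torch.randn(1000) > 0).long()

probe = nn.Linear(4, 2)                           # classifier restricted to phi
w_logits = torch.zeros(1000, requires_grad=True)  # parameterized example weights
opt_probe = torch.optim.Adam(probe.parameters(), lr=1e-2)
opt_w = torch.optim.Adam([w_logits], lr=1e-2)

for _ in range(300):
    # 1) the probe minimizes the weighted risk on the biased feature
    weights = (torch.softmax(w_logits, dim=0) * len(w_logits)).detach()
    per_example = nn.functional.cross_entropy(probe(phi), y, reduction="none")
    opt_probe.zero_grad()
    (weights * per_example).mean().backward()
    opt_probe.step()
    # 2) the weights maximize that risk (ascent via minimizing the negative);
    # a real implementation also regularizes the weights so they do not
    # collapse onto a handful of examples
    weights = torch.softmax(w_logits, dim=0) * len(w_logits)
    per_example = nn.functional.cross_entropy(probe(phi), y, reduction="none").detach()
    opt_w.zero_grad()
    (-(weights * per_example).mean()).backward()
    opt_w.step()

# Resampling the dataset with these probabilities de-emphasizes examples that
# the biased feature already classifies correctly.
resampling_probs = torch.softmax(w_logits, dim=0).detach()
```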
Mutual Information Neural Estimation
TLDR
A Mutual Information Neural Estimator (MINE) is presented that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent, and applied to improve adversarially trained generative models.
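The Donsker-Varadhan lower bound that MINE maximizes is compact enough to sketch directly; the statistics-network architecture, optimizer, and toy data below are illustrative choices, not the paper's configuration:

```python
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """T(x, z) for the Donsker-Varadhan bound used by MINE."""
    def __init__(self, x_dim, z_dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1)).squeeze(1)

def mine_lower_bound(T, x, z):
    """I(X; Z) >= E_joint[T] - log E_marginal[exp(T)], marginals via shuffling z."""
    joint = T(x, z).mean()
    z_shuffled = z[torch.randperm(len(z))]
    marginal = torch.logsumexp(T(x, z_shuffled), dim=0) - math.log(len(z))
    return joint - marginal

# Toy check: Z = X + noise has positive MI with X, which the bound should recover.
x = torch.randn(2048, 1)
z = x + 0.5 * torch.randn(2048, 1)
T = StatisticsNetwork(1, 1)
opt = torch.optim.Adam(T.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    (-mine_lower_bound(T, x, z)).backward()   # maximize the bound
    opt.step()
print("estimated lower bound (nats):", mine_lower_bound(T, x, z).item())
```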
On Variational Bounds of Mutual Information
TLDR
This work introduces a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance, and demonstrates the effectiveness of these new bounds for estimation and representation learning.
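One member of that continuum, the InfoNCE bound, is easy to write down; the bilinear critic and toy data below are illustrative choices (note the bound saturates at log K for batch size K):

```python
import math
import torch
import torch.nn as nn

def infonce_lower_bound(critic, x, z):
    """InfoNCE: I(X; Z) >= E_i[ f(x_i, z_i) - log (1/K) sum_j exp(f(x_i, z_j)) ]."""
    K = len(x)
    scores = critic(x.repeat_interleave(K, dim=0),
                    z.repeat(K, 1)).view(K, K)        # scores[i, j] = f(x_i, z_j)
    positives = scores.diag()
    return (positives - torch.logsumexp(scores, dim=1) + math.log(K)).mean()

class BilinearCritic(nn.Module):
    """A common critic choice: f(x, z) = x^T W z (dimensions are illustrative)."""
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(x_dim, z_dim) * 0.1)

    def forward(self, x, z):
        return (x @ self.W * z).sum(dim=1)

x = torch.randn(128, 8)
z = x[:, :4] + 0.3 * torch.randn(128, 4)      # correlated views
bound = infonce_lower_bound(BilinearCritic(8, 4), x, z)
print("InfoNCE estimate (nats, before training the critic):", bound.item())
```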
Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing
TLDR
To ensure adversarial generalization as well as cross-task transferability, this paper proposes to couple the operations of target task classifier training, bias task classifier training, and adversarial example generation, supplementing the target task training dataset by balancing the distribution over bias variables in an online fashion.
Mitigating Unwanted Biases with Adversarial Learning
TLDR
This work presents a framework for mitigating biases concerning demographic groups by including a variable Z for the group of interest and simultaneously learning a predictor and an adversary, which results in accurate predictions that exhibit less evidence of stereotyping Z.
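The cited framework uses a specific projection-based update for the predictor's gradients; the sketch below substitutes the simpler and widely used gradient-reversal trick to convey the same predictor-versus-adversary structure (architectures, data, and the reversal coefficient are placeholders):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
predictor = nn.Linear(32, 2)        # task head (predicts Y)
adversary = nn.Linear(32, 2)        # tries to recover the protected attribute Z
opt = torch.optim.Adam([*encoder.parameters(), *predictor.parameters(),
                        *adversary.parameters()], lr=1e-3)

x = torch.randn(256, 10)
y = torch.randint(0, 2, (256,))
z_attr = torch.randint(0, 2, (256,))

for _ in range(200):
    h = encoder(x)
    task_loss = nn.functional.cross_entropy(predictor(h), y)
    # the adversary learns to predict the protected attribute, while the
    # reversed gradient pushes the encoder to remove that information
    adv_loss = nn.functional.cross_entropy(adversary(GradReverse.apply(h, 1.0)), z_attr)
    opt.zero_grad()
    (task_loss + adv_loss).backward()
    opt.step()
```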
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
TLDR
This work proposes to inject corpus-level constraints for calibrating existing structured prediction models and designs an algorithm based on Lagrangian relaxation for collective inference, reducing the magnitude of bias amplification in multilabel object classification and visual semantic role labeling.
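A toy, single-constraint version of Lagrangian-relaxed inference: a corpus-level ratio constraint is enforced by a multiplier that shifts every instance's scores and is updated by projected subgradient ascent (the paper applies per-verb constraints over structured outputs; everything below is a simplified illustration):

```python
import numpy as np

def constrained_inference(scores, target_ratio, lr=0.1, steps=200):
    """Corpus-level constrained decoding via Lagrangian relaxation (toy version).

    scores: (N, 2) model scores for two values of a sensitive variable.
    Constraint: the fraction of instances decoded as value 1 should not exceed
    target_ratio. A single multiplier lam penalizes value 1 in every instance's
    score and is updated by projected subgradient ascent.
    """
    lam = 0.0
    for _ in range(steps):
        adjusted = scores.copy()
        adjusted[:, 1] -= lam                    # penalize the over-predicted value
        preds = adjusted.argmax(axis=1)          # independent per-instance decoding
        violation = preds.mean() - target_ratio  # corpus-level constraint value
        lam = max(0.0, lam + lr * violation)     # dual (multiplier) update
    return preds, lam

# Toy scores biased toward value 1 (e.g., amplified gender predictions).
rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 2)) + np.array([0.0, 0.5])
preds, lam = constrained_inference(scores, target_ratio=0.5)
print("predicted ratio:", preds.mean(), "multiplier:", lam)
```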