Conditional Supervised Contrastive Learning for Fair Text Classification

@article{Chi2022ConditionalSC,
  title   = {Conditional Supervised Contrastive Learning for Fair Text Classification},
  author  = {Jianfeng Chi and Will Shand and Yaodong Yu and Kai-Wei Chang and Han Zhao and Yuan Tian},
  journal = {arXiv preprint arXiv:2205.11485},
  year    = {2022}
}
Contrastive representation learning has gained much attention due to its superior performance in learning representations from both image and sequential data. However, the learned representations can lead to performance disparities in downstream tasks, such as increased silencing of underrepresented groups in toxic comment classification. In light of this challenge, we study how to learn fair representations that satisfy a notion of fairness known as equalized odds for…
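To make the setup concrete, the following is a minimal sketch of the supervised contrastive (SupCon) loss that the paper builds on. All names (`supcon_loss`, `temp`) and the NumPy implementation are illustrative assumptions, not the authors' code; the paper's conditional variant further conditions positive/negative selection on the class label so that sensitive-group information is suppressed within each class, targeting equalized odds.

```python
import numpy as np

def supcon_loss(z, y, temp=0.1):
    """Illustrative supervised contrastive loss (Khosla et al., 2020).

    z: (n, d) array of embeddings (L2-normalized internally).
    y: (n,) array of class labels; same-class pairs are positives.
    NOTE: a sketch of plain SupCon only, not the paper's conditional
    variant, which additionally conditions on the label to remove
    sensitive-group information within each class.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temp                       # scaled cosine similarities
    n = len(y)
    loss = 0.0
    for i in range(n):
        mask = np.arange(n) != i               # exclude the anchor itself
        pos = mask & (y == y[i])               # same-class positives
        if not pos.any():
            continue
        # log-softmax of anchor i's similarities over all non-anchor samples
        log_norm = np.log(np.exp(sim[i, mask]).sum())
        log_prob = sim[i] - log_norm
        loss += -log_prob[pos].mean()          # pull positives together
    return loss / n
```

With perfectly clustered classes (identical embeddings within each class, orthogonal across classes), the per-anchor loss reduces to roughly log(number of positives), which gives a quick sanity check on the implementation.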


MABEL: Attenuating Gender Bias using Textual Entailment Data

This work proposes MABEL (a Method for Attenuating Gender Bias using Entailment Labels), an intermediate pre-training approach for mitigating gender bias in contextualized representations. It introduces an alignment regularizer that pulls identical entailment pairs along opposite gender directions closer together.