Fair Ranking with Noisy Protected Attributes
@article{Mehrotra2022FairRW,
  title   = {Fair Ranking with Noisy Protected Attributes},
  author  = {Anay Mehrotra and Nisheeth K. Vishnoi},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2211.17067}
}
The fair-ranking problem, which asks to rank a given set of items to maximize utility subject to group fairness constraints, has received attention in the fairness, information retrieval, and machine learning literature. Recent works, however, observe that errors in socially-salient (including protected) attributes of items can significantly undermine the fairness guarantees of existing fair-ranking algorithms, and they raise the problem of mitigating the effect of such errors. We study the fair-ranking…
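To make the setup concrete, here is a minimal, hedged sketch of the kind of constrained ranking problem the abstract describes: items are ordered to maximize a position-discounted utility while every top-k prefix contains at least a prescribed fraction of each group. The constraint form, the greedy heuristic, and all names and numbers are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch only -- NOT the paper's algorithm. It shows the shape of
# the fair-ranking problem: order items to maximize a position-discounted
# utility while every top-k prefix contains at least a prescribed fraction of
# each group. All names, numbers, and the greedy heuristic are assumptions.
from math import floor, log2

items = [  # (item_id, utility, group) -- toy data
    ("a", 0.9, "G1"), ("b", 0.8, "G1"), ("c", 0.7, "G1"),
    ("d", 0.6, "G2"), ("e", 0.5, "G2"), ("f", 0.4, "G2"),
]
alpha = {"G1": 0.4, "G2": 0.4}  # minimum fraction of each group in every prefix


def greedy_fair_ranking(items, alpha):
    """Greedy heuristic: at each position take the highest-utility item, unless
    some group is falling below its prefix lower bound, in which case take the
    best remaining item from such a group. (A heuristic, not guaranteed optimal.)"""
    remaining = sorted(items, key=lambda it: -it[1])
    ranking, counts = [], {g: 0 for g in alpha}
    for k in range(1, len(items) + 1):
        needy = [g for g in alpha if counts[g] < floor(alpha[g] * k)]
        pool = [it for it in remaining if it[2] in needy] or remaining
        pick = pool[0]
        ranking.append(pick)
        counts[pick[2]] += 1
        remaining.remove(pick)
    return ranking


def dcg(ranking):
    """Position-discounted utility of a ranking (DCG-style objective)."""
    return sum(u / log2(pos + 2) for pos, (_, u, _) in enumerate(ranking))


ranked = greedy_fair_ranking(items, alpha)
print([item_id for item_id, _, _ in ranked], "DCG =", round(dcg(ranked), 3))
```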
2 Citations
Retiring $\Delta$DP: New Distribution-Level Metrics for Demographic Parity
- Computer Science
- 2023
Two new fairness metrics are proposed, Area Between Probability density function Curves (ABPC) and Area Between Cumulative density function Curves (ABCC), to precisely measure the violation of demographic parity at the distribution level (see the sketch after these citation entries).
Mitigating Algorithmic Bias with Limited Annotations
- Computer Science, ArXiv
- 2022
According to the evaluation on five benchmark datasets, APOD outperforms state-of-the-art baseline methods under a limited annotation budget and shows performance comparable to fully annotated bias mitigation, demonstrating that APOD could benefit real-world applications when sensitive information is limited.
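As a hedged, concrete reading of the ABCC metric mentioned in the first citation above: it can be viewed as the area between the empirical cumulative distribution functions of model scores for two demographic groups, which vanishes exactly when the two score distributions coincide (distribution-level demographic parity). The function names, the assumed score range, and the Riemann-sum integration below are illustrative choices, not the paper's exact definition.

```python
# Hedged sketch of an area-between-CDF-curves (ABCC-style) computation.
# Assumes scores lie in [0, 1]; grid size and names are arbitrary choices.
import numpy as np


def empirical_cdf(scores, grid):
    """Fraction of scores <= t, evaluated at each grid point t."""
    scores = np.sort(np.asarray(scores))
    return np.searchsorted(scores, grid, side="right") / len(scores)


def abcc(scores_g0, scores_g1, num_points=1001):
    """Approximate the area between the two groups' empirical CDFs on [0, 1]."""
    grid = np.linspace(0.0, 1.0, num_points)
    gap = np.abs(empirical_cdf(scores_g0, grid) - empirical_cdf(scores_g1, grid))
    return float(gap.mean()) * (grid[-1] - grid[0])  # Riemann approximation


rng = np.random.default_rng(0)
scores_g0 = rng.beta(2, 5, size=5000)  # toy score distribution for group 0
scores_g1 = rng.beta(5, 2, size=5000)  # toy score distribution for group 1
print("ABCC ≈", round(abcc(scores_g0, scores_g1), 3))  # 0 means identical distributions
```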
References
SHOWING 1-10 OF 78 REFERENCES
Policy Learning for Fairness in Ranking
- Computer Science, NeurIPS
- 2019
This work proposes a general LTR framework that can optimize a wide range of utility metrics while satisfying fairness of exposure constraints with respect to the items, and provides a new LTR algorithm called Fair-PG-Rank for directly searching the space of fair ranking policies via a policy-gradient approach.
Fairness for Robust Learning to Rank
- Computer Science, ArXiv
- 2021
This work derives a new ranking system based on the first principles of distributional robustness that provides better utility for highly fair rankings than existing baseline methods.
On the Problem of Underranking in Group-Fair Ranking
- Computer Science, ICML
- 2021
A fair ranking algorithm is given that takes any given ranking and outputs another ranking with simultaneous underranking and group-fairness guarantees comparable to the lower bound on the trade-off achievable between underranking and group fairness in ranking.
Ranking with Fairness Constraints
- Computer Science, ICALP
- 2018
This work studies a variant of the traditional ranking problem in which the ranking must satisfy group-fairness constraints, for objectives that satisfy properties shared by common ranking metrics such as Discounted Cumulative Gain, Spearman's rho, or Bradley-Terry.
Fairness in Ranking, Part I: Score-Based Ranking
- Computer Science, ACM Comput. Surv.
- 2023
A systematic overview of how fairness requirements are incorporated into algorithmic rankers is given, offering a broad perspective that connects formalizations and algorithmic approaches across sub-fields and developing a common narrative around the value frameworks that motivate specific fairness-enhancing interventions in ranking.
Controlling Fairness and Bias in Dynamic Learning-to-Rank
- Computer Science, SIGIR
- 2020
This work proposes a learning algorithm that ensures notions of amortized group fairness while simultaneously learning the ranking function from implicit feedback data, and it finds empirically that the algorithm is highly practical and robust.
Fairness in Ranking, Part II: Learning-to-Rank and Recommender Systems
- Computer Science, ACM Comput. Surv.
- 2023
A systematic overview of how fairness requirements are incorporated into algorithmic rankers is given, offering a broad perspective that connects formalizations and algorithmic approaches across subfields and developing a common narrative around the value frameworks that motivate specific fairness-enhancing interventions in ranking.
Measuring Fairness in Ranked Outputs
- Computer Science, Economics, SSDBM
- 2017
A data generation procedure is developed that allows for systematically controlling the degree of unfairness in the output. The proposed fairness measures for ranked outputs are applied to several real datasets, and the results show potential for improving the fairness of ranked outputs while maintaining accuracy.
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
- Computer Science, KDD
- 2019
This work presents a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems, and is the first large-scale deployed framework for ensuring fairness in the hiring domain.
Equity of Attention: Amortizing Individual Fairness in Rankings
- Computer Science, SIGIR
- 2018
The challenge of achieving amortized individual fairness subject to constraints on ranking quality is formulated as an online optimization problem and solved as an integer linear program, and it is demonstrated that the method can improve individual fairness while retaining high ranking quality.