Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison

  • Amifa Raj, Michael D. Ekstrand
  • Published 6 July 2022
  • Computer Science
  • Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
Information access systems, such as search and recommender systems, often use ranked lists to present results believed to be relevant to the user's information need. Evaluating these lists for their fairness along with other traditional metrics provides a more complete understanding of an information access system's behavior beyond accuracy or utility constructs. To measure the (un)fairness of rankings, particularly with respect to the protected group(s) of producers or providers, several… 
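Many provider-side fair ranking metrics boil down to comparing the exposure a browsing user gives to protected versus other items. As a minimal sketch (not any specific metric from the paper), the following assumes a geometric browsing model where the probability of examining rank i decays by a `patience` factor; the function name and signature are hypothetical:

```python
def exposure_disparity(ranking, protected, patience=0.5):
    """Average positional exposure for protected vs. other items.

    Exposure of rank i (0-indexed) follows a geometric browsing model:
    patience ** i. Returns (protected_avg, other_avg); a large gap
    between the two suggests unfair provider-side treatment.
    """
    prot, other = [], []
    for i, item in enumerate(ranking):
        weight = patience ** i
        (prot if item in protected else other).append(weight)
    avg = lambda ws: sum(ws) / len(ws) if ws else 0.0
    return avg(prot), avg(other)


# Protected items pushed to the bottom receive far less exposure:
print(exposure_disparity(["a", "b", "c", "d"], protected={"c", "d"}))
```

Published metrics differ mainly in the browsing model used for position weights and in how the two exposure totals are aggregated or normalized, which is one axis of the comparison this paper undertakes.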
1 Citation


Fair Ranking Metrics

  • Amifa Raj
  • Computer Science, Business
    Sixteenth ACM Conference on Recommender Systems
  • 2022
This thesis studies fair ranking metrics for provider-side group fairness, seeking to understand the fairness concepts these metrics encode and their practical applications, in order to identify their strengths and limitations and point out gaps for researchers and practitioners.



Measuring Group Advantage: A Comparative Study of Fair Ranking Metrics

It is proved that, under reasonable assumptions, popular fair ranking metrics in the literature exhibit the same behavior, so that optimizing for one optimizes for all, and a practical statistical test is designed to identify whether observed data is likely to exhibit predictable group bias.

Measuring Fairness in Ranked Outputs

A data generation procedure is developed that allows the degree of unfairness in the output to be systematically controlled, and the proposed fairness measures for ranked outputs are applied to several real datasets; the results show potential for improving the fairness of ranked outputs while maintaining accuracy.

Fairness and Discrimination in Information Access Systems

This monograph presents a taxonomy of the various dimensions of fair information access and surveys the literature to date on this new and rapidly growing topic.

Fairness in Recommendation Ranking through Pairwise Comparisons

This paper offers a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems and shows how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems.
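The pairwise view can be illustrated with a toy computation: for each group, take the cross-group item pairs where one item is strictly more relevant, and measure how often the ranking orders that pair correctly. This is a hypothetical simplification of the paper's randomized-experiment metrics; all names here are illustrative:

```python
def inter_group_pairwise_accuracy(ranking, relevance, group):
    """For each group g, the fraction of cross-group pairs (i, j) with
    relevance[i] > relevance[j] and group[i] == g in which i is in fact
    ranked above j. Systematically lower accuracy for one group signals
    pairwise unfairness.

    ranking: items in rank order; relevance, group: dicts keyed by item.
    """
    pos = {item: r for r, item in enumerate(ranking)}
    correct, total = {}, {}
    for i in ranking:
        for j in ranking:
            if group[i] == group[j] or relevance[i] <= relevance[j]:
                continue
            g = group[i]
            total[g] = total.get(g, 0) + 1
            if pos[i] < pos[j]:
                correct[g] = correct.get(g, 0) + 1
    return {g: correct.get(g, 0) / total[g] for g in total}
```

For example, if group Y's relevant items are consistently ranked below group X's irrelevant ones, Y's pairwise accuracy drops while X's stays high, even though a listwise utility metric might barely change.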

Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists

This work introduces a novel metric for auditing group fairness in ranked lists, and shows that determining the fairness of a ranked output necessitates knowledge (or a model) of the end-users of the particular service.

Equity of Attention: Amortizing Individual Fairness in Rankings

The challenge of achieving amortized individual fairness subject to constraints on ranking quality is formulated as an online optimization problem and solved as an integer linear program, and it is demonstrated that the method can improve individual fairness while retaining high ranking quality.

Estimation of Fair Ranking Metrics with Incomplete Judgments

This work proposes a robust and unbiased estimator that can operate even with a very limited number of labeled items, providing a reliable alternative to exhaustive or random data annotation.

FA*IR: A Fair Top-k Ranking Algorithm

This work defines and solves the Fair Top-k Ranking problem and presents an efficient algorithm, the first grounded in statistical tests, that can mitigate biases in the representation of an under-represented group along a ranked list.
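The core prefix test behind this style of algorithm can be sketched with a binomial tail check: a prefix of length n fails if it contains implausibly few protected items under a Binomial(n, p) model. This is a rough sketch of the ranked group fairness condition only, not FA*IR's exact algorithm (which, among other things, adjusts the significance level for the multiple tests); function names are illustrative:

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p), via the exact sum."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def fairly_represented(is_protected, p=0.5, alpha=0.1):
    """Check every prefix of a ranking for minimum protected-group
    representation, in the spirit of a ranked group fairness test.

    is_protected: booleans in rank order. A prefix of length n holding
    t protected items fails if observing at most t of them is unlikely
    (binomial CDF <= alpha) when each position is protected with
    probability p.
    """
    t = 0
    for n, flag in enumerate(is_protected, start=1):
        t += flag
        if binom_cdf(t, n, p) <= alpha:
            return False
    return True
```

With p = 0.5 and alpha = 0.1, a ranking whose top four results are all unprotected already fails (CDF = 0.0625), while an alternating ranking passes every prefix.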

Evaluating Stochastic Rankings with Expected Exposure

A general evaluation methodology based on expected exposure is proposed, allowing a system, in response to a query, to produce a distribution over rankings instead of a single fixed ranking.
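Expected exposure extends the browsing-model idea to stochastic rankings: average each item's positional exposure over the distribution of rankings the system might produce for a query. A minimal sketch with a hypothetical function name and a geometric position weight `gamma ** i` (not the paper's exact estimator):

```python
def expected_exposure(ranking_distribution, gamma=0.5):
    """Expected exposure of each item under a distribution over rankings.

    ranking_distribution: list of (probability, ranking) pairs; the
    exposure of rank i (0-indexed) is gamma ** i under a geometric
    browsing model. Returns {item: expected exposure}.
    """
    exposure = {}
    for prob, ranking in ranking_distribution:
        for i, item in enumerate(ranking):
            exposure[item] = exposure.get(item, 0.0) + prob * gamma ** i
    return exposure


# Randomizing between two orders equalizes exposure across tied items:
print(expected_exposure([(0.5, ["a", "b"]), (0.5, ["b", "a"])]))
```

The appeal of this framing is that equally relevant items can receive equal expected exposure even though any single fixed ranking must favor one of them.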

When Fair Ranking Meets Uncertain Inference

It is shown how demographic inferences drawn from real systems can lead to unfair rankings, suggesting that developers should not use inferred demographic data as input to fair ranking algorithms unless the inferences are extremely accurate.