Corpus ID: 221470392

Comparing Fair Ranking Metrics

Amifa Raj, Connor Wood, Ananda Montoly, and Michael D. Ekstrand
Ranking is a fundamental aspect of recommender systems. However, ranked outputs can be susceptible to various biases, some of which may disadvantage members of protected groups. Several metrics have been proposed to quantify the (un)fairness of rankings, but to date there has been no direct comparison of these metrics. This makes it difficult to decide which fairness metrics are applicable to specific scenarios and to assess the extent to which the metrics agree or disagree. In this paper… 


FAIR: Fairness-Aware Information Retrieval Evaluation
This work proposes a new metric, Fairness-Aware IR (FAIR), develops an effective ranking algorithm that jointly optimizes user utility and fairness, shows how FAIR relates to existing metrics, and demonstrates the effectiveness of the FAIR-based algorithm.
When Fair Ranking Meets Uncertain Inference
It is shown how demographic inferences drawn from real systems can lead to unfair rankings; developers should not use inferred demographic data as input to fair ranking algorithms unless the inferences are extremely accurate.
Toward Fair Recommendation in Two-sided Platforms
This work maps the problem of fair personalized recommendation, with fairness guarantees for both sides of the platform, to a constrained version of the problem of fairly allocating indivisible goods, and presents a modification of FairRec that improves recommendation performance for customers while maintaining the same fairness guarantees.
Fairness and Discrimination in Information Access Systems
This monograph presents a taxonomy of the various dimensions of fair information access and surveys the literature to date on this new and rapidly growing topic.
Exploring author gender in book rating and recommendation
It is found that common collaborative filtering algorithms tend to propagate at least some of each user’s tendency to rate or read male or female authors into their resulting recommendations, although they differ in both the strength of this propagation and the variance in the gender balance of the recommendation lists they produce.
Addressing Bias and Fairness in Search Systems
This tutorial introduces issues of bias in data, in algorithms, and in search processes overall, and shows how to think about and build fairer systems with increased diversity and transparency.
Pink for Princesses, Blue for Superheroes: The Need to Examine Gender Stereotypes in Kids' Products in Search and Recommendations
The need to investigate if and how gender stereotypes manifest in search and recommender systems is argued, and an agenda to support future research addressing the phenomenon is outlined.


Fairness in Recommendation Ranking through Pairwise Comparisons
This paper offers a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems and shows how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems.
Estimation of Fair Ranking Metrics with Incomplete Judgments
This work proposes a robust and unbiased estimator that can operate even with a very limited number of labeled items, providing a reliable alternative to exhaustive or random data annotation.
Measuring Group Advantage: A Comparative Study of Fair Ranking Metrics
It is proved that under reasonable assumptions, popular metrics in the literature exhibit the same behavior and that optimizing for one optimizes for all, and a practical statistical test is designed to identify whether observed data is likely to exhibit predictable group bias.
Fairness in Ranking: A Survey
An important contribution of this work is the development of a common narrative around the value frameworks that motivate specific fairness-enhancing interventions in ranking, which unifies the presentation of mitigation objectives and of the algorithmic techniques that help meet those objectives or identify trade-offs.
Measuring Fairness in Ranked Outputs
A data generation procedure is developed that allows systematic control of the degree of unfairness in the output; the proposed fairness measures for ranked outputs are applied to several real datasets, and the results show potential for improving the fairness of ranked outputs while maintaining accuracy.
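Prefix-based measures of this kind compare a group's share of each top-k prefix with its share of the full ranking. A minimal sketch of that idea (a simplified, unnormalized variant for illustration; the function name and cutoffs are my own, not the paper's exact definitions):

```python
def prefix_representation_gap(ranking, protected, cutoffs):
    """Average absolute gap between the protected-group share of each
    top-k prefix and its share of the full ranking (0 = balanced)."""
    overall = sum(item in protected for item in ranking) / len(ranking)
    gaps = []
    for k in cutoffs:
        k = min(k, len(ranking))  # clamp cutoffs longer than the list
        share = sum(item in protected for item in ranking[:k]) / k
        gaps.append(abs(share - overall))
    return sum(gaps) / len(gaps)

# Illustrative data: upper-case items form the protected group.
prot = {"V", "W", "X", "Y", "Z"}
interleaved = ["a", "X", "b", "Y", "c", "Z", "d", "W", "e", "V"]
clustered   = ["a", "b", "c", "d", "e", "X", "Y", "Z", "W", "V"]
# The interleaved ranking has zero gap at every cutoff; the clustered
# ranking, which pushes protected items to the bottom, is penalized.
```

A normalized metric would additionally divide by the maximum attainable gap and discount deeper prefixes, so that scores are comparable across list lengths.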
Equity of Attention: Amortizing Individual Fairness in Rankings
The challenge of achieving amortized individual fairness subject to constraints on ranking quality is formulated as an online optimization problem and solved as an integer linear program; it is demonstrated that the method can improve individual fairness while retaining high ranking quality.
FA*IR: A Fair Top-k Ranking Algorithm
This work defines and solves the Fair Top-k Ranking problem, and presents an efficient algorithm, which is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list.
Overview of the TREC 2019 Fair Ranking Track
An overview of the TREC Fair Ranking track is presented, including the task definition, descriptions of the data and the annotation process, as well as a comparison of the performance of submitted systems.
Fairness of Exposure in Rankings
This work proposes a conceptual and computational framework that allows the formulation of fairness constraints on rankings in terms of exposure allocation, and develops efficient algorithms for finding rankings that maximize the utility for the user while provably satisfying a specifiable notion of fairness.
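The central quantity in exposure-based fairness is the position discount a ranking assigns to each slot, from which one can compare how much total exposure each group receives. A small sketch assuming the common logarithmic discount (the function and variable names are my own, not from the paper):

```python
import math

def group_exposure(ranking, group):
    """Total exposure received by members of `group`, using the
    standard position discount 1 / log2(1 + rank)."""
    return sum(1.0 / math.log2(1 + rank)
               for rank, item in enumerate(ranking, start=1)
               if item in group)

# Two equally sized groups; upper-case items are ranked higher.
ranking = ["X", "a", "Y", "b"]
exp_upper = group_exposure(ranking, {"X", "Y"})  # ranks 1 and 3
exp_lower = group_exposure(ranking, {"a", "b"})  # ranks 2 and 4
# A demographic-parity style constraint would ask these totals (or
# their merit-weighted ratios) to be equal across groups.
```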
Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists
This work introduces a novel metric for auditing group fairness in ranked lists, and shows that determining fairness of a ranked output necessitates knowledge (or a model) of the end-users of the particular service.