Corpus ID: 221470392

Comparing Fair Ranking Metrics

@article{Raj2020ComparingFR,
  title={Comparing Fair Ranking Metrics},
  author={Amifa Raj and Connor Wood and Ananda Montoly and Michael D. Ekstrand},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.01311}
}
Ranking is a fundamental aspect of recommender systems. However, ranked outputs can be susceptible to various biases, some of which may disadvantage members of protected groups. Several metrics have been proposed to quantify the (un)fairness of rankings, but to date there has been no direct comparison of these metrics. This complicates deciding which fairness metrics are applicable to specific scenarios and assessing the extent to which metrics agree or disagree. In this paper… 
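
As a concrete flavor of the family of metrics the paper compares, the sketch below computes one common ingredient, position-discounted group exposure, for a single ranked list. This is an illustrative example using the standard 1/log2(rank + 1) position-bias model, not code from the paper; the function name and toy data are hypothetical.

```python
import math

def group_exposure_share(ranking, protected):
    """Share of position-discounted exposure received by the protected group.

    ranking:   list of item ids, best first
    protected: set of item ids in the protected group
    Assumes the common 1 / log2(rank + 1) position-bias model.
    """
    total = 0.0
    prot = 0.0
    for rank, item in enumerate(ranking, start=1):
        weight = 1.0 / math.log2(rank + 1)
        total += weight
        if item in protected:
            prot += weight
    return prot / total

# Same items, two orderings: the exposure share changes with position.
print(group_exposure_share(["a", "b", "c", "d"], {"c", "d"}))  # protected last
print(group_exposure_share(["c", "d", "a", "b"], {"c", "d"}))  # protected first
```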

Citations

FAIR: Fairness‐aware information retrieval evaluation

By unifying standard IR metrics and fairness measures into an integrated metric, this work offers a new perspective for evaluating fairness-aware ranking results and develops an effective ranking algorithm that jointly optimizes user utility and fairness.

A Versatile Framework for Evaluating Ranked Lists in terms of Group Fairness and Relevance

A simple and versatile framework is presented for evaluating ranked lists in terms of group fairness and relevance, where the groups can be either nominal or ordinal in nature and the framework can quantify intersectional group fairness based on multiple attribute sets.

Overview of the TREC 2019 Fair Ranking Track

An overview of the TREC Fair Ranking track is presented, including the task definition, descriptions of the data and the annotation process, as well as a comparison of the performance of submitted systems.

Random Isn't Always Fair: Candidate Set Imbalance and Exposure Inequality in Recommender Systems

It is shown that complete randomization in the second (ranking) step of a two-stage recommender can result in a higher degree of exposure inequality than deterministic ordering of items by estimated relevance scores, and a simple post-processing algorithm is proposed to reduce exposure inequality.

When Fair Ranking Meets Uncertain Inference

It is shown how demographic inferences drawn from real systems can lead to unfair rankings, and that developers should not use inferred demographic data as input to fair ranking algorithms unless the inferences are extremely accurate.

Probabilistic Permutation Graph Search: Black-Box Optimization for Fairness in Ranking

A novel way of representing permutation distributions, based on the notion of permutation graphs, is presented; it improves over Plackett-Luce (PL) for optimizing fairness metrics for queries with one session and is suitable for both deterministic and stochastic rankings.
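
For context, PL here is the Plackett-Luce model of stochastic rankings. A minimal sketch of drawing one ranking from a PL distribution (illustrative only; the function name and scores are hypothetical):

```python
import math
import random

def sample_plackett_luce(scores):
    """Draw one ranking from a Plackett-Luce distribution: repeatedly pick
    the next item with probability proportional to exp(score)."""
    remaining = list(scores)
    ranking = []
    while remaining:
        weights = [math.exp(scores[item]) for item in remaining]
        pick = random.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(pick)
        ranking.append(pick)
    return ranking

print(sample_plackett_luce({"a": 2.0, "b": 1.0, "c": 0.0}))  # hypothetical scores
```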

Exploring author gender in book rating and recommendation

This work measures the distribution of author genders in the books in users' rating profiles and in the recommendation lists produced from those profiles, finding that common collaborative filtering algorithms differ in the gender distribution of their recommendation lists and in how that output distribution relates to the user profile distribution.

Toward Fair Recommendation in Two-sided Platforms

This work proposes a modification of FairRec (named FairRecPlus) that, at the cost of additional computation time, improves recommendation performance for customers while maintaining the same fairness guarantees.

Algorithmic fairness datasets: the story so far

This work surveys over two hundred datasets employed in algorithmic fairness research and produces standardized, searchable documentation for each of them, identifying the three most popular fairness datasets (Adult, COMPAS, and German Credit), for which this unifying documentation effort supports multiple further contributions.

References

Showing 1-10 of 30 references.

Fairness in Recommendation Ranking through Pairwise Comparisons

This paper offers a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems and shows how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems.
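
A rough sketch of the pairwise idea, assuming a simple offline setting with known relevance labels (the paper itself estimates pairwise preferences from randomized experiments; the names and toy data below are hypothetical):

```python
from itertools import combinations

def pairwise_accuracy(ranking, relevance, group, g):
    """Fraction of item pairs where the more-relevant item belongs to
    group g and the ranking correctly places it above the less-relevant
    item; pairwise fairness compares this quantity across groups."""
    pos = {item: r for r, item in enumerate(ranking)}
    correct = total = 0
    for i, j in combinations(ranking, 2):
        for a, b in ((i, j), (j, i)):
            if relevance[a] > relevance[b] and group[a] == g:
                total += 1
                if pos[a] < pos[b]:
                    correct += 1
    return correct / total if total else float("nan")

# A ranking is pairwise-fair in this sense when the two accuracies match.
ranking = ["x1", "y1", "x2", "y2"]
rel = {"x1": 3, "y1": 2, "x2": 2, "y2": 1}
grp = {"x1": "X", "x2": "X", "y1": "Y", "y2": "Y"}
print(pairwise_accuracy(ranking, rel, grp, "X"),
      pairwise_accuracy(ranking, rel, grp, "Y"))
```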

Estimation of Fair Ranking Metrics with Incomplete Judgments

This work proposes a robust and unbiased estimator which can operate even with a very limited number of labeled items, providing a reliable alternative to exhaustive or random data annotation.

Measuring Group Advantage: A Comparative Study of Fair Ranking Metrics

It is proved that, under reasonable assumptions, popular metrics in the literature exhibit the same behavior, so optimizing for one optimizes for all; a practical statistical test is also designed to identify whether observed data is likely to exhibit predictable group bias.

Fairness in Ranking: A Survey

An important contribution of this work is in developing a common narrative around the value frameworks that motivate specific fairness-enhancing interventions in ranking, which makes it possible to unify the presentation of mitigation objectives and of the algorithmic techniques that help meet those objectives or identify trade-offs.

Measuring Fairness in Ranked Outputs

A data generation procedure is developed that allows systematic control of the degree of unfairness in the output; the proposed fairness measures for ranked outputs are applied to several real datasets, and the results show potential for improving fairness of ranked outputs while maintaining accuracy.
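
The paper's rND metric can be sketched directly from its definition: at each cutoff, compare the protected group's share of the prefix with its overall share, discount logarithmically, and normalize by the worst achievable value. A minimal illustrative implementation (the brute-force normalizer and the toy data are this sketch's own choices):

```python
import math

def rnd(ranking, protected, step=10):
    """Normalized discounted difference (rND), after Yang & Stoyanovich.

    Sums, at cutoffs i = step, 2*step, ..., the absolute difference
    between the protected-group share in the top-i and its overall share,
    discounted by log2(i), then normalizes by the worst-case value.
    """
    n = len(ranking)
    overall = sum(1 for x in ranking if x in protected) / n

    def raw(order):
        s = 0.0
        for i in range(step, n + 1, step):
            share = sum(1 for x in order[:i] if x in protected) / i
            s += abs(share - overall) / math.log2(i)
        return s

    # Worst case: all protected items at one end of the list.
    worst = max(raw(sorted(ranking, key=lambda x: x in protected)),
                raw(sorted(ranking, key=lambda x: x not in protected)))
    return raw(ranking) / worst if worst else 0.0

unfair = list(range(20))                                    # protected (10..19) at bottom
fair = [i for pair in zip(range(10), range(10, 20)) for i in pair]
print(rnd(unfair, set(range(10, 20))), rnd(fair, set(range(10, 20))))
```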

Equity of Attention: Amortizing Individual Fairness in Rankings

The challenge of achieving amortized individual fairness subject to constraints on ranking quality is formulated as an online optimization problem and solved as an integer linear program, and it is demonstrated that the method can improve individual fairness while retaining high ranking quality.
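
A sketch in the spirit of that amortized notion: accumulate position-biased attention and relevance over a sequence of rankings and take the L1 gap between the two normalized distributions. This is an illustrative simplification, not the paper's ILP; the names and toy data are hypothetical.

```python
import math

def amortized_unfairness(rankings, relevance):
    """L1 gap between accumulated position-bias attention and accumulated
    relevance (both normalized to distributions) over a ranking sequence."""
    attention = {item: 0.0 for item in relevance}
    deserved = {item: 0.0 for item in relevance}
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            attention[item] += 1.0 / math.log2(rank + 1)
            deserved[item] += relevance[item]
    a_tot = sum(attention.values())
    d_tot = sum(deserved.values())
    return sum(abs(attention[i] / a_tot - deserved[i] / d_tot)
               for i in relevance)

rels = {"a": 0.6, "b": 0.4}                 # hypothetical relevance scores
static = [["a", "b"]] * 4                   # "a" always ranked first
rotating = [["a", "b"], ["b", "a"]] * 2     # alternate the top slot
print(amortized_unfairness(static, rels))   # attention roughly tracks relevance
print(amortized_unfairness(rotating, rels)) # equal attention overshoots "b"
```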

FA*IR: A Fair Top-k Ranking Algorithm

This work defines and solves the Fair Top-k Ranking problem and presents an efficient algorithm, the first grounded in statistical tests, that can mitigate biases in the representation of an under-represented group along a ranked list.
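
The core statistical test is easy to sketch: for each prefix of size k, require at least the smallest protected count z whose binomial CDF under Binomial(k, p) exceeds the significance level alpha. A minimal version, without the paper's multiple-testing adjustment (the function name is hypothetical):

```python
from math import comb

def min_protected(k, p, alpha):
    """Smallest protected count a fair top-k prefix must contain under
    FA*IR's test: the lowest z with P[Binomial(k, p) <= z] > alpha."""
    cdf = 0.0
    for z in range(k + 1):
        cdf += comb(k, z) * p**z * (1 - p)**(k - z)
        if cdf > alpha:
            return z
    return k

# Required protected counts for prefixes 1..10 with p = 0.5, alpha = 0.1.
print([min_protected(k, 0.5, 0.1) for k in range(1, 11)])
```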

Overview of the TREC 2019 Fair Ranking Track

An overview of the TREC Fair Ranking track is presented, including the task definition, descriptions of the data and the annotation process, as well as a comparison of the performance of submitted systems.

Fairness of Exposure in Rankings

This work proposes a conceptual and computational framework that allows the formulation of fairness constraints on rankings in terms of exposure allocation, and develops efficient algorithms for finding rankings that maximize the utility for the user while provably satisfying a specifiable notion of fairness.
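
Exposure in this framework can be sketched with a stochastic ranking policy: a doubly stochastic matrix P gives each item's probability of appearing at each rank, and expected exposure is P times a position-bias vector. An illustrative sketch assuming the common 1/log2(1 + rank) position-bias model (the function name and toy policy are hypothetical):

```python
import numpy as np

def group_exposures(P, groups):
    """Expected exposure per group under a stochastic ranking policy.

    P:      (n_items, n_ranks) doubly stochastic matrix; P[i, j] is the
            probability that item i is shown at rank j (0-indexed ranks)
    groups: length-n_items array of group labels
    """
    n_ranks = P.shape[1]
    bias = 1.0 / np.log2(np.arange(2, n_ranks + 2))  # 1/log2(1 + rank)
    exposure = P @ bias                              # expected exposure per item
    return {g: exposure[groups == g].mean() for g in np.unique(groups)}

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])          # hypothetical stochastic policy
groups = np.array(["A", "A", "B"])
print(group_exposures(P, groups))        # demographic parity wants equal means
```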

Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists

This work introduces a novel metric for auditing group fairness in ranked lists, and shows that determining fairness of a ranked output necessitates knowledge (or a model) of the end-users of the particular service.