Fair Ranking as Fair Division: Impact-Based Individual Fairness in Ranking

@inproceedings{saito2022fair,
  title={Fair Ranking as Fair Division: Impact-Based Individual Fairness in Ranking},
  author={Yuta Saito and Thorsten Joachims},
  booktitle={Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  year={2022}
}

  • Published 15 June 2022
Rankings have become the primary interface in two-sided online markets. Many have noted that the rankings not only affect the satisfaction of the users (e.g., customers, listeners, employers, travelers), but that the position in the ranking allocates exposure -- and thus economic opportunity -- to the ranked items (e.g., articles, products, songs, job seekers, restaurants, hotels). This has raised questions of fairness to the items, and most existing works have addressed fairness by explicitly…


Fair Matrix Factorisation for Large-Scale Recommender Systems

This study takes a step towards solving real-world unfairness issues by developing a simple and scalable collaborative filtering method for fairness-aware item recommendation named fiADMM, which inherits the scalability of iALS and maintains a provable convergence guarantee.

Fairness of Exposure in Rankings

This work proposes a conceptual and computational framework that allows the formulation of fairness constraints on rankings in terms of exposure allocation, and develops efficient algorithms for finding rankings that maximize the utility for the user while provably satisfying a specifiable notion of fairness.
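To make the exposure notion concrete, here is a minimal sketch (not code from the paper; the ranking and group labels are toy data) that computes per-group exposure under a standard logarithmic position-bias model:

```python
import math

def exposure(position):
    # Standard DCG-style position bias: top positions receive more exposure.
    return 1.0 / math.log2(position + 1)

# A ranking as (item_id, group) pairs, top position first (toy data).
ranking = [("a", 0), ("b", 1), ("c", 0), ("d", 1)]

group_exposure = {}
for pos, (item, group) in enumerate(ranking, start=1):
    group_exposure[group] = group_exposure.get(group, 0.0) + exposure(pos)

# Demographic parity of exposure would require these sums to be (near)
# equal; an exposure-based fairness constraint bounds their disparity.
print(group_exposure)
```

In this line of work, such constraints are typically imposed on stochastic rankings rather than a single fixed ranking, which is what makes the optimization tractable.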

Controlling Fairness and Bias in Dynamic Learning-to-Rank

This work proposes a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data, and finds empirically that the algorithm is highly practical and robust.

Equity of Attention: Amortizing Individual Fairness in Rankings

The challenge of achieving amortized individual fairness subject to constraints on ranking quality as an online optimization problem is formulated and solved as an integer linear program and it is demonstrated that the method can improve individual fairness while retaining high ranking quality.
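The amortization idea can be sketched with a much-simplified greedy heuristic (the paper itself solves an integer linear program; the relevance scores, attention weights, and function below are purely illustrative): over repeated ranking rounds, items that have received less attention than their cumulative relevance warrants are promoted.

```python
# Illustrative greedy heuristic, not the paper's ILP: rank items by
# accumulated attention deficit (attention received minus relevance owed).
def amortized_rounds(relevance, attention_weights, num_rounds):
    items = list(relevance)
    owed = {i: 0.0 for i in items}      # cumulative relevance share
    received = {i: 0.0 for i in items}  # cumulative attention
    total_rel = sum(relevance.values())
    history = []
    for _ in range(num_rounds):
        for i in items:
            owed[i] += relevance[i] / total_rel
        # Most under-served items come first.
        ranking = sorted(items, key=lambda i: received[i] - owed[i])
        for pos, i in enumerate(ranking):
            received[i] += attention_weights[pos]
        history.append(ranking)
    return history

# With a strongly top-heavy attention model, the less relevant item
# still surfaces periodically instead of never.
rounds = amortized_rounds({"x": 0.6, "y": 0.4},
                          attention_weights=[0.8, 0.2], num_rounds=3)
```

The paper's ILP additionally constrains the per-round ranking quality, which this greedy sketch ignores.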

Two-sided fairness in rankings via Lorenz dominance

This work proposes to generate rankings by maximizing concave welfare functions, and develops an efficient inference procedure based on the Frank-Wolfe algorithm that guarantees that rankings are Pareto efficient, and that they maximally redistribute utility from better-off to worse-off, at a given level of overall utility.
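A small illustrative helper (assumed, not from the paper) shows what Lorenz dominance means for utility profiles: allocation A dominates B when every prefix sum of A's ascending-sorted utilities is at least the corresponding prefix sum for B.

```python
# Illustrative sketch of (generalized) Lorenz dominance between
# two utility profiles of equal length.
def lorenz_curve(utilities):
    # Cumulative sums of utilities sorted from worst-off to best-off.
    out, total = [], 0.0
    for u in sorted(utilities):
        total += u
        out.append(total)
    return out

def lorenz_dominates(a, b):
    return all(x >= y for x, y in zip(lorenz_curve(a), lorenz_curve(b)))

# At equal total utility, the more equal allocation dominates.
print(lorenz_dominates([2, 2, 2], [1, 2, 3]))  # True
print(lorenz_dominates([1, 2, 3], [2, 2, 2]))  # False
```

Maximizing a concave welfare function, as the paper proposes, favors allocations higher on this curve, i.e., those that redistribute utility toward the worse-off.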

Maximizing Marginal Fairness for Dynamic Learning to Rank

A fair and unbiased ranking method named Maximal Marginal Fairness (MMF) is proposed, which integrates unbiased estimators for both relevance and merit-based fairness while providing an explicit controller that balances the selection of documents to maximize marginal relevance and fairness in top-k results.

Ranking with Fairness Constraints

This work studies a constrained variant of the traditional ranking problem in which the objective satisfies properties that appear in common ranking metrics such as Discounted Cumulative Gain, Spearman's rho, and the Bradley-Terry model.
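As a quick illustration of the Discounted Cumulative Gain metric mentioned above (a standard definition, not code from the paper), each item's relevance gain is discounted by the logarithm of its position:

```python
import math

def dcg(relevances):
    # Discounted Cumulative Gain: gain at position p is discounted
    # by log2(p + 1), so top positions count more.
    return sum(rel / math.log2(pos + 1)
               for pos, rel in enumerate(relevances, start=1))

# Placing higher relevance earlier yields a higher score.
print(dcg([3, 2, 1]))
print(dcg([1, 2, 3]))
```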

Measuring Fairness in Ranked Outputs

A data generation procedure is developed that allows systematic control over the degree of unfairness in the output; the proposed fairness measures for ranked outputs are applied to several real datasets, and the results show potential for improving the fairness of ranked outputs while maintaining accuracy.

Fair ranking: a critical review, challenges, and future directions

Ranking, recommendation, and retrieval systems are widely used in online platforms and other societal systems, including e-commerce, media-streaming, admissions, gig platforms, and hiring. In the…

Policy-Gradient Training of Fair and Unbiased Ranking Functions

This work presents the first learning-to-rank approach that addresses both presentation bias and merit-based fairness of exposure simultaneously, and defines a class of amortized fairness-of-exposure constraints that can be chosen based on the needs of an application, and shows how these fairness criteria can be enforced despite the selection biases in implicit feedback data.

Policy Learning for Fairness in Ranking

This work proposes a general LTR framework that can optimize a wide range of utility metrics while satisfying fairness of exposure constraints with respect to the items, and provides a new LTR algorithm called Fair-PG-Rank for directly searching the space of fair ranking policies via a policy-gradient approach.