Corpus ID: 214667072

Overview of the TREC 2019 Fair Ranking Track

@inproceedings{biega2019trec,
  title={Overview of the TREC 2019 Fair Ranking Track},
  author={Asia J. Biega and Fernando D. Diaz and Michael D. Ekstrand and Sebastian Kohlmeier},
  booktitle={TREC},
  year={2019}
}
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers in addition to classic notions of relevance. As part of the benchmark, we defined standardized fairness metrics with evaluation protocols and released a dataset for the fair ranking problem. The 2019 task focused on reranking academic paper abstracts given a query. The objective was to fairly represent relevant authors from several groups that… 


Comparing Fair Ranking Metrics

This work provides a direct comparative analysis identifying similarities and differences among selected fair ranking metrics, and empirically compares them on the same experimental setup and data set.

Estimation of Fair Ranking Metrics with Incomplete Judgments

This work proposes a robust and unbiased estimator that can operate even with a very limited number of labeled items, providing a reliable alternative to exhaustive or random data annotation.

The University of Maryland at the TREC 2020 Fair Ranking Track

This paper develops an objective function that balances relevance and fairness, leveraging the flexibility of listwise Learning to Rank (LtR) techniques, which directly optimize toward a custom evaluation measure.

Search results diversification for effective fair ranking in academic search

It is argued that generating fair rankings can be cast as a search results diversification problem across a number of assumed fairness groups, where groups can represent the demographics or other characteristics of information sources.

Incentives for Item Duplication Under Fair Ranking Policies

This work studies the behaviour of different fair ranking policies in the presence of duplicates, finding that fairness-aware ranking policies may conflict with diversity, due to their potential to incentivize duplication more than policies solely focused on relevance.

A Versatile Framework for Evaluating Ranked Lists in terms of Group Fairness and Relevance

This work presents a simple and versatile framework for evaluating ranked lists in terms of group fairness and relevance, where groups can be either nominal or ordinal in nature and intersectional group fairness can be quantified based on multiple attribute sets.

University of Washington at TREC 2020 Fairness Ranking Track

The InfoSeeking Lab's FATE (Fairness Accountability Transparency Ethics) group at the University of Washington participated in the 2020 TREC Fairness Ranking Track, developing modular feature extractors that could be plugged into either of the track's tasks as needed.

Fairness and Discrimination in Information Access Systems

This monograph presents a taxonomy of the various dimensions of fair information access and surveys the literature to date on this new and rapidly growing topic.

Fairness Through Regularization for Learning to Rank

Ranking systems are typically designed to optimize for maximal utility, returning the results most likely to be correct for each query, but this can have potentially harmful downstream effects.

Pareto-Optimal Fairness-Utility Amortizations in Rankings with a DBN Exposure Model

This work constitutes the first exact algorithm able to efficiently find a Pareto-optimal distribution of rankings, applicable to a broad range of fairness notions, including classical notions of meritocratic and demographic fairness.
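The DBN (dynamic Bayesian network) exposure model referenced above can be illustrated with a small sketch: the user sees a rank only if they were not satisfied earlier and chose to continue. This is an illustrative click-model implementation, not the paper's exact algorithm; `gamma` and the per-document satisfaction probabilities are assumed parameters.

```python
def dbn_exposure(sats, gamma=0.9):
    """Exposure of each rank under a simple DBN click model.

    sats: satisfaction probability of the document at each rank, in order.
    gamma: probability the user continues after an unsatisfying document.
    Returns the probability the user reaches (is exposed to) each rank.
    """
    expo, reach = [], 1.0
    for s in sats:
        expo.append(reach)            # user reaches this rank with prob `reach`
        reach *= gamma * (1.0 - s)    # continues only if unsatisfied and willing
    return expo
```

Unlike a static log discount, exposure here depends on the documents ranked above: a very satisfying document sharply cuts the exposure of everything below it.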

Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists

This work introduces a novel metric for auditing group fairness in ranked lists, and shows that determining fairness of a ranked output necessitates knowledge (or a model) of the end-users of the particular service.
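The dependence of fairness on an end-user model can be illustrated with a toy geometric browsing model, where the user reaches rank i with probability (1 - p)^(i-1). This is an illustrative sketch, not the paper's exact metric; `p` and the attention model are assumptions.

```python
def group_attention(ranking, p=0.5):
    """Total attention each group receives under a geometric browsing model.

    ranking: group label of the item at each rank, in order.
    p: probability the user stops after examining a given rank.
    """
    att = {}
    for i, g in enumerate(ranking):
        att[g] = att.get(g, 0.0) + (1 - p) ** i  # prob. of reaching rank i+1
    return att
```

The same ranked list can look fair under a patient user model (small `p`) and unfair under an impatient one (large `p`), which is the paper's central point: auditing fairness requires a model of user attention.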

Ranking with Fairness Constraints

This work studies a constrained variant of the traditional ranking problem in which the objective satisfies properties that appear in common ranking metrics such as Discounted Cumulative Gain, Spearman's rho, or Bradley-Terry.
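The constraints studied in this line of work bound how many items of each group may appear in every prefix of the ranking. A minimal feasibility check under that formulation (the helper and parameter names are hypothetical; bounds are assumed to be given for every group at every prefix length):

```python
def satisfies_constraints(ranking, lower, upper):
    """Check per-prefix group-count constraints on a ranked list.

    ranking: group label at each rank, in order.
    lower/upper: dicts mapping group -> list of bounds, where
    lower[g][k] <= #(items of g in the top k+1) <= upper[g][k] must hold.
    """
    counts = {}
    for k, g in enumerate(ranking):
        counts[g] = counts.get(g, 0) + 1
        for grp in set(lower) | set(upper):
            c = counts.get(grp, 0)
            if not (lower[grp][k] <= c <= upper[grp][k]):
                return False
    return True
```

The paper's contribution is an efficient algorithm for *maximizing* the ranking objective subject to such constraints; the sketch above only verifies a candidate ranking.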

FA*IR: A Fair Top-k Ranking Algorithm

This work defines and solves the Fair Top-k Ranking problem, and presents an efficient algorithm, which is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list.
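The statistical test underlying FA*IR can be sketched as follows: every prefix of length k must contain at least as many protected candidates as the α-quantile of a Binomial(k, p) distribution, where p is the target minimum proportion. This is a simplified sketch (the paper additionally adjusts α for multiple tests, which is omitted here), and it assumes enough candidates of each group are available.

```python
import math

def binom_cdf(m, n, p):
    """P[Bin(n, p) <= m]."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1))

def min_protected(k, p, alpha):
    """Smallest protected count m such that a prefix of length k passes the test."""
    m = 0
    while binom_cdf(m, k, p) <= alpha:
        m += 1
    return m

def fair_topk(candidates, k, p=0.5, alpha=0.1):
    """Greedy fair top-k: take the best-scoring candidate unless the
    binomial constraint forces a protected one.

    candidates: (score, is_protected) pairs, pre-sorted by score descending.
    """
    prot = [c for c in candidates if c[1]]
    nonprot = [c for c in candidates if not c[1]]
    out, n_prot = [], 0
    for pos in range(1, k + 1):
        need = min_protected(pos, p, alpha)
        if n_prot < need or not nonprot:
            out.append(prot.pop(0)); n_prot += 1   # constraint binds
        elif not prot or nonprot[0][0] >= prot[0][0]:
            out.append(nonprot.pop(0))             # merit wins
        else:
            out.append(prot.pop(0)); n_prot += 1
    return out
```

With p = 0.5 and α = 0.1, a prefix of length 4 with zero protected candidates already fails the test, so the greedy procedure is forced to promote a protected candidate by position 4 even if all higher scores are unprotected.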

Evaluating Stochastic Rankings with Expected Exposure

A general evaluation methodology based on expected exposure is proposed, allowing a system, in response to a query, to produce a distribution over rankings instead of a single fixed ranking.
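The expected-exposure idea can be sketched as follows: given a distribution over rankings, each document's expected exposure is its position-bias-weighted visibility averaged over that distribution. The logarithmic position bias below is a common assumption, not necessarily the paper's exact browsing model.

```python
import math

def expected_exposure(policy):
    """Expected exposure of each document under a stochastic ranking policy.

    policy: list of (probability, ranking) pairs, where ranking is a list
    of document ids; probabilities are assumed to sum to 1.
    """
    ee = {}
    for prob, ranking in policy:
        for pos, doc in enumerate(ranking, start=1):
            ee[doc] = ee.get(doc, 0.0) + prob / math.log2(1 + pos)
    return ee
```

This makes the paper's motivation concrete: two equally relevant documents can never share exposure in a single fixed ranking, but a policy that swaps them with probability 0.5 gives each the same expected exposure.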

Expected reciprocal rank for graded relevance

This work presents a new editorial metric for graded relevance, Expected Reciprocal Rank (ERR), which overcomes the limitations of position-based discounting by implicitly discounting documents that are shown below very relevant documents.
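ERR admits a compact implementation: the user scans down the list, stops at rank r with a probability determined by that document's grade, and the metric credits 1/r for the stop. A sketch using the standard grade-to-probability mapping (2^g - 1) / 2^g_max:

```python
def err(grades, g_max=4):
    """Expected Reciprocal Rank for a list of relevance grades (0..g_max)."""
    p_continue = 1.0  # probability the user has not yet been satisfied
    score = 0.0
    for r, g in enumerate(grades, start=1):
        stop = (2 ** g - 1) / 2 ** g_max  # prob. this doc satisfies the user
        score += p_continue * stop / r
        p_continue *= 1.0 - stop
    return score
```

Note the cascade effect: a maximally relevant document at rank 1 leaves only probability 1/16 of reaching rank 2 (with g_max = 4), so everything below it is heavily discounted regardless of its own grade.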

Fairness of Exposure in Rankings

This work proposes a conceptual and computational framework that allows the formulation of fairness constraints on rankings in terms of exposure allocation, and develops efficient algorithms for finding rankings that maximize the utility for the user while provably satisfying a specifiable notion of fairness.

Relevance and ranking in online dating systems

This work proposes a machine learned ranking function that makes use of features extracted from the uniquely rich user profiles that consist of both structured and unstructured attributes.

Measuring Fairness in Ranked Outputs

A data generation procedure is developed that allows systematically controlling the degree of unfairness in the output; the proposed fairness measures for ranked outputs are applied to several real datasets, and the results show potential for improving fairness of ranked outputs while maintaining accuracy.
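One of the proposed measures, normalized discounted KL divergence (rKL), can be sketched as follows: at each cutoff, compare the protected-group proportion in the top-i prefix against the proportion in the whole list, discounting deeper cutoffs logarithmically. The specific cutoffs and the restriction to a binary group attribute are simplifications here, and the sketch assumes both groups appear in the full list.

```python
import math

def kl(p, q):
    """KL divergence (bits) between two binary distributions given by the
    protected-group proportions p and q; assumes 0 < q < 1."""
    s = 0.0
    for a, b in ((p, q), (1 - p, 1 - q)):
        if a > 0:
            s += a * math.log2(a / b)
    return s

def rkl(ranking, cutoffs=(10, 20, 30)):
    """rKL-style unfairness of a ranked list of booleans (True = protected)."""
    q = sum(ranking) / len(ranking)  # overall protected proportion
    return sum(kl(sum(ranking[:i]) / i, q) / math.log2(i) for i in cutoffs)
```

A perfectly interleaved list scores 0, while front-loading one group inflates the early-cutoff terms, which the log discount weights most heavily.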

Equity of Attention: Amortizing Individual Fairness in Rankings

The challenge of achieving amortized individual fairness subject to constraints on ranking quality is formulated as an online optimization problem and solved as an integer linear program; it is demonstrated that the method can improve individual fairness while retaining high ranking quality.
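The amortized notion can be sketched as the L1 distance between each subject's cumulative attention and cumulative relevance across a sequence of queries. This is a simplification: the paper minimizes this quantity online with an ILP per query, subject to ranking-quality constraints, whereas the sketch only measures it.

```python
def amortized_unfairness(attention_seq, relevance_seq):
    """L1 gap between cumulative attention and cumulative relevance.

    attention_seq, relevance_seq: per-query dicts mapping subject -> value,
    assumed to be on comparable (e.g. normalized) scales.
    """
    cum_a, cum_r = {}, {}
    for att, rel in zip(attention_seq, relevance_seq):
        for s, v in att.items():
            cum_a[s] = cum_a.get(s, 0.0) + v
        for s, v in rel.items():
            cum_r[s] = cum_r.get(s, 0.0) + v
    subjects = set(cum_a) | set(cum_r)
    return sum(abs(cum_a.get(s, 0.0) - cum_r.get(s, 0.0)) for s in subjects)
```

The key idea is amortization: a single query cannot split position bias fairly between equally relevant subjects, but alternating who ranks first across many queries drives this cumulative gap toward zero.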