Measuring Fairness in Ranked Outputs

@inproceedings{Yang2017MeasuringFI,
  title={Measuring Fairness in Ranked Outputs},
  author={Ke Yang and Julia Stoyanovich},
  booktitle={Proceedings of the 29th International Conference on Scientific and Statistical Database Management},
  year={2017}
}
  • Ke Yang, Julia Stoyanovich
  • Published 2017
  • Computer Science
  • Proceedings of the 29th International Conference on Scientific and Statistical Database Management
Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of …
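As a rough illustration of the kind of measure the paper studies, the sketch below computes a set-wise, position-discounted divergence between the share of a protected group in each top-i prefix and its share in the overall population, in the spirit of the paper's rKL metric. The function names, the cut-off step, and the omission of the paper's normalization are assumptions of this sketch, not the paper's exact definition.

import math

def kl_divergence(p, q, eps=1e-12):
    # KL divergence (base 2) between two discrete distributions given as lists.
    return sum(pi * math.log((pi + eps) / (qi + eps), 2) for pi, qi in zip(p, q) if pi > 0)

def prefix_divergence_score(ranking, protected, step=10):
    # ranking: item ids, best first; protected: set of ids in the protected group.
    # Compares the group mix in each top-i prefix to the mix in the full ranking,
    # discounting later prefixes logarithmically. Unnormalized sketch of an
    # rKL-style measure; the paper normalizes by the maximum attainable value.
    n = len(ranking)
    p_overall = sum(1 for x in ranking if x in protected) / n
    overall = [p_overall, 1 - p_overall]
    score = 0.0
    for i in range(step, n + 1, step):
        frac = sum(1 for x in ranking[:i] if x in protected) / i
        score += kl_divergence([frac, 1 - frac], overall) / math.log2(i)
    return score

For example, a ranking that places all members of the protected group in the bottom half yields a noticeably larger (less fair) value than one that alternates between groups.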
Ranking for Individual and Group Fairness Simultaneously
This paper defines individual fairness based on how close the predicted rank of each item is to its true rank, and proves a lower bound on the trade-off achievable for simultaneous individual and group fairness in ranking.
Measuring Group Advantage: A Comparative Study of Fair Ranking Metrics
It is proved that, under reasonable assumptions, popular metrics in the literature exhibit the same behavior and that optimizing for one optimizes for all; a practical statistical test is also designed to identify whether observed data is likely to exhibit predictable group bias.
User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets
Ranking items by their probability of relevance has long been the goal of conventional ranking systems. While this maximizes traditional criteria of ranking performance, there is a growing …
Maximizing Marginal Fairness for Dynamic Learning to Rank
A fair and unbiased ranking method named Maximal Marginal Fairness (MMF) is proposed, which integrates unbiased estimators for both relevance and merit-based fairness while providing an explicit controller that balances the selection of documents to maximize marginal relevance and fairness in top-k results.
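For intuition only, the toy selection below greedily trades off a document's relevance against how under-represented its group currently is in the top-k, with lam acting as an explicit controller. It is a simplified stand-in rather than the unbiased MMF estimator from the paper, and the names greedy_topk, target_share, and lam are invented for this sketch.

from collections import Counter

def greedy_topk(docs, relevance, group, target_share, k, lam=0.5):
    # Greedily builds a top-k list, scoring each candidate by a convex mix of
    # its relevance and the current under-representation of its group.
    # Illustration of a marginal relevance/fairness trade-off, not the paper's method.
    selected = []
    remaining = set(docs)
    for _ in range(min(k, len(docs))):
        counts = Counter(group[d] for d in selected)
        n_sel = len(selected)
        def score(d):
            g = group[d]
            current_share = counts[g] / n_sel if n_sel else 0.0
            under = max(0.0, target_share.get(g, 0.0) - current_share)
            return (1 - lam) * relevance[d] + lam * under
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected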
Designing Fair Ranking Schemes
This paper develops a system that helps users choose criterion weights that lead to greater fairness, and shows how to efficiently identify regions in the space of criterion weights that satisfy a broad range of fairness criteria.
A Nutritional Label for Rankings
Ranking Facts is a Web-based application that generates a "nutritional label" for rankings; it implements recent research results on fairness, stability, and transparency for rankings, and communicates details of the ranking methodology, or of the output, to the end user.
Equity of Attention: Amortizing Individual Fairness in Rankings
The challenge of achieving amortized individual fairness subject to constraints on ranking quality is formulated as an online optimization problem and solved as an integer linear program, and it is demonstrated that the method can improve individual fairness while retaining high ranking quality.
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
This work presents a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems, and is the first large-scale deployed framework for ensuring fairness in the hiring domain.
FARE: Diagnostics for Fair Ranking using Pairwise Error Metrics
This work designs a fair auditing mechanism which captures group treatment throughout the entire ranking, generating in-depth yet nuanced diagnostics, and demonstrates the efficacy of the error metrics using real-world scenarios, exposing trade-offs among fairness criteria and providing guidance in the selection of fair-ranking algorithms.
On the Problem of Underranking in Group-Fair Ranking
A fair ranking algorithm is given that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees, comparable to the lower bound on the trade-off achievable for simultaneous underranking and group fairness in ranking.

References

Showing 1-10 of 18 references
Fairness through awareness
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
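The central constraint of that framework is a Lipschitz condition: a randomized mapping M must not separate similar individuals by more than their task-specific distance. Writing D for a distance between output distributions and d for the similarity metric (notation assumed here, not taken from this page), the condition can be stated as:

D\bigl(M(x),\, M(y)\bigr) \;\le\; d(x, y) \qquad \text{for all individuals } x, y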
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the …
The Scored Society: Due Process for Automated Predictions (89 Wash. L. Rev. 1)
Procedural regularity is essential for those stigmatized by "artificially intelligent" scoring systems, and regulators should be able to test scoring systems to ensure their fairness and accuracy.
Making interval-based clustering rank-aware
BARAC, a particular subspace-clustering algorithm that enables rank-aware interval-based clustering in domains with heterogeneous attributes, is presented, and a novel measure of locality is proposed, together with a family of clustering quality measures appropriate for this application scenario.
Accountable Algorithms
Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police …
A survey on measuring indirect discrimination in machine learning
This survey reviews and organizes various discrimination measures that have been used for measuring discrimination in data, as well as for evaluating the performance of discrimination-aware predictive models, and computationally analyzes properties of selected measures.
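Two group-level measures that surveys of this kind typically cover are statistical parity difference and the disparate impact ratio. The sketch below uses standard formulations with invented function names; it is not drawn from the survey itself.

def _positive_rate(values):
    # Fraction of positive (1) predictions; 0.0 for an empty group.
    return sum(values) / len(values) if values else 0.0

def statistical_parity_difference(y_pred, protected):
    # P(positive | protected) - P(positive | unprotected) for binary 0/1 predictions;
    # `protected` is an iterable of booleans marking protected-group membership.
    prot = [y for y, p in zip(y_pred, protected) if p]
    unprot = [y for y, p in zip(y_pred, protected) if not p]
    return _positive_rate(prot) - _positive_rate(unprot)

def disparate_impact_ratio(y_pred, protected):
    # Ratio of positive rates, P(positive | protected) / P(positive | unprotected).
    prot = [y for y, p in zip(y_pred, protected) if p]
    unprot = [y for y, p in zip(y_pred, protected) if not p]
    denom = _positive_rate(unprot)
    return _positive_rate(prot) / denom if denom else float("inf")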
Cumulated gain-based evaluation of IR techniques
This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position, and test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences.
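The cumulated-gain measures described there underlie the widely used DCG/nDCG. The sketch below uses the common log2(i+1) discount over 1-indexed positions, which may differ in detail from the article's original formulation.

import math

def dcg(gains):
    # Discounted cumulative gain for gains listed in ranked order (best position first).
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(gains):
    # DCG normalized by the ideal (descending-sorted) ordering of the same gains.
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0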
Discovering Unwarranted Associations in Data-Driven Applications with the FairTest Testing Toolkit
FairTest is a testing toolkit that detects unwarranted associations between an algorithm's outputs and user subpopulations and ranks them by their strength while accounting for known explanatory factors; it is designed for ease of use by programmers and integrated into the evaluation framework of SciPy.
The (Im)possibility of fairness
What does it mean to be fair?
CSRankings: Computer science rankings, 2016