Equity of Attention: Amortizing Individual Fairness in Rankings

@inproceedings{Biega2018EquityOA,
  title={Equity of Attention: Amortizing Individual Fairness in Rankings},
  author={Asia J. Biega and Krishna P. Gummadi and Gerhard Weikum},
  booktitle={The 41st International ACM SIGIR Conference on Research \& Development in Information Retrieval},
  year={2018}
}
Rankings of people and items are at the heart of selection-making, match-making, and recommender systems, ranging from employment sites to sharing economy platforms. As ranking positions influence the amount of attention the ranked subjects receive, biases in rankings can lead to unfair distribution of opportunities and resources such as jobs or income. This paper proposes new measures and mechanisms to quantify and mitigate unfairness from a bias inherent to all rankings, namely, the position…
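The amortized idea in the abstract can be made concrete with a small sketch. The snippet below is one illustrative reading, not the paper's exact formulation: it assumes a simple geometric position-bias model (decay factor `gamma` is an assumed parameter, and both function names are hypothetical) and compares each subject's cumulative attention across a series of rankings with the cumulative relevance that would justify it.

```python
def position_attention(rank, gamma=0.5):
    """Geometric position-bias model: attention decays with rank.
    The decay factor gamma is an assumed parameter, not from the paper."""
    return gamma ** (rank - 1)

def amortized_unfairness(rankings, relevances, gamma=0.5):
    """L1 distance between normalized cumulative attention and normalized
    cumulative relevance, accumulated over a sequence of rankings.

    rankings:   list of rankings, each a list of subject ids (best first)
    relevances: dict mapping subject id -> relevance score per ranking
    """
    attention = {s: 0.0 for s in relevances}
    relevance = {s: 0.0 for s in relevances}
    for ranking in rankings:
        for pos, subject in enumerate(ranking, start=1):
            attention[subject] += position_attention(pos, gamma)
            relevance[subject] += relevances[subject]
    a_total = sum(attention.values())
    r_total = sum(relevance.values())
    return sum(abs(attention[s] / a_total - relevance[s] / r_total)
               for s in relevances)
```

For two equally relevant subjects, a single ranking is necessarily unfair (whoever is first gets more attention), but alternating who is ranked first over two rankings drives the amortized measure to zero — the "amortizing" intuition in the title.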
Maximizing Marginal Fairness for Dynamic Learning to Rank
Proposes Maximal Marginal Fairness (MMF), a fair and unbiased ranking method that integrates unbiased estimators for both relevance and merit-based fairness, with an explicit controller that balances document selection to maximize marginal relevance and fairness in top-k results.
Ranking for Individual and Group Fairness Simultaneously
Defines individual fairness based on how close each item's predicted rank is to its true rank, and proves a lower bound on the trade-off achievable for simultaneous individual and group fairness in ranking.
User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets
Shows that user fairness, item fairness, and diversity are fundamentally different concepts, and finds that algorithms considering only one of the three desiderata can fail to satisfy, and even harm, the other two.
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
Presents a framework for quantifying and mitigating algorithmic bias in mechanisms that rank individuals, typically used in web-scale search and recommendation systems; it is the first large-scale deployed framework for ensuring fairness in the hiring domain.
Fairness of Exposure in Rankings
Proposes a conceptual and computational framework for formulating fairness constraints on rankings in terms of exposure allocation, and develops efficient algorithms for finding rankings that maximize utility for the user while provably satisfying a specifiable notion of fairness.
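The exposure-allocation idea can be sketched under the standard DCG-style position-bias model. This is an assumed simplification (the paper's framework works with expected exposure over stochastic rankings, and both function names below are hypothetical): a group's accrued exposure, normalized by its merit, makes disparities across groups directly comparable.

```python
import math

def exposure(rank):
    # Standard DCG-style position bias: attention decays logarithmically.
    return 1.0 / math.log2(rank + 1)

def exposure_per_merit(ranking, merits, group):
    """Exposure accrued by a group's members, normalized by their total merit.

    ranking: list of items, best first
    merits:  dict mapping item -> merit score
    group:   set of items belonging to the group of interest
    """
    members = [(pos, item) for pos, item in enumerate(ranking, start=1)
               if item in group]
    total_exposure = sum(exposure(pos) for pos, _ in members)
    total_merit = sum(merits[item] for _, item in members)
    return total_exposure / total_merit
```

With equal merits, an item at rank 1 accrues twice the exposure per merit of an item at rank 3, which is the kind of disparity the framework's constraints equalize.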
Policy-Gradient Training of Fair and Unbiased Ranking Functions
Presents the first learning-to-rank approach that addresses presentation bias and merit-based fairness of exposure simultaneously; it defines a class of amortized fairness-of-exposure constraints that can be chosen to suit an application, and shows how these criteria can be enforced despite the selection biases in implicit feedback data.
Fair Learning-to-Rank from Implicit Feedback
Introduces FULTR, a learning-to-rank framework that is the first to address both intrinsic and extrinsic causes of unfairness when learning ranking policies from logged implicit feedback, and provides an efficient algorithm that optimizes both utility and fairness via a policy-gradient approach.
Controlling Fairness and Bias in Dynamic Learning-to-Rank (Extended Abstract)
Proposes a learning algorithm that ensures notions of amortized group fairness while simultaneously learning the ranking function from implicit feedback data; the algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
Controlling Fairness and Bias in Dynamic Learning-to-Rank
Proposes a learning algorithm that ensures notions of amortized group fairness while simultaneously learning the ranking function from implicit feedback data, and finds empirically that the algorithm is highly practical and robust.
Policy Learning for Fairness in Ranking
Proposes a general learning-to-rank framework that can optimize a wide range of utility metrics while satisfying fairness-of-exposure constraints with respect to the items, and provides a new algorithm, Fair-PG-Rank, for directly searching the space of fair ranking policies via a policy-gradient approach.

References

Showing 1–10 of 42 references
Measuring Fairness in Ranked Outputs
Develops a data generation procedure that allows systematically controlling the degree of unfairness in the output, and applies the proposed fairness measures for ranked outputs to several real datasets; results show potential for improving the fairness of ranked outputs while maintaining accuracy.
Fairness of Exposure in Rankings
Proposes a conceptual and computational framework for formulating fairness constraints on rankings in terms of exposure allocation, and develops efficient algorithms for finding rankings that maximize utility for the user while provably satisfying a specifiable notion of fairness.
Ranking with Fairness Constraints
Studies a constrained variant of the traditional ranking problem for objectives satisfying properties that appear in common ranking metrics such as Discounted Cumulative Gain, Spearman's rho, or Bradley–Terry.
Meritocratic Fairness for Cross-Population Selection
Quantifies the regret in quality imposed by "meritocratic" notions of fairness, which require that individuals are selected with probability monotonically increasing in their true quality.
FA*IR: A Fair Top-k Ranking Algorithm
Defines and solves the Fair Top-k Ranking problem, and presents an efficient algorithm — the first grounded in statistical tests — that can mitigate biases in the representation of an under-represented group along a ranked list.
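A fair top-k merge in the spirit of FA*IR can be sketched as follows. Note the simplification: FA*IR derives the minimum number of protected candidates required at each prefix from a binomial statistical test, whereas this hypothetical `fair_topk` substitutes a plain proportional floor for illustration.

```python
import math

def fair_topk(protected, non_protected, k, p):
    """Greedy fair top-k merge of two score lists, each sorted descending.

    protected / non_protected: candidate scores, best first
    p: target minimum proportion of protected candidates in every prefix
    Returns a list of ("P"/"N", score) pairs.
    """
    result = []
    pi = ni = n_prot = 0
    for pos in range(1, k + 1):
        # Proportional floor on protected candidates in the prefix of
        # length pos (FA*IR uses a binomial bound here instead).
        needed = math.floor(p * pos)
        must_pick_protected = n_prot < needed
        take_protected = pi < len(protected) and (
            must_pick_protected
            or ni >= len(non_protected)
            or protected[pi] >= non_protected[ni]
        )
        if take_protected:
            result.append(("P", protected[pi]))
            pi += 1
            n_prot += 1
        else:
            result.append(("N", non_protected[ni]))
            ni += 1
    return result
```

Even when every non-protected candidate outscores every protected one, the prefix constraint forces protected candidates into the ranking, which is the core mechanism of the algorithm.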
Fairness through awareness
Presents a framework for fair classification comprising a (hypothetical) task-specific metric for determining how similar individuals are with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
Fair Sharing for Sharing Economy Platforms
Sharing economy platforms, such as Airbnb, Uber, or eBay, are an increasingly common way for people to provide their services to earn a living. Yet, the focus in these platforms is either on the…
Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning
Proposes measures for procedural fairness that consider the input features used in the decision process, evaluates the moral judgments of humans regarding the use of these features, and operationalizes these measures on two real-world datasets using human surveys on the Amazon Mechanical Turk platform.
Dominant Resource Fairness: Fair Allocation of Multiple Resource Types
Proposes Dominant Resource Fairness (DRF), a generalization of max-min fairness to multiple resource types, and shows that it leads to better throughput and fairness than the slot-based fair-sharing schemes in current cluster schedulers.
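DRF's allocation rule is simple enough to sketch: repeatedly grant one task's worth of resources to the user with the smallest dominant share (the largest fraction of any single resource that user consumes) until no further task fits. The function name below is hypothetical; the example data matches the classic two-user CPU/memory scenario from the DRF paper.

```python
def drf_allocate(capacity, demands):
    """Progressive-filling sketch of Dominant Resource Fairness.

    capacity: dict mapping resource -> total available amount
    demands:  dict mapping user -> {resource: per-task demand}
    Returns the number of tasks granted to each user.
    """
    alloc = {u: 0 for u in demands}      # tasks granted per user
    used = {r: 0.0 for r in capacity}    # resources consumed so far

    def dominant_share(u):
        # Largest fraction of any resource's capacity this user consumes.
        return max(alloc[u] * demands[u][r] / capacity[r] for r in capacity)

    while True:
        # Users whose next task still fits within every resource's capacity.
        fits = [u for u in demands
                if all(used[r] + demands[u][r] <= capacity[r]
                       for r in capacity)]
        if not fits:
            break
        u = min(fits, key=dominant_share)  # lowest dominant share goes first
        alloc[u] += 1
        for r in capacity:
            used[r] += demands[u][r]
    return alloc
```

With 9 CPUs and 18 GB of memory, a memory-heavy user (1 CPU, 4 GB per task) and a CPU-heavy user (3 CPUs, 1 GB per task) end up with equal dominant shares — 3 and 2 tasks respectively — rather than the equal task counts a slot scheduler would give.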
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the…