Corpus ID: 234470087

Fairness and Discrimination in Information Access Systems

@article{Ekstrand2021FairnessAD,
  title={Fairness and Discrimination in Information Access Systems},
  author={Michael D. Ekstrand and Anubrata Das and R. Burke and Fernando Diaz},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.05779}
}
Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems. While fair information access shares many commonalities with fair classification, the multistakeholder nature of information access applications, the rank-based problem setting, the centrality of personalization in many cases, and the role of user response…


Problem Learning: Towards the Free Will of Machines
  • Yongfeng Zhang
  • Computer Science
  • ArXiv
  • 2021
A machine intelligence pipeline usually consists of six components: problem, representation, model, loss, optimizer and metric. Researchers have worked hard trying to automate many components of the…

References

Showing 1-10 of 207 references
Recommender Systems Fairness Evaluation via Generalized Cross Entropy
It is argued that fairness in recommender systems does not necessarily imply equality, but instead it should consider a distribution of resources based on merits and needs, and a probabilistic framework based on generalized cross entropy is presented.
Fairness Under Composition
This work identifies pitfalls of naive composition and gives general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and that seemingly unfair components may be carefully combined to construct fair systems.
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
This work presents a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems, and is the first large-scale deployed framework for ensuring fairness in the hiring domain.
iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
A method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness and the utility of classifiers and rankings in downstream applications is introduced.
Using Image Fairness Representations in Diversity-Based Re-ranking for Recommendations
This work presents a fairness-aware variation of the Maximal Marginal Relevance re-ranking method which uses representations of demographic groups computed using a labeled dataset to incorporate fairness with respect to these demographic groups.
Balanced Neighborhoods for Multi-sided Fairness in Recommendation
This paper explores the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes, and shows that a modified version of the Sparse Linear Method can be used to improve the balance of user and item neighborhoods.
Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists
This work introduces a novel metric for auditing group fairness in ranked lists, and shows that determining fairness of a ranked output necessitates knowledge (or a model) of the end-users of the particular service.
Fairness in Machine Learning: A Survey
An overview of the different schools of thought and approaches to mitigating (social) biases and increasing fairness in the machine learning literature is provided; approaches are organised into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorized into a further 11 method areas.
Opportunistic Multi-aspect Fairness through Personalized Re-ranking
It is shown that the opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches, and does so across multiple fairness dimensions.
Policy Learning for Fairness in Ranking
This work proposes a general LTR framework that can optimize a wide range of utility metrics while satisfying fairness-of-exposure constraints with respect to the items, and provides a new LTR algorithm called Fair-PG-Rank for directly searching the space of fair ranking policies via a policy-gradient approach.