Corpus ID: 234470087

Fairness and Discrimination in Information Access Systems

Michael D. Ekstrand, Anubrata Das, Robin Burke, and Fernando Diaz
Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems. While fair information access shares many commonalities with fair classification, the multistakeholder nature of information access applications, the rank-based problem setting, the centrality of personalization in many cases, and the role of user response…


Multiversal Simulacra: Understanding Hypotheticals and Possible Worlds Through Simulation
My research agenda is particularly concerned with understanding the human biases that affect information retrieval and recommender systems, and quantifying their impact on the system's operation…
Revisiting Popularity and Demographic Biases in Recommender Evaluation and Effectiveness
It is found that total usage and the popularity of consumed content are strong predictors of recommender performance and also vary significantly across demographic groups, and that utility is higher for users from countries with more representation in the dataset.
Problem Learning: Towards the Free Will of Machines
A machine intelligence pipeline usually consists of six components: problem, representation, model, loss, optimizer, and metric. Researchers have worked hard trying to automate many components of the…


Recommender Systems Fairness Evaluation via Generalized Cross Entropy
It is argued that fairness in recommender systems does not necessarily imply equality, but instead it should consider a distribution of resources based on merits and needs, and a probabilistic framework based on generalized cross entropy is presented.
Fairness Under Composition
This work identifies pitfalls of naive composition and gives general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems.
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
This work presents a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems, and is the first large-scale deployed framework for ensuring fairness in the hiring domain.
iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
A method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness with the utility of classifiers and rankings in downstream applications is introduced.
Using Image Fairness Representations in Diversity-Based Re-ranking for Recommendations
This work presents a fairness-aware variation of the Maximal Marginal Relevance re-ranking method which uses representations of demographic groups computed using a labeled dataset to incorporate fairness with respect to these demographic groups.
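The underlying mechanism can be sketched as a greedy, MMR-style trade-off: at each position, pick the item that best balances relevance against a penalty for over-representing a demographic group. The function signature, parameter names, and penalty form below are illustrative assumptions, not the paper's actual method:

```python
def fair_rerank(items, target_share, lam=0.7, k=10):
    """Greedy MMR-style re-ranking sketch.

    items: list of (item_id, relevance, group) tuples.
    target_share: dict mapping group -> desired proportion in the top-k.
    lam: trade-off weight between relevance and the fairness penalty.
    All names are illustrative, not the paper's API.
    """
    selected, counts = [], {g: 0 for g in target_share}
    pool = list(items)
    while pool and len(selected) < k:
        def score(item):
            _, rel, g = item
            # Penalty grows once a group exceeds its target share
            # of the positions filled so far.
            share = counts[g] / max(len(selected), 1)
            over = max(0.0, share - target_share[g])
            return lam * rel - (1 - lam) * over
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best[0])
        counts[best[2]] += 1
    return selected


items = [("a", 0.9, "M"), ("b", 0.8, "M"), ("c", 0.7, "M"), ("d", 0.6, "F")]
# The minority-group item "d" is promoted above higher-relevance items
# once group "M" exceeds its target share.
print(fair_rerank(items, {"M": 0.5, "F": 0.5}, lam=0.7, k=3))
```

As in classic MMR, lam near 1 reduces to pure relevance ranking, while smaller values trade accuracy for a more balanced group distribution.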
Balanced Neighborhoods for Multi-sided Fairness in Recommendation
This paper explores the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes and shows that a modified version of the Sparse Linear Method can be used to improve the balance of user and item neighborhoods.
Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists
This work introduces a novel metric for auditing group fairness in ranked lists, and shows that determining the fairness of a ranked output necessitates knowledge (or a model) of the end-users of the particular service.
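To make the role of user attention concrete, a common simplification is to weight each rank position by a logarithmic attention model and aggregate exposure per group; under such a model, the same set of items yields different group exposure depending on ordering. This sketch assumes that attention model and uses illustrative names, not the paper's actual metric:

```python
import math

def group_exposure(ranking, groups):
    """Position-weighted exposure per group.

    ranking: list of item ids in rank order.
    groups: dict mapping item id -> group label.
    Uses a logarithmic attention model, 1 / log2(rank + 1);
    illustrative sketch, not the paper's metric.
    """
    exposure = {}
    for rank, item in enumerate(ranking, start=1):
        g = groups[item]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(rank + 1)
    return exposure


groups = {"a": "M", "b": "F"}
# The top-ranked item's group receives more attention-weighted exposure.
print(group_exposure(["a", "b"], groups))
print(group_exposure(["b", "a"], groups))
```

Auditing fairness then amounts to comparing per-group exposure against a target, which is why the choice of attention model (a model of the end-users) matters.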
Fairness in Machine Learning: A Survey
An overview of the different schools of thought and approaches to mitigating (social) biases and increasing fairness in the machine learning literature is provided; approaches are organised into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorised into a further 11 method areas.
Policy Learning for Fairness in Ranking
This work proposes a general LTR framework that can optimize a wide range of utility metrics while satisfying fairness of exposure constraints with respect to the items, and provides a new LTR algorithm called Fair-PG-Rank for directly searching the space of fair ranking policies via a policy-gradient approach.
Opportunistic Multi-aspect Fairness through Personalized Re-ranking
It is shown that the opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches and does so across multiple fairness dimensions.