Corpus ID: 239016205

Towards More Accountable Search Engines: Online Evaluation of Representation Bias

Aldo Lipani, Florina Piroi, and Emine Yilmaz
Information availability affects people's behavior and their perception of the world. Notably, people rely on search engines to satisfy their information needs. Search engines deliver results relevant to user requests, usually without being held, or making themselves, accountable for the information they deliver, which may harm people's lives and, in turn, society. This potential risk calls for the development of bias-evaluation mechanisms in order to empower users in judging the results of search… 


Where the Earth is flat and 9/11 is an inside job: A comparative algorithm audit of conspiratorial information in web search results
A comparative algorithm audit to examine the distribution of conspiratorial information in search results across five search engines finds that all search engines except Google consistently displayed conspiracy-promoting results and returned links to conspiracy-dedicated websites in their top results, although the share of such content varied across queries.


Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
This work presents a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems, and is the first large-scale deployed framework for ensuring fairness in the hiring domain.
FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval
The purpose of the SIGIR 2019 workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety (FACTS-IR) was to explore challenges in responsible information retrieval system development and deployment and draft an actionable research agenda.
From Freebase to Wikidata: The Great Migration
The Primary Sources Tool, which aims to facilitate this and future data migrations, is described, along with a report on the ongoing transfer efforts and data-mapping challenges.
Bias in computer systems
It is suggested that freedom from bias should be counted among the select set of criteria, including reliability, accuracy, and efficiency, according to which the quality of systems in use in society should be judged.
Measuring Fairness in Ranked Outputs
A data generation procedure is developed that allows one to systematically control the degree of unfairness in the output. The proposed fairness measures for ranked outputs are applied to several real datasets, and the results show potential for improving the fairness of ranked outputs while maintaining accuracy.
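The rank-discounted idea behind such measures can be sketched directly. The following is a minimal illustration in the spirit of the paper's normalized discounted difference (rND), not the exact metric; the function name, default cutoffs, and the omission of the worst-case normalizer are my own simplifications:

```python
import math

def discounted_difference(ranking, protected, cutoffs=(10, 20, 30)):
    """Rank-discounted representation difference: at each cutoff i,
    compare the protected group's share in the top-i prefix with its
    share in the whole ranking, discounting deeper cutoffs by log2(i).
    (The published rND additionally divides by the worst-case value.)"""
    n = len(ranking)
    overall = sum(1 for item in ranking if item in protected) / n
    score = 0.0
    for i in cutoffs:
        if i > n:
            break
        prefix_share = sum(1 for item in ranking[:i] if item in protected) / i
        score += abs(prefix_share - overall) / math.log2(i)
    return score  # 0.0 when every prefix mirrors the overall proportion
```

A ranking that front-loads one group scores higher (more unfair) than one that interleaves the groups throughout the list.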
Equality of Opportunity in Supervised Learning
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
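The criterion can be checked directly: equal opportunity asks that the true positive rate be the same across groups. A minimal sketch assuming binary labels and exactly two groups (the function name and inputs are illustrative):

```python
def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rates between two groups:
    among actual positives, how often does each group receive a
    positive prediction? A gap of 0.0 satisfies equal opportunity."""
    def tpr(g):
        preds = [p for t, p, gr in zip(y_true, y_pred, group)
                 if t == 1 and gr == g]
        return sum(preds) / len(preds) if preds else 0.0
    groups = sorted(set(group))
    return abs(tpr(groups[0]) - tpr(groups[1]))
```

The paper's post-processing step then adjusts a learned predictor (e.g. via group-specific thresholds) until this gap vanishes.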
FA*IR: A Fair Top-k Ranking Algorithm
This work defines and solves the Fair Top-k Ranking problem, and presents an efficient algorithm, which is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list.
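The statistical test at the core of this approach can be sketched as follows: for each prefix length k, a one-sided binomial test with target proportion p yields a minimum number of protected candidates below which fairness is rejected at significance level alpha. This is a rough reconstruction of that bound, without the paper's multiple-testing correction:

```python
from math import comb

def min_protected(k, p, alpha):
    """Smallest number of protected candidates a top-k prefix must
    contain so that a one-sided binomial test with target proportion p
    does not reject group fairness at significance level alpha."""
    cdf = 0.0
    for m in range(k + 1):
        cdf += comb(k, m) * p**m * (1 - p)**(k - m)
        if cdf > alpha:  # seeing <= m protected is no longer 'too unlikely'
            return m
    return k
```

For example, with p = 0.5 and alpha = 0.1, this bound requires at least 3 protected candidates in a top-10 prefix; the requirement grows with k and with p.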
Stereotype Threat and Women's Math Performance
When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament stereotype threat and hypothesize that…
Evaluation metrics for measuring bias in search engine results
Yucel Saygin, and Emine Yilmaz. Information Retrieval Journal, 2021.