Algorithmic Bias: A Counterfactual Perspective

Bo Cowgill and Catherine Tucker
We discuss an alternative approach to measuring bias and fairness in machine learning: counterfactual evaluation. In many practical settings, the alternative to a biased algorithm is not an unbiased one but another decision method, such as another algorithm or human discretion. We discuss statistical techniques necessary for counterfactual comparisons, which enable researchers to quantify relative biases without access to the underlying algorithm or its training data. We close by discussing the…
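The comparison the abstract describes can be illustrated with a small sketch. All names and the synthetic data below are invented for illustration and are not taken from the paper: two decision methods (an algorithm and a human) each approve applicants, and an outcome-test-style gap is computed for each method from decisions and realized outcomes alone, without inspecting either decision process.

```python
import random

random.seed(0)

# Synthetic data, invented for illustration: each applicant has a group,
# an approval decision from an algorithm, an approval decision from a
# human, and an eventual outcome (repaid or not).
def make_record():
    group = random.choice(["A", "B"])
    repaid = random.random() < (0.7 if group == "A" else 0.6)
    algo_approved = random.random() < 0.5
    human_approved = random.random() < 0.5
    return (group, algo_approved, human_approved, repaid)

data = [make_record() for _ in range(10_000)]

def success_rate(decider_idx, group):
    """Repayment rate among applicants a given decider approved."""
    approved = [r for r in data if r[decider_idx] and r[0] == group]
    return sum(r[3] for r in approved) / len(approved)

# Outcome-test logic: if a decider effectively demands a higher success
# rate from one group than the other before approving, the gap between
# groups suggests bias. Comparing the two deciders' gaps quantifies
# *relative* bias without access to either decision process.
algo_gap = success_rate(1, "A") - success_rate(1, "B")
human_gap = success_rate(2, "A") - success_rate(2, "B")
print(f"algorithm gap: {algo_gap:.3f}  human gap: {human_gap:.3f}")
```

Here both deciders approve at random, so their gaps simply reflect the groups' base rates and come out similar; with real decisions, a larger gap for one method would indicate that it is relatively more biased.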
AI and Algorithmic Bias: Source, Detection, Mitigation and Implications
This tutorial discusses five important aspects of algorithmic bias, including its definition, the notions of fairness that policy makers, practitioners, and academic researchers have used and proposed, and methods for bias detection.
Mitigating Bias in Algorithmic Systems - A Fish-Eye View
The literature describes three steps toward a comprehensive treatment – bias detection, fairness management and explainability management – and underscores the need to work from within the system as well as from the perspective of stakeholders in the broader context.
What is the Bureaucratic Counterfactual? Categorical versus Algorithmic Prioritization in U.S. Social Policy
There is growing concern about governments’ use of algorithms to make high-stakes decisions. While an early wave of research focused on algorithms that predict risk to allocate punishment and…
Procedural Justice and Risk-Assessment Algorithms
Statistical algorithms are increasingly used in the criminal justice system. Much of the recent scholarship on the use of these algorithms has focused on their "fairness," typically defined as…
How Machine Learning Mitigates Racial Bias in the U.S. Housing Market
  • G. Lu, Economics, SSRN Electronic Journal, 2019
I examine racial bias in the most popular home valuation algorithm and study the algorithm’s impact on racial bias in transaction prices. I find statistically significant but economically small…
Algorithmic Risk Assessment in the Hands of Humans
We evaluate the impacts of adopting algorithmic predictions of future offending (risk assessments) as an aid to judicial discretion in felony sentencing. We find that judges' decisions are influenced…
Correspondences between Privacy and Nondiscrimination: Why They Should Be Studied Together
The paper shows how the introduced correspondence allows results from one area of research to be used for the other; in particular, privacy admits both Bayesian and frequentist interpretations, whereas nondiscrimination is limited to the frequentist interpretation.
Fragile Algorithms and Fallible Decision-Makers: Lessons from the Justice System
Algorithms (in some form) are already widely used in the criminal justice system. We draw lessons from this experience for what is to come for the rest of society as machine learning diffuses. We…
Human intervention in automated decision-making: Toward the construction of contestable systems
The paper advances the thesis that proper protection of the rights of data subjects is feasible only if there are means for contesting decisions based solely on automated processing, and that contestability is not an afterthought but a requirement at each stage of an artificial intelligence system's lifecycle.
Bias in word embeddings
A new technique for bias detection in gendered languages is developed and used to compare bias in embeddings trained on Wikipedia and on political social media data, and it is shown that existing biases are transferred to downstream machine learning models.
False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used across the Country to Predict Future Criminals. And It's Biased against Blacks"
ProPublica recently released a much-heralded investigative report claiming that a risk assessment tool (known as COMPAS) used in criminal justice is biased against black defendants. The…
Fairer and more accurate, but for whom?
A model comparison framework is introduced for automatically identifying subgroups in which the differences between models are most pronounced, with a primary focus on identifying subgroups where the models differ in terms of fairness-related quantities such as racial or gender disparities.
Human Decisions and Machine Predictions
While machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals.
Algorithmic bias? An empirical study into apparent gender-based discrimination in the display of STEM career ads
We explore data from a field test of how an algorithm delivered ads promoting job opportunities in the Science, Technology, Engineering and Math (STEM) fields. This ad was explicitly intended to be…
Estimating causal effects of treatments in randomized and nonrandomized studies.
A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating…
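The core benefit of randomization discussed in this line of work can be shown in a minimal sketch (synthetic data and effect size invented for illustration): when treatment assignment is random, treated and control units are exchangeable, so a simple difference in mean outcomes is an unbiased estimate of the average treatment effect.

```python
import random

random.seed(1)

# Synthetic example: baseline outcomes plus a constant treatment effect.
TRUE_EFFECT = 2.0
n = 50_000
baseline = [random.gauss(10, 3) for _ in range(n)]
treated = [random.random() < 0.5 for _ in range(n)]  # randomized assignment
outcome = [y + TRUE_EFFECT if t else y for y, t in zip(baseline, treated)]

# Because assignment is independent of baseline outcomes, the difference
# in mean outcomes between arms recovers the true effect on average,
# without matching or modeling the extraneous variation.
t_mean = sum(y for y, t in zip(outcome, treated) if t) / sum(treated)
c_mean = sum(y for y, t in zip(outcome, treated) if not t) / (n - sum(treated))
print(f"difference-in-means estimate: {t_mean - c_mean:.2f}")
```

Without randomization (e.g., if sicker units were more likely to be treated), the same estimator would confound the treatment effect with baseline differences, which is the gap that matching and related methods try to close.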
Automating Judgement and Decisionmaking: Theory and Evidence from Résumé Screening
What types of decisionmaking tasks are better automated? And which are better left to judgement? I develop a formal model of the comparative advantages of human judgement and machines in…
Towards A Rigorous Science of Interpretable Machine Learning
This position paper defines interpretability and describes when interpretability is needed (and when it is not), and suggests a taxonomy for rigorous evaluation and exposes open questions towards a more rigorous science of interpretable machine learning.
How algorithms impact judicial decisions
  • 2017