Corpus ID: 235765682

The Price of Diversity

Authors: H. Bandi, Dimitris Bertsimas

Related papers


Certifying and Removing Disparate Impact
This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
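The predictability test can be sketched in a few lines. This is a hypothetical illustration, not the paper's exact procedure: if a classifier can predict the protected attribute from the remaining features with a low balanced error rate (BER), the dataset may admit disparate impact. The feature name `zip_band`, the toy data, and the trivial predictor are all invented for illustration.

```python
def balanced_error_rate(y_true, y_pred):
    """BER = average of the per-class error rates."""
    classes = set(y_true)
    errs = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        wrong = sum(1 for i in idx if y_pred[i] != c)
        errs.append(wrong / len(idx))
    return sum(errs) / len(errs)

# Toy data: a feature strongly correlated with the protected group.
protected = [0, 0, 0, 0, 1, 1, 1, 1]
zip_band  = [0, 0, 0, 1, 1, 1, 1, 0]

# A trivial predictor of the protected attribute from the other feature.
predicted = [1 if z == 1 else 0 for z in zip_band]

ber = balanced_error_rate(protected, predicted)
print(f"BER = {ber:.3f}")  # low BER -> protected class is predictable
```

A low BER here signals that the "other attributes" effectively encode the protected class, which is the condition the paper's certification test looks for.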
Inherent Trade-Offs in the Fair Determination of Risk Scores
This work shows several ways in which key notions of fairness are incompatible with each other, and thereby provides a framework for thinking about the trade-offs between them.
When Do the Ends Justify the Means? Evaluating Procedural Fairness
How do people decide whether a political process is fair or unfair? Concerned about principles of justice, people might carefully evaluate procedural fairness based on the facts of the case.
Equality of Opportunity in Supervised Learning
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
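The criterion in its "equality of opportunity" form asks that true positive rates match across groups defined by the sensitive attribute. A minimal sketch of checking that criterion (the toy labels and predictions below are hypothetical, and this measures the gap rather than performing the paper's optimal adjustment):

```python
def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives that the predictor flags as positive."""
    pos = [i for i, y in enumerate(y_true) if y == 1]
    return sum(1 for i in pos if y_pred[i] == 1) / len(pos)

# Toy labels and predictions for two groups (hypothetical data).
group_a = {"y": [1, 1, 1, 0, 0], "yhat": [1, 1, 0, 0, 1]}
group_b = {"y": [1, 1, 0, 0, 0], "yhat": [1, 0, 0, 1, 0]}

tpr_a = true_positive_rate(group_a["y"], group_a["yhat"])  # 2/3
tpr_b = true_positive_rate(group_b["y"], group_b["yhat"])  # 1/2
gap = abs(tpr_a - tpr_b)
print(f"TPR gap = {gap:.3f}")
```

The paper's post-processing step then adjusts the learned predictor (e.g., via group-dependent thresholds) so that this gap is driven to zero.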
Discrimination-aware data mining
This approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.
A Reductions Approach to Fair Classification
The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints.
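The core step of that reduction can be sketched roughly as follows. This is a simplified, hypothetical rendering: a demographic-parity constraint with Lagrange multiplier `lam` turns each training example into a cost-sensitive one, where the cost of predicting 1 is shifted up or down depending on the example's group. The function name, sign convention, and toy numbers are all assumptions for illustration, not the paper's exact formulation.

```python
def cost_of_predicting_one(y, group, lam, group_fracs):
    """0/1 misclassification cost of predicting 1, plus the constraint term."""
    base = 0.0 if y == 1 else 1.0          # 0/1 loss of predicting 1
    sign = +1.0 if group == "a" else -1.0  # constraint: rate(a) - rate(b) <= eps
    return base + sign * lam / group_fracs[group]

group_fracs = {"a": 0.5, "b": 0.5}  # fraction of examples in each group
lam = 0.2                           # current Lagrange multiplier

# For a group-a negative, predicting 1 costs the usual loss plus a penalty,
# discouraging positive predictions that widen the parity gap.
print(cost_of_predicting_one(y=0, group="a", lam=lam, group_fracs=group_fracs))
```

The full algorithm alternates between training a cost-sensitive classifier against such costs and updating the multipliers, yielding the randomized classifier the summary describes.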
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
This work demonstrates that common fairness criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and shows how disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
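One way to see the incompatibility numerically is through the confusion-matrix identity FPR = p/(1-p) * (1-PPV)/PPV * TPR from the paper's analysis: if prevalence p differs across groups while PPV and TPR are held equal, the false positive rates cannot also be equal. The prevalence, PPV, and TPR values below are hypothetical, chosen only to make the arithmetic concrete.

```python
def implied_fpr(prevalence, ppv, tpr):
    """False positive rate forced by the identity, given p, PPV, and TPR."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * tpr

ppv, tpr = 0.7, 0.6                  # held equal across both groups
fpr_a = implied_fpr(0.5, ppv, tpr)   # group A: prevalence 0.5
fpr_b = implied_fpr(0.2, ppv, tpr)   # group B: prevalence 0.2
print(f"FPR_A = {fpr_a:.3f}, FPR_B = {fpr_b:.3f}")  # unequal by necessity
```

Equalizing predictive value across groups with different base rates therefore forces unequal error rates, which is exactly the trade-off the summary describes.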
Algorithmic Decision Making and the Cost of Fairness
This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety subject to formal fairness constraints designed to reduce racial disparities. The analysis applies both to algorithms and to human decision makers carrying out structured decision rules.
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
This work empirically demonstrates that its algorithms significantly reduce gender bias in embeddings while preserving their useful properties, such as the ability to cluster related concepts and to solve analogy tasks.
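The neutralization step at the heart of this debiasing approach removes the component of a word vector along a learned gender direction, leaving the vector orthogonal to it. A minimal sketch with toy 3-d vectors (the direction `g` and the vector `v` are invented stand-ins, not real embeddings):

```python
def neutralize(v, g):
    """Project v onto the orthogonal complement of the direction g."""
    norm = sum(x * x for x in g) ** 0.5
    g = [x / norm for x in g]                      # unit gender direction
    dot = sum(a * b for a, b in zip(v, g))         # component of v along g
    return [a - dot * b for a, b in zip(v, g)]     # v with that component removed

g = [1.0, 0.0, 0.0]      # hypothetical gender direction
v = [0.4, 0.3, 0.5]      # hypothetical embedding of a gender-neutral word
print(neutralize(v, g))  # component along g removed
```

After neutralization the vector's dot product with the gender direction is zero, so analogy and clustering structure in the remaining dimensions is left intact.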
Computational Fairness: Preventing Machine-Learned Discrimination
This work proposes two methods of modifying data, called Combinatorial and Geometric repair, and shows that these repairs perform favorably in terms of training classifiers that are both accurate and unbiased.