Certifying and Removing Disparate Impact

@inproceedings{Feldman2015CertifyingAR,
  title={Certifying and Removing Disparate Impact},
  author={M. Feldman and S. Friedler and J. Moeller and C. Scheidegger and S. Venkatasubramanian},
  booktitle={KDD '15},
  year={2015}
}
  • M. Feldman, S. Friedler, J. Moeller, C. Scheidegger, S. Venkatasubramanian
  • Published in KDD '15 (2015)
  • Computer Science, Mathematics
  • What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process. When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the…
    680 Citations
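
The abstract's notion of "widely different outcomes" is usually quantified with the EEOC's "80% rule": a selection process shows disparate impact when the selection rate for the protected group falls below 80% of the rate for the other group. The sketch below is illustrative only, not the authors' code; the function name, example data, and threshold handling are assumptions made for this example.

# Minimal sketch of the "80% rule" disparate-impact ratio (illustrative only).
def disparate_impact_ratio(outcomes, groups, protected, threshold=0.8):
    """Return (ratio, flagged) for binary selection outcomes (1 = selected).

    outcomes  -- iterable of 0/1 selection decisions
    groups    -- iterable of group labels aligned with outcomes
    protected -- label of the protected group
    threshold -- ratio below which disparate impact is flagged (80% rule)
    """
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    other = [o for o, g in zip(outcomes, groups) if g != protected]
    if not prot or not other:
        raise ValueError("both groups must be non-empty")
    rate_prot = sum(prot) / len(prot)
    rate_other = sum(other) / len(other)
    if rate_other == 0:
        return float("inf"), False
    ratio = rate_prot / rate_other
    return ratio, ratio < threshold

# Example: group "A" is selected at 25%, group "B" at 75%,
# so the ratio is about 0.33 and disparate impact is flagged.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["A", "A", "B", "B", "A", "A", "B", "B"]
ratio, flagged = disparate_impact_ratio(outcomes, groups, protected="A")
print(f"ratio = {ratio:.2f}, disparate impact flagged: {flagged}")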

    Citations of this paper (selected)

    • Encoding Fair Representations (Alexander Latenko, 2019)
    • Avoiding Disparate Impact with Counterfactual Distributions (3 citations)
    • Assessing algorithmic fairness with unobserved protected class using data combination (15 citations)
    • Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions (20 citations; Highly Influenced)
    • Evaluating Fairness Metrics in the Presence of Dataset Bias (6 citations; Highly Influenced)
    • A statistical framework for fair predictive algorithms (43 citations)
    • FlipTest: fairness testing via optimal transport (5 citations)
    • Avoiding Discrimination with Counterfactual Distributions (1 citation; Highly Influenced)