Corpus ID: 59336269

A Framework for Understanding Unintended Consequences of Machine Learning

  • H. Suresh, J. Guttag
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • As machine learning increasingly affects people and society, it is important that we strive for a comprehensive and unified understanding of potential sources of unwanted consequences. For instance, downstream harms to particular groups are often blamed on "biased data," but this concept encompasses too many issues to be useful in developing solutions. In this paper, we provide a framework that partitions sources of downstream harm in machine learning into six distinct categories spanning the…
    68 Citations
    • Bias in machine learning - what is it good for? (Highly Influenced)
    • Bias in Machine Learning What is it Good (and Bad) for? (Highly Influenced)
    • Fairness in Machine Learning: A Survey (1 citation)
    • Lessons from archives: strategies for collecting sociocultural data in machine learning (23 citations)
    • A Survey on Bias and Fairness in Machine Learning (194 citations, Highly Influenced)
    • To Split or Not to Split: The Impact of Disparate Treatment in Classification (3 citations)
    • A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics (11 citations)
    • Learning Tasks for Multitask Learning: Heterogenous Patient Populations in the ICU (27 citations)
    • Decoupled classifiers for fair and efficient machine learning (22 citations)
    • The Myth in the Methodology: Towards a Recontextualization of Fairness in Machine Learning (25 citations)
    • On formalizing fairness in prediction with machine learning (67 citations)
    • Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment (484 citations)
    • Model Cards for Model Reporting (220 citations, Highly Influential)
    • Certifying and Removing Disparate Impact (708 citations)
    • Fairness and Abstraction in Sociotechnical Systems (154 citations)