Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning

@inproceedings{Cooper2022AccountabilityIA,
  title={Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning},
  author={A. Feder Cooper and Benjamin Laufer and Emanuel Moss and Helen Nissenbaum},
  booktitle={2022 ACM Conference on Fairness, Accountability, and Transparency},
  year={2022}
}
In 1996, Accountability in a Computerized Society [95] issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems. Nissenbaum [95] described four barriers to accountability that computerization presented, which we revisit in relation to the ascendance of data-driven algorithmic systems—i.e., machine learning or artificial intelligence—to uncover new challenges for accountability that these systems… 
Making the Unaccountable Internet: The Changing Meaning of Accounting in the Early ARPANET
Contemporary concerns over the governance of technological systems often run up against narratives about the technical infeasibility of designing mechanisms for accountability. While in recent AI
Non-Determinism and the Lawlessness of ML Code
TLDR
It is demonstrated that ML code falls outside the cyberlaw frame of treating "code as law," since that frame assumes code is deterministic; the law must therefore do work to bridge the gap between its current individual-outcome focus and the distributional approach that is recommended.
Achieving Downstream Fairness with Geometric Repair
TLDR
It is argued that fairer classification outcomes can be produced through the development of setting-specific interventions, and it is shown that attaining distributional parity minimizes rate disparities across all thresholds in the up/downstream setting.
Adversarial Scrutiny of Evidentiary Statistical Software
TLDR
This work defines and operationalizes the notion of robust adversarial testing for defense use by drawing on a large body of recent work in robust machine learning and algorithmic fairness, and demonstrates how this framework both standardizes the process for scrutinizing such tools and empowers defense lawyers to examine their validity for the instances most relevant to the case at hand.
Accounting for Offensive Speech as a Practice of Resistance
Tasks such as toxicity detection, hate speech detection, and online harassment detection have been developed for identifying interactions involving offensive speech. In this work we articulate the
Examining Responsibility and Deliberation in AI Impact Statements and Ethics Reviews
The artificial intelligence research community is continuing to grapple with the ethics of its work by encouraging researchers to discuss potential positive and negative consequences. Neural

References

Showing 1-10 of 199 references
Machine Learning Techniques for Accountability
TLDR
This short overview article begins the process of mapping the categories of methods one could use to assess whether an AI system is meeting its objectives; it does not focus on any particular objective (such as safety, fairness, or robustness).
Algorithmic Accountability
TLDR
The notion of algorithmic accountability reporting as a mechanism for elucidating and articulating the power structures, biases, and influences that computational artifacts exercise in society is studied.
Fairness and Abstraction in Sociotechnical Systems
TLDR
This paper outlines this mismatch with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware than traditional data science, and suggests ways in which technical designers can mitigate the traps by refocusing design in terms of process rather than solutions.
Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
TLDR
The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation
Central to a number of scholarly, regulatory, and public conversations about algorithmic accountability is the question of who should have access to documentation that reveals the inner workings,
Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems
TLDR
This work describes how the trade-off takes shape for distributed machine learning systems, explains how accountability mechanisms encourage more just, transparent governance aligned with public values, and highlights gaps between existing US risk assessment standards and what these systems require to be properly assessed.
The Automated Administrative State: A Crisis of Legitimacy
The legitimacy of the administrative state is premised on our faith in agency expertise. Despite their extra-constitutional structure, administrative agencies have been on firm footing for a long
Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems
TLDR
This map reframes existing discussions around accountability and transparency, using the principle of traceability to show how, when, and why transparency can be deployed to serve accountability goals and thereby improve the normative fidelity of systems and their development processes.
Binary Governance: Lessons from the GDPR's Approach to Algorithmic Accountability
  • M. Kaminski
  • SSRN Electronic Journal
  • 2019
Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature
Algorithmic accountability in public administration: the GDPR paradox
TLDR
This paper examines the interplay of the fundamental guarantees of due process, judicial review, and equal treatment in the prospect of algorithmic decision-making by public authorities under the GDPR.
...