Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing

@inproceedings{Raji2020ClosingTA,
  title={Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing},
  author={Inioluwa Deborah Raji and Andrew Smart and Rebecca N. White and Margaret Mitchell and Timnit Gebru and Ben Hutchinson and Jamila Smith-Loud and Daniel Theron and Parker Barnes},
  booktitle={Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency},
  year={2020}
}
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In… 

Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance
TLDR
It is concluded that the turn toward audits alone is unlikely to achieve actual algorithmic accountability, and sustained focus on institutional design will be required for meaningful third party involvement.
Towards a multi-stakeholder value-based assessment framework for algorithmic systems
TLDR
A value-based assessment framework that is not limited to bias auditing and that covers prominent ethical principles for algorithmic systems is developed, and it is argued that it is necessary to include stakeholders that present diverse standpoints to systematically negotiate and consolidate value and criteria tensions.
Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation
TLDR
This article describes and discusses the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes.
AI audits for assessing design logics and building ethical systems: the case of predictive policing algorithms
TLDR
The paper uses the set of technologies known as predictive policing algorithms as a case example to illustrate how theoretical assumptions can pose adverse social consequences and should therefore be systematically evaluated during audits if the objective is to detect unknown risks, avoid AI harms, and build ethical systems.
Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems
TLDR
It is argued that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally relevant form of accountability for ADM.
Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts
TLDR
It is found that there is a distinct risk of constructing algorithmic impacts as organizationally understandable metrics that are nonetheless inappropriately distant from the harms experienced by people, and which fall short of building the relationships required for effective accountability.
Building and Auditing Fair Algorithms: A Case Study in Candidate Screening
TLDR
A framework for algorithmic auditing is outlined by way of a case study of pymetrics, a startup that uses machine learning to recommend job candidates to its clients, and recommendations are made on how to structure audits to be practical, independent, and constructive.
A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation
Central to a number of scholarly, regulatory, and public conversations about algorithmic accountability is the question of who should have access to documentation that reveals the inner workings…
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
TLDR
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations
TLDR
This article considers the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS, and concludes that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
...

References

Showing 1–10 of 106 references
Algorithmic accountability
  • Hetan Shah
  • Political Science
  • Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • 2018
TLDR
A case is made that public sector bodies holding datasets should be more confident in negotiating terms with the private sector, and that all regulators need to wake up to the challenges posed by changing technology.
Algorithmic Accountability
TLDR
The notion of algorithmic accountability reporting is studied as a mechanism for elucidating and articulating the power structures, biases, and influences that computational artifacts exercise in society.
Evolution of Auditing: From the Traditional Approach to the Future Audit
TLDR
The purpose of this white paper is to discuss the evolution of auditing and the history of the traditional audit, and to provide an improved understanding of the technology-driven changes that have taken and are taking place, so that readers might better envision how accountants will continue to be the assurance providers of choice in the evolving real-time global economy.
Artificial Intelligence: the global landscape of ethics guidelines
TLDR
A global convergence is emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility, and privacy), with substantive divergence in relation to how these principles are interpreted; why they are deemed important; what issue, domain, or actors they pertain to; and how they should be implemented.
Fairness and Abstraction in Sociotechnical Systems
TLDR
This paper outlines this mismatch with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware than traditional data science, and suggests ways in which technical designers can mitigate the traps by refocusing design on process rather than solutions.
Principles alone cannot guarantee ethical AI
TLDR
Brent Mittelstadt highlights significant differences between medical practice and AI development that suggest a principled approach may not work in the case of AI.
Ethics and the Auditing Culture: Rethinking the Foundation of Accounting and Auditing
Although the foundation of financial accounting and auditing has traditionally been based upon a rule-based framework, the concept of a principle-based approach has been periodically advocated since…
The Internal Audit Function: Perceptions of Internal Audit Roles, Effectiveness, and Evaluation
The purpose of this paper is to provide insights into the current roles and responsibilities of the internal audit (IA) function and the factors perceived to be necessary to ensure its…
Accountable Algorithms
Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police…
A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics
TLDR
This position paper interprets technical bias as an epistemological problem and emergent bias as a dynamical feedback phenomenon, and points to value-sensitive design methodologies to revisit the design and implementation process of automated decision-making systems.
...