Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems

  • Joshua A. Kroll
  • Published 23 January 2021
  • Computer Science
  • Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
Accountability is widely understood as a goal for well-governed computer systems and is a sought-after value in many governance contexts. But how can it be achieved? Recent work on standards for governable artificial intelligence systems offers a related principle: traceability. Traceability requires establishing not only how a system worked but how it was created and for what purpose, in a way that explains why a system has particular dynamics or behaviors. It connects records of how the…
System Cards for AI-Based Decision-Making for Public Policy
This work proposes a unifying framework of system accountability benchmarks for formal audits of artificial-intelligence-based decision-aiding systems in public policy, as well as system cards that serve as scorecards presenting the outcomes of such audits.
The Many Facets of Trust in AI: Formalizing the Relation Between Trust and Fairness, Accountability, and Transparency
Efforts to promote fairness, accountability, and transparency are assumed to be critical in fostering Trust in AI (TAI), but extant literature is frustratingly vague regarding this “trust”. The lack…
Crowdsourcing Impacts: Exploring the Utility of Crowds for Anticipating Societal Impacts of Algorithmic Decision Making
This work employs crowdsourcing as a means of participatory foresight to uncover four different types of impact areas based on a set of governmental algorithmic decision-making tools, and suggests that this method is effective at leveraging the cognitive diversity of the crowd to uncover a range of issues.
German AI Start-Ups and “AI Ethics”: Using A Social Practice Lens for Assessing and Implementing Socio-Technical Innovation
The current AI ethics discourse focuses on developing computational interpretations of ethical concerns, normative frameworks, and concepts for socio-technical innovation. There is less emphasis on…
The Conflict Between Explainable and Accountable Decision-Making Algorithms
It is suggested that XAI systems providing post-hoc explanations could be seen as blameworthy agents, obscuring the responsibility of developers in the decision-making process; a defense of hard regulation to prevent designers from escaping responsibility is offered.
Transparency, Compliance, And Contestability When Code Is Law
Both technical security mechanisms and legal processes serve to deal with misbehaviour according to a set of norms. While they share general similarities, there are also clear…
Integrating Behavioral, Economic, and Technical Insights to Understand and Address Algorithmic Bias: A Human-Centric Perspective
This commentary argues that algorithmic bias is not just a technical problem, and its successful resolution requires deep insights into individual and organizational behavior, economic incentives, as well as complex dynamics of the sociotechnical systems in which the ADM models are embedded.
Towards a Standard for Identifying and Managing Bias in Artificial Intelligence
To successfully manage the risks of AI bias, organizations must operationalize values and create new norms around how AI is built and deployed, according to experts in the area of Trustworthy and Responsible AI.
From transparency to accountability of intelligent systems: Moving beyond aspirations
A number of governmental and nongovernmental organizations have made significant efforts to encourage the development of artificial intelligence in line with a series of aspirational…
Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning
This analysis brings together recent scholarship on relational accountability frameworks, discusses how existing barriers complicate instantiating a unified moral, relational framework in practice for data-driven algorithmic systems, and uncovers new challenges for accountability that these systems present.


Accountability in Computer Systems
Capturing human values such as fairness, privacy, and justice in software systems is challenging. Values are abstract and may be contested, or at least viewed differently by different stakeholders…
Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
A distributed requirements management framework for legal compliance and accountability
The Automated Administrative State: A Crisis of Legitimacy
The legitimacy of the administrative state is premised on our faith in agency expertise. Despite their extra-constitutional structure, administrative agencies have been on firm footing for a long…
The fallacy of inscrutability
  • Joshua A. Kroll
  • Computer Science
  • Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • 2018
It is argued that algorithms are fundamentally understandable pieces of technology, and that policy should not accede to the idea that some systems are of necessity inscrutable.
Open Code Governance
  • D. Citron
  • Computer Science, Political Science
  • 2008
In revealing the programmer's instructions to the computer, open code shines light on important regulatory choices currently hidden from both elected policy-makers and the public at large.
Towards Regulatory Compliance: Extracting Rights and Obligations to Align Requirements with Regulations
This work presents the methodology for extracting and prioritizing rights and obligations from regulations and shows how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations.
A standard audit trail format
This report presents the author's proposed format for a standard log record, shows how and where the translation should be done, and demonstrates how log records from several disparate systems would be put into this format.
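The core idea — translating heterogeneous log records into one common schema so that audit tools can process them uniformly — can be sketched as follows. This is a hypothetical illustration under an invented schema (the field names and input formats here are assumptions, not the format proposed in the report):

```python
# Hypothetical sketch: normalize audit records from two disparate sources
# (a syslog-style line and a CSV-style row) into one common record schema.

def normalize_syslog(line):
    # e.g. "2021-01-23T10:00:00Z sshd login user=alice result=success"
    ts, source, action, *fields = line.split()
    attrs = dict(f.split("=", 1) for f in fields)
    return {"time": ts, "source": source, "action": action,
            "subject": attrs.get("user"), "outcome": attrs.get("result")}

def normalize_csv_audit(row):
    # e.g. ["alice", "login", "OK", "2021-01-23 10:00:00"]
    user, action, status, when = row
    return {"time": when.replace(" ", "T") + "Z", "source": "app",
            "action": action, "subject": user,
            "outcome": "success" if status == "OK" else "failure"}

record_a = normalize_syslog(
    "2021-01-23T10:00:00Z sshd login user=alice result=success")
record_b = normalize_csv_audit(
    ["alice", "login", "OK", "2021-01-23 10:00:00"])

# Both records now share the same fields, regardless of origin.
assert set(record_a) == set(record_b)
```

Once every source is mapped into the shared schema, downstream audit analysis only has to understand one record format rather than one per system.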
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability
A definition of algorithmic accountability based on accountability theory and the algorithmic accountability literature is provided, which pays extra attention to accountability risks in algorithmic systems.