Ethical Assurance: A practical approach to the responsible design, development, and deployment of data-driven technologies

Christopher Burr and David Leslie
This article makes several contributions to the interdisciplinary project of responsible research and innovation in data science and AI. First, it critically analyses current efforts to establish practical mechanisms for algorithmic assessment, which are used to operationalise normative principles such as sustainability, accountability, transparency, fairness, and explainability, in order to identify limitations and gaps in current approaches. Second, it provides an…

Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
A Review of the ICO’s Draft Guidance on the AI Auditing Framework
This work summarises and critically evaluates each section of the ICO's draft guidance on the AI auditing framework, offering feedback in line with the call for consultation, and presents general recommendations.
Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems
It is argued that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way towards a more holistic and legally relevant form of accountability for ADM.
Safety Cases: An Impending Crisis?
Safety cases have long been required by many safety standards and guidelines. Particularly in the UK, new systems in key sectors such as defence, nuclear and rail need a safety case before they can…
Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability
This article critically interrogates the ideal of transparency, traces some of its roots in scientific and sociotechnical epistemological cultures, and sketches an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.
The Arc of the Data Scientific Universe
This paper explores the scaffolding of normative assumptions that supports Sabina Leonelli's implicit appeal to the values of epistemic integrity and the global public good that conjointly animate…
Developing a framework for responsible innovation
Increasing Trust in AI Services through Supplier's Declarations of Conformity
This paper envisions an SDoC for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.