
Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

@inproceedings{Edwards2017SlaveTT,
  title={Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For},
  author={Lilian Edwards and Michael Veale},
  year={2017}
}
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias… 

Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?

TLDR
This work outlines recent debates on the limited provisions in European data protection law, and introduces and analyzes newer explanation rights in French administrative law and the draft modernized Council of Europe Convention 108.

Black box algorithms and the rights of individuals: no easy solution to the "explainability" problem

TLDR
The aim of this article is to demonstrate that the limited interpretability of "black box" machine learning models cannot be considered a sufficient reason to completely abandon this legal safeguard.

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

TLDR
It is shown that post-hoc explanation algorithms are unsuitable for achieving the transparency objectives inherent in legal norms, and that there is a need to discuss more explicitly the objectives underlying "explainability" obligations, as these can often be better achieved through other mechanisms.
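To make concrete what a "post-hoc explanation" algorithm is, the sketch below computes permutation feature importance, a common model-agnostic post-hoc technique. The dataset, model, and parameters are illustrative assumptions, not the specific systems analyzed in the paper.

```python
# Minimal sketch of a model-agnostic post-hoc explanation:
# permutation feature importance on an opaque classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a high-stakes decision dataset (illustrative only).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy; a large drop
# suggests the model relies on that feature. Note this explains the
# model's behaviour after the fact, not the reasons for any one decision.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

The gap between such aggregate, after-the-fact summaries and the decision-level reasons legal norms demand is precisely the mismatch the paper highlights.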

Algorithms that remember: model inversion attacks and data protection law

TLDR
Recent work from the information security literature around "model inversion" and "membership inference" attacks is presented, indicating that the process of turning training data into machine-learned systems is not one-way, and showing how this could lead some models to be legally classified as personal data.
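As a rough illustration of the membership inference attacks surveyed here, the sketch below trains a deliberately overfit classifier and guesses training-set membership from the confidence assigned to the true label; the dataset, model, and threshold are illustrative assumptions, not the attacks studied in the paper.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0)

# An overfit model leaks membership: it is more confident on training points.
model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_train, y_train)

def confidence(model, X, y):
    # Confidence the model assigns to the true label of each point.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Guess "member" when confidence exceeds a threshold; measure attack accuracy.
threshold = 0.9
guesses_in = confidence(model, X_train, y_train) > threshold   # mostly True
guesses_out = confidence(model, X_out, y_out) > threshold      # mostly False
attack_acc = (guesses_in.mean() + (1 - guesses_out.mean())) / 2
print(f"membership inference accuracy: {attack_acc:.2f}")  # > 0.5 means leakage
```

An attack accuracy meaningfully above 0.5 shows the model retains information about individual training records, the property underlying the argument that some models could be classified as personal data.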

Accountable Artificial Intelligence: Holding Algorithms to Account

  • M. Busuioc
  • Computer Science
Public Administration Review
  • 2021
TLDR
Drawing on a decidedly public administration perspective, and given the challenges that have thus far become manifest in the field, this work maps out the implications of these systems, and the limitations they pose, for public accountability.

Deep Automation Bias: How to Tackle a Wicked Problem of AI?

TLDR
This work highlights the role of automation and why deep automation bias (DAB) is a meta-risk of AI, and develops a heuristic model for assessing DAB-related risks in AI systems.

To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods

TLDR
This paper addresses the problem of identifying a clear and unambiguous set of metrics for evaluating local linear explanations; both existing and novel metrics defined specifically for this class of explanations are provided in an open Python framework named LEAF.
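The sketch below shows what a "local linear explanation" is (a LIME-style weighted linear surrogate fitted around one instance) together with a simple stability check of the kind such metrics formalize. This is a generic illustration under assumed data and models, not LEAF's actual API.

```python
# Sketch of a LIME-style local linear explanation plus a simple
# stability check (generic illustration, not LEAF's actual API).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

def local_linear_explanation(x, n_samples=500, scale=0.3, rng=None):
    # Perturb around x, query the black box, and fit a proximity-weighted
    # linear surrogate; its coefficients are the "explanation" for x.
    rng = rng or np.random.default_rng()
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    preds = black_box.predict_proba(Z)[:, 1]
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)
    return Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights).coef_

x = X[0]
# Stability: rerun the stochastic explainer and compare coefficients;
# a trustworthy local explanation should not change much between runs.
runs = [local_linear_explanation(x, rng=np.random.default_rng(s)) for s in range(5)]
spread = np.std(runs, axis=0) / (np.abs(np.mean(runs, axis=0)) + 1e-9)
print("coefficient relative spread per feature:", np.round(spread, 2))
```

A large spread between runs is exactly the kind of unreliability that motivates evaluating, rather than merely trusting, local explanations.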

Impossible Explanations?: Beyond explainable AI in the GDPR from a COVID-19 use case scenario

TLDR
The current inability of complex, deep-learning-based machine learning models to make clear causal links between input data and final decisions limits the provision of exact, human-legible reasons behind specific decisions, making satisfactory, fair, and transparent explanations a serious challenge.

The hidden assumptions behind counterfactual explanations and principal reasons

TLDR
It is demonstrated that the utility of feature-highlighting explanations relies on a number of easily overlooked assumptions, including that the recommended change in feature values clearly maps to real-world actions, that features can be made commensurate by looking only at the distribution of the training data, and that features are only relevant to the decision at hand.
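To ground the discussion, the sketch below produces a counterfactual explanation by greedily nudging features until the model's decision flips; real counterfactual methods solve a constrained optimization, and the dataset, model, and step sizes here are illustrative assumptions only.

```python
# Minimal sketch of a counterfactual explanation via greedy feature search:
# find a small change to the input that flips the model's decision.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def greedy_counterfactual(x, model, step=0.1, max_steps=200):
    original = model.predict([x])[0]
    x_cf = x.copy()
    for _ in range(max_steps):
        # Try nudging each feature in each direction; keep the nudge that
        # moves the predicted probability furthest from the original class.
        candidates = [x_cf + d * step * np.eye(len(x))[i]
                      for i in range(len(x)) for d in (-1, 1)]
        probs = model.predict_proba(candidates)[:, original]
        x_cf = candidates[int(np.argmin(probs))]
        if model.predict([x_cf])[0] != original:
            return x_cf  # decision flipped
    return None  # no counterfactual found within the step budget

x = X[0]
x_cf = greedy_counterfactual(x, model)
if x_cf is not None:
    # The delta is the "explanation": had these features been different,
    # the decision would have changed. Whether that delta maps to any
    # real-world action is exactly the hidden assumption the paper probes.
    print("required change:", np.round(x_cf - x, 2))
```

Note that nothing in this procedure guarantees the suggested feature changes are actionable, commensurable, or relevant, which is the paper's point.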
...