European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"

@article{Goodman2017EuropeanUR,
  title={European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"},
  author={Bryce Goodman and Seth Flaxman},
  journal={AI Mag.},
  year={2017},
  volume={38},
  pages={50--57}
}
We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic… 

Understanding algorithmic decision-making: Opportunities and challenges

  • Computer Science
  • 2019
The purpose of this policy options briefing is to highlight the main challenges and suggested policy options to allow society to benefit from the tremendous possibilities of ADS while limiting the risks related to their use.

Identifying Sources of Discrimination Risk in the Life Cycle of Machine Intelligence Applications under New European Union Regulations

This work outlines two key sources of bias in machine intelligence applications that have become pressing in light of the new General Data Protection Regulation introduced by the European Union in April 2016, along with strategies for mitigating those biases and reducing the risks to the public.

Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing

A large-scale study using crowdsourcing to measure how interpretability affects human decision-making, grounded in well-understood principles of behavioral economics, is undertaken; it is the first interdisciplinary study of its kind involving interpretability in ADM models.

Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond

  • M. Brkan
  • Law
  • Int. J. Law Inf. Technol.
  • 2019
The article argues that the GDPR obliges the controller to inform the data subject of the reasons why an automated decision was taken, and that such a right would in principle fit well within the broader framework of the GDPR's quest for a high level of transparency.

Algorithmic Decision-Making and the Problem of Control

It is argued that the relevant loss of control might shape the motivational structure of decision-makers in a way that is ethically problematic, and that the associated costs stemming from this loss of control might make delegating high-stakes decisions to learning algorithms ethically questionable.

Automated Decisions Based on Profiling: Information, Explanation or Justification – That Is The Question!

The article uses multidisciplinary sources from regulatory studies, law, and computer science to understand the multifaceted implications of algorithmic accountability for the protection of personal data and the expectations that individuals may have thereof.

AI-supported decision-making under the general data protection regulation

The purpose of this paper is to analyse the rules of the General Data Protection Regulation on automated decision-making in the age of Big Data and to explore how to ensure the transparency of such decisions.

Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems

It is argued that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally relevant form of accountability for ADM.

Futility of a Right to Explanation

This paper argues that a right to explanation for people subjected to automated decision-making would be very difficult to implement due to technical challenges, and proposes instead an external evaluation of classification models with respect to their correctness and fairness.
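To make the proposed alternative concrete, below is a minimal sketch of what such an external evaluation could look like in Python: given only a model's predictions, the ground-truth labels, and a protected-group indicator, it reports accuracy and the gap in positive-decision rates between groups. The function name and the choice of demographic parity as the fairness measure are illustrative assumptions, not the paper's specific protocol.

```python
import numpy as np

def external_audit(y_true, y_pred, protected):
    """Black-box audit of a deployed classifier using only its outputs.

    Reports overall accuracy and the difference in positive-decision rates
    between the protected (protected == 1) and unprotected (protected == 0)
    groups. Illustrative sketch only; the paper's concrete evaluation
    procedure may differ.
    """
    y_true, y_pred, protected = map(np.asarray, (y_true, y_pred, protected))
    accuracy = (y_true == y_pred).mean()
    parity_gap = y_pred[protected == 1].mean() - y_pred[protected == 0].mean()
    return {"accuracy": accuracy, "demographic_parity_gap": parity_gap}
```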

Right to an Explanation Considered Harmful

Lay and professional reasoning has it that newly introduced data protection regulation in Europe (the GDPR) mandates a ‘right to an explanation’. This has been read as requiring that the machine…

References

Showing 1-10 of 48 references

EU regulations on algorithmic decision-making and a "right to explanation"

It is argued that while this law will pose large challenges for industry, it highlights opportunities for machine learning researchers to take the lead in designing algorithms and evaluation frameworks which avoid discrimination.

Algorithmic Transparency via Quantitative Input Influence

A family of Quantitative Input Influence measures that capture the degree of input influence on system outputs provides a foundation for the design of transparency reports that accompany system decisions and for testing tools useful for internal and external oversight.
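As a rough illustration of an input-influence measure, the sketch below estimates the influence of a single feature by resampling it from its empirical marginal distribution and recording how often the model's decision flips. It assumes a 2-D NumPy feature matrix and a scikit-learn-style `predict` method; it captures the spirit of a unary influence measure rather than reproducing the paper's estimators.

```python
import numpy as np

def unary_influence(model, X, feature_idx, n_rounds=50, rng=None):
    """Estimate one feature's influence as the average rate at which the
    model's prediction changes when that feature is resampled from its
    empirical marginal distribution.

    Assumes X is a 2-D NumPy array and model exposes a scikit-learn-style
    predict(X). Illustrative approximation, not the paper's exact measure.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    base = model.predict(X)
    flip_counts = np.zeros(len(X))
    for _ in range(n_rounds):
        X_perturbed = X.copy()
        # Intervene on the chosen feature by shuffling its observed values.
        X_perturbed[:, feature_idx] = rng.permutation(X[:, feature_idx])
        flip_counts += (model.predict(X_perturbed) != base)
    return (flip_counts / n_rounds).mean()
```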

Better decision support through exploratory discrimination-aware data mining: foundations and empirical evidence

This article discusses the relative merits of constraint-oriented and exploratory discrimination-aware data mining (DADM) from a conceptual viewpoint and considers the case of loan applications to empirically assess the fitness of both approaches for two of their typical usage scenarios: prevention and detection.

Certifying and Removing Disparate Impact

This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
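The core intuition can be sketched in a few lines: a decision rule exhibits disparate impact when its positive-outcome rate for the protected group falls well below that for the unprotected group, and the data admit such a rule when the protected attribute is predictable from the remaining features. The code below, which assumes NumPy arrays and scikit-learn, is an illustrative sketch of both checks, not the paper's exact balanced-error-rate certification procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

def disparate_impact_ratio(y_pred, protected):
    """Four-fifths-rule style ratio:
    P(positive | protected) / P(positive | unprotected)."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    return y_pred[protected == 1].mean() / y_pred[protected == 0].mean()

def protected_class_predictability(X, protected):
    """Balanced accuracy of predicting the protected attribute from the other
    features; high values suggest those features can act as a proxy for the
    protected class. Logistic regression is an arbitrary choice of auditor."""
    X_tr, X_te, p_tr, p_te = train_test_split(X, protected, test_size=0.3,
                                              random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, p_tr)
    return balanced_accuracy_score(p_te, clf.predict(X_te))
```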

Three naive Bayes approaches for discrimination-free classification

Three approaches for making the naive Bayes classifier discrimination-free are presented: modifying the probability of the decision being positive; training one model for every sensitive attribute value and balancing them; and adding a latent variable to the Bayesian model that represents the unbiased label, optimizing the model parameters for likelihood using expectation maximization.
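As a concrete (and simplified) illustration of the second strategy, the sketch below fits a separate Gaussian naive Bayes model for each value of the sensitive attribute, so that the attribute is never used directly as a predictive feature; the balancing step between the per-group models is omitted here. It assumes NumPy arrays, the class and method names are illustrative, and scikit-learn's `GaussianNB` stands in for whatever naive Bayes variant the paper uses.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

class PerGroupNaiveBayes:
    """Simplified sketch of the 'one model per sensitive attribute value'
    strategy: each group gets its own naive Bayes classifier, and predictions
    are routed to the model matching the instance's group. The balancing of
    positive rates across groups described in the paper is not implemented."""

    def fit(self, X, y, sensitive):
        self.models_ = {
            value: GaussianNB().fit(X[sensitive == value], y[sensitive == value])
            for value in np.unique(sensitive)
        }
        return self

    def predict(self, X, sensitive):
        # Assumes integer class labels for simplicity.
        y_pred = np.empty(len(X), dtype=int)
        for value, model in self.models_.items():
            mask = sensitive == value
            if mask.any():
                y_pred[mask] = model.predict(X[mask])
        return y_pred
```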

How the machine ‘thinks’: Understanding opacity in machine learning algorithms

This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news…

The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory safeguards in the European Union

With pattern-based categorizations in data-driven profiling, safeguards such as the Charter of Fundamental Rights of the European Union or the EU data-protection framework essentially lose their applicability, leading to a diminishing role of the tools of the anti-discrimination framework.

Exploring Discrimination: A User-centric Evaluation of Discrimination-Aware Data Mining

In a user study administered via Mechanical Turk, it is shown that tools such as DCUBE-GUI can successfully assist novice users in exploring discrimination in data mining.

Big Data's Disparate Impact

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with.

Judgment under Uncertainty: Heuristics and Biases

Three heuristics that are employed in making judgements under uncertainty are described: representativeness; availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and adjustment from an anchor.