• Corpus ID: 34735320

A Step Towards Accountable Algorithms?: Algorithmic Discrimination and the European Union General Data Protection

@inproceedings{Goodman2016AST,
  title={A Step Towards Accountable Algorithms?: Algorithmic Discrimination and the European Union General Data Protection},
  author={Bryce Goodman},
  year={2016}
}
Algorithms, and the data they process, play an increasingly important role in decisions with significant consequences for human welfare. This trend has given rise to calls for greater accountability in algorithm design and implementation, and concern over the emergence of algorithmic discrimination. In that spirit, this paper asks whether and to what extent the European Union’s recently adopted General Data Protection Regulation (GDPR) successfully addresses algorithmic discrimination. As the… 
Multi-layered explanations from algorithmic impact assessments in the GDPR
TLDR
It is argued that the impact assessment process plays a crucial role in connecting internal company heuristics and risk mitigation to outward-facing rights, and in forming the substance of several kinds of explanations.
Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations
TLDR
This paper addresses how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance, and calls for a Model Algorithmic Impact Assessment in the context of the GDPR.
Algorithms: transparency and accountability
  • Christina Blacklaws
  • Law
    Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • 2018
TLDR
This paper explores the issues of accountability and transparency in relation to the growing use of machine learning algorithms, and asks whether the legal system will be able to adapt to rapid technological change.
Framework for developing algorithmic fairness
TLDR
A framework for defining a fair algorithm metric is proposed by compiling information and propositions from various papers into a single summarized list of fairness requirements (akin to a guideline), so that researchers can adopt it as a foundation or reference when developing their own interpretation of algorithmic fairness.
Bureaucracy as a Lens for Analyzing and Designing Algorithmic Systems
TLDR
This essay presents algorithms as analogous to impartial bureaucratic rules for controlling action, and argues that discretionary decision-making power in algorithmic systems accumulates at locations where uncertainty about the operation of algorithms persists.
Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges
TLDR
In the future, research in algorithmic decision making systems should be aware of data and developer biases and add a focus on transparency to facilitate regular fairness audits.
Accountability in AI: From Principles to Industry-specific Accreditation
TLDR
It is argued that the present ecosystem is unbalanced, with a need for improved transparency via AI explainability and adequate documentation and process formalisation to support internal audit, leading eventually to external accreditation processes.
Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations
TLDR
This article considers the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS and concludes that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
Ethics-Based Auditing of Automated Decision-Making Systems: Intervention Points and Policy Implications
TLDR
To support the emergence of feasible and effective EBA procedures, policymakers and regulators could provide standardised reporting formats, facilitate knowledge exchange, provide guidance on how to resolve normative tensions, and create an independent body to oversee EBA of ADMS.
Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability
TLDR
This work proposes a wrapper that, given a black-box model, enriches its output prediction with a measure of uncertainty, and advocates for a rejection system that selects the more confident predictions and discards the more uncertain ones, improving the trustworthiness of the resulting system.
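The rejection mechanism summarised above can be illustrated with a minimal sketch. The `UncertaintyWrapper` class, the entropy-based uncertainty score, and the `threshold` parameter below are illustrative assumptions for exposition; the paper's own measure is derived from a Dirichlet distribution over the black-box output, which this sketch does not reproduce.

```python
import numpy as np

class UncertaintyWrapper:
    """Illustrative sketch: attach an uncertainty score to a black-box
    classifier's predictions and reject the least confident ones."""

    def __init__(self, predict_proba, threshold=0.5):
        self.predict_proba = predict_proba  # black box: X -> class probabilities
        self.threshold = threshold          # reject predictions more uncertain than this

    def predict_with_rejection(self, X):
        probs = self.predict_proba(X)                    # shape (n_samples, n_classes)
        entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
        uncertainty = entropy / np.log(probs.shape[1])   # normalise to [0, 1]
        labels = probs.argmax(axis=1)
        accepted = uncertainty <= self.threshold         # keep only the confident predictions
        return labels, uncertainty, accepted
```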

References

SHOWING 1-10 OF 42 REFERENCES
Big Data's Disparate Impact
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with.
Certifying and Removing Disparate Impact
TLDR
This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
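The predictability test mentioned in this summary can be sketched as follows. The helper name `disparate_impact_signal`, the choice of logistic regression, and the balanced-accuracy scoring are assumptions made for illustration; the paper's actual certification procedure is built around a balanced error rate and carries formal guarantees that this sketch does not reproduce.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def disparate_impact_signal(X, protected, cv=5):
    """How well can the protected class be predicted from the other attributes?
    Balanced accuracy well above 0.5 suggests the remaining features encode the
    protected attribute, so a model trained on them may produce disparate impact
    even without ever seeing that attribute directly."""
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, protected, cv=cv, scoring="balanced_accuracy")
    return scores.mean()
```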
European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"
TLDR
It is argued that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
How the machine ‘thinks’: Understanding opacity in machine learning algorithms
This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news…
Discrimination-aware data mining
TLDR
This approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.
Auditor Independence, Incomplete Contracts and the Role of Legal Liability
We develop a model in which there is conflict of interest between the management and the shareholders of an organization. Incompleteness of contracts prevents a simple contracting solution to this…
A study of top-k measures for discrimination discovery
TLDR
This work studies to what extent the sets of top-k ranked rules agree for any pair of measures, including risk difference, risk ratio, odds ratio, and a few others.
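The measures named in this summary have simple closed forms over group-level rates. A minimal sketch follows, assuming `p_protected` and `p_unprotected` denote the rate of the negative decision in the protected and unprotected groups respectively; the argument names and the example figures are illustrative, not taken from the paper.

```python
def discrimination_measures(p_protected, p_unprotected):
    """Three of the measures compared in the paper, given the rate of the
    negative decision in the protected group and in the unprotected group."""
    risk_difference = p_protected - p_unprotected
    risk_ratio = p_protected / p_unprotected
    odds_ratio = (p_protected / (1.0 - p_protected)) / (p_unprotected / (1.0 - p_unprotected))
    return {"risk_difference": risk_difference,
            "risk_ratio": risk_ratio,
            "odds_ratio": odds_ratio}

# e.g. 30% of the protected group vs 10% of the unprotected group denied the benefit:
# discrimination_measures(0.30, 0.10)
# -> roughly {'risk_difference': 0.2, 'risk_ratio': 3.0, 'odds_ratio': 3.86}
```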
Data preprocessing techniques for classification without discrimination
TLDR
This paper surveys and extends existing data preprocessing techniques, namely suppression of the sensitive attribute, massaging the dataset by changing class labels, and reweighing or resampling the data to remove discrimination without relabeling instances, and presents the results of experiments on real-life data.
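Of the preprocessing techniques listed, reweighing is the simplest to sketch: each instance is weighted so that, after weighting, the sensitive attribute and the class label are statistically independent. The pandas helper and the default column names below are illustrative assumptions, not the authors' reference implementation.

```python
import pandas as pd

def reweighing_weights(df, sensitive="sex", label="outcome"):
    """Sketch of reweighing: weight each (sensitive value, class label) group by
    expected probability / observed probability, so that after weighting the
    sensitive attribute and the label are independent."""
    n = len(df)
    p_s = df[sensitive].value_counts(normalize=True)   # P(S = s)
    p_y = df[label].value_counts(normalize=True)       # P(Y = y)
    p_sy = df.groupby([sensitive, label]).size() / n   # P(S = s, Y = y)

    def weight(row):
        s, y = row[sensitive], row[label]
        return (p_s[s] * p_y[y]) / p_sy[(s, y)]

    return df.apply(weight, axis=1)

# Example: toy data where the positive label is rarer for one group.
df = pd.DataFrame({"sex": ["f", "f", "f", "m", "m", "m"],
                   "outcome": [0, 0, 1, 1, 1, 0]})
df["weight"] = reweighing_weights(df)
```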
Machine Learning Forecasts of Risk to Inform Sentencing Decisions
TLDR
There is now a substantial and compelling literature in statistics and computer science showing that machine learning statistical procedures will forecast at least as well as, and typically more accurately than, older approaches commonly derived from various forms of regression analysis.