Enchanted Determinism: Power without Responsibility in Artificial Intelligence

by Alexander Campolo and Kate Crawford, Engaging Science, Technology, and Society
Deep learning techniques are growing in popularity within the field of artificial intelligence (AI). These approaches identify patterns in large-scale datasets and make classifications and predictions that have been celebrated as more accurate than those of humans. But for a number of reasons, including the nonlinear path from inputs to outputs, there is a dearth of theory that can explain why deep learning techniques work so well at pattern detection and prediction. Claims about “superhuman…


“AI will fix this” – The Technical, Discursive, and Political Turn to AI in Governing Communication

The article positions the current turn to AI in the longstanding motif of the “technological fix” in the relationship between technology and society, and identifies a discursive turn to responsibility in platform governance as a key driver for AI and automation.

Turning biases into hypotheses through method: A logic of scientific discovery for machine learning

It is argued that bridging the gap in the understanding of ML models and their reasonableness requires a focus on developing an improved methodology for their creation, and suggests embedding ML in a general logic of scientific discovery similar to the one presented by Charles Sanders Peirce.

The Nooscope manifested: AI as instrument of knowledge extractivism

The assembly line of machine learning: data, algorithm, model, dataset, training dataset, and the social origins of machine intelligence.

Truth from the machine: artificial intelligence and the materialization of identity

Critics now articulate their worries about the technologies, social practices and mythologies that comprise Artificial Intelligence (AI) in many domains. In this paper, we investigate the…

A Conversational Interface for interacting with Machine Learning models

An approach, based on a conversational chatbot, whose main goal is to improve the ability of an ML system to explain its decisions; the user can interact with the chatbot and question the model about its predictions, gaining greater influence over the model and a better understanding of how it works.

Prediction as Extraction of Discretion

I argue that data-driven predictions work primarily as instruments for systematic extraction of discretionary power – the practical capacity to make everyday decisions and define one's situation.

The Society of Algorithms

Discusses the rise of a new occupational class, the “coding elite,” which has consolidated power through its technical control over the digital means of production and by extracting labor from a newly marginalized or unpaid workforce, the “cybertariat.”

Causal Campbell-Goodhart's law and Reinforcement Learning

Through a simple example, it is shown that off-the-shelf deep Reinforcement Learning algorithms are not necessarily immune to Campbell-Goodhart's law, and that naive application of RL to complex real-life problems can result in the same types of policy errors that humans make.

Think Differently We Must! An AI Manifesto for the Future

There is a problematic tradition of dualistic and reductionist thinking in artificial intelligence (AI) research, which is evident in AI storytelling and imaginations as well as in public debates…

Dispositions towards automation: Capital, technology, and labour relations in aeromobilities

With the rapid rise of supercomputers, artificial intelligence, and advanced forms of robotics, recent years have seen a resurgence in interest in automation in the academy. In geography, scholars…



The Intuitive Appeal of Explainable Machines

It is shown that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Intriguing properties of neural networks

It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
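The linearity argument above motivates the paper's fast gradient sign method (FGSM): perturb an input a small step in the direction of the sign of the loss gradient with respect to that input. A minimal sketch on a toy logistic-regression model (the weights, input, and epsilon below are illustrative values, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # binary cross-entropy loss of a linear classifier on a single example
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # gradient of the cross-entropy loss with respect to the INPUT x
    # (for logistic regression this is (p - y) * w in closed form)
    grad_x = (sigmoid(w @ x) - y) * w
    # step each input coordinate by eps in the gradient's sign direction
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])   # illustrative model weights
x = np.array([0.2, -0.4, 1.0])   # clean input, true label 1
y = 1.0
x_adv = fgsm(w, x, y, eps=0.1)
# the perturbed input x_adv incurs a strictly higher loss than x,
# even though each coordinate moved by at most eps
```

Because the perturbation budget is an L-infinity ball of radius eps, every coordinate shifts by exactly eps here; on deep networks the same one-step attack reliably raises the loss, which the paper attributes to their locally linear behavior.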

How the machine ‘thinks’: Understanding opacity in machine learning algorithms

This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news

The Scored Society: Due Process for Automated Predictions

Procedural regularity is essential for those stigmatized by “artificially intelligent” scoring systems, and regulators should be able to test scoring systems to ensure their fairness and accuracy.

Mastering the game of Go without human knowledge

An algorithm based solely on reinforcement learning is introduced, without human data, guidance or domain knowledge beyond game rules, that achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

The Disparate Effects of Strategic Manipulation

This paper adapts models of strategic manipulation to capture dynamics that may arise in a setting of social inequality wherein candidate groups face different costs to manipulation, and finds that whenever one group's costs are higher than the other's, the learner's equilibrium strategy exhibits an inequality-reinforcing phenomenon.

Deep Learning

Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.

On the Expressive Power of Deep Neural Networks

We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute.

The Expressive Power of Neural Networks: A View from the Width

It is shown that there exist classes of wide networks that cannot be realized by any narrow network whose depth is no more than a polynomial bound, and that narrow networks whose size exceeds the polynomial bound by a constant factor can approximate wide, shallow networks with high accuracy.
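The positive half of this width-based view can be stated compactly. Restated here from memory of the paper's universal approximation theorem for width-bounded ReLU networks (with $n$ the input dimension and depth left unbounded), so the exact constant should be checked against the original:

```latex
\text{For every Lebesgue-integrable } f : \mathbb{R}^n \to \mathbb{R}
\text{ and every } \varepsilon > 0, \text{ there exists a fully connected}
\text{ ReLU network } F \text{ of width at most } n + 4 \text{ such that}
\int_{\mathbb{R}^n} \lvert f(x) - F(x) \rvert \, dx < \varepsilon .
```

The contrast with the negative result above is the point: bounded width with growing depth retains universal approximation, while bounded depth with narrow width provably does not.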