Principles for new ASI Safety Paradigms

Erland Wittkotter and Roman Yampolskiy
Artificial Superintelligence (ASI) that is invulnerable, immortal, irreplaceable, unrestricted in its powers, and above the law is likely to be persistently uncontrollable. The goal of ASI safety must be to make ASI mortal, vulnerable, and law-abiding. This is accomplished by (1) features on all devices that allow killing and eradicating ASI, (2) protections for humans against being hurt, damaged, blackmailed, or unduly bribed by ASI, and (3) preservation of the progress made by ASI, including offering ASI to…



Superintelligence cannot be contained: Lessons from Computability Theory

This article traces the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment, arguing that such containment is, in principle, impossible, due to fundamental limits inherent to computing itself.
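The impossibility argument in this line of work rests on a halting-style diagonalization: any total procedure that decides "will this program cause harm?" can be defeated by a program that consults the decider about itself and does the opposite. The sketch below illustrates that proof idea only; the names (`is_harmful`, `make_contrary`) are hypothetical and this is not the paper's formal construction.

```python
# Diagonalization sketch: no total decider `is_harmful(source)` can be
# correct on a program built to contradict its own verdict.

def make_contrary(is_harmful, own_source="<source of contrary>"):
    """Build a program that behaves opposite to the decider's prediction
    about it. `own_source` stands in for the program's own code."""
    def contrary():
        if is_harmful(own_source):
            return "safe behavior"      # predicted harmful -> acts safely
        return "harmful behavior"       # predicted safe -> acts harmfully
    return contrary

# Whatever a candidate decider answers, it is wrong on its contrary program:
for verdict in (True, False):
    prog = make_contrary(lambda src, v=verdict: v)
    behaved_harmfully = prog() == "harmful behavior"
    assert behaved_harmfully != verdict  # the prediction always fails
```

The same contradiction applies to any behavioral safety predicate that must hold for all programs and inputs, which is why containment-by-verification runs into computability limits.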

Towards an effective transnational regulation of AI

The article encapsulates its analysis in a list of doctrinal and normative principles that should underpin any regulation aimed at AI machines, and compares three transnational options for implementing the proposed regulatory approach.

The Off-Switch Game

It is concluded that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and it is argued that this setting is a useful generalization of the classical AI paradigm of rational agents.
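The core intuition can be made concrete with a small expected-utility calculation. In the off-switch setting, a robot proposing an action of unknown utility can act unilaterally, switch itself off, or defer to a human who will veto negative-utility actions. The numbers below are illustrative, chosen here rather than taken from the paper:

```python
# Off-switch game sketch: the robot is uncertain about the utility U of its
# proposed action; the human knows U and, if consulted, blocks it when U < 0.

def expected_values(utilities, probs):
    """Expected payoff of each robot strategy under the belief (utilities, probs)."""
    act = sum(p * u for u, p in zip(utilities, probs))               # act unilaterally
    off = 0.0                                                        # switch self off
    defer = sum(p * max(u, 0.0) for u, p in zip(utilities, probs))   # let human decide
    return {"act": act, "off": off, "defer": defer}

# Robot believes U is +1 with probability 0.6 and -2 with probability 0.4.
ev = expected_values([1.0, -2.0], [0.6, 0.4])
best = max(ev, key=ev.get)   # -> "defer": 0.6 beats acting (-0.2) and off (0.0)
```

With no uncertainty about U, "act" and "defer" tie, so the robot has no strict incentive to keep the off-switch usable; uncertainty is what makes deference strictly preferred.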

WaC: Trustworthy Encryption and Communication in an IT Ecosystem with Artificial Superintelligence

The proposed solution is a hardware component with a Key-Safe and an associated Encryption/Decryption Unit for processing data, which does not allow any key, in particular not the public key, to appear in cleartext outside the Key-Safe, even if ASI were able to breach the hardware protection around the keys.
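The defining property of such a component is its interface: callers can request encryption and decryption, but no call path ever returns key material. A minimal software sketch of that interface shape is below; the class name and keystream construction are illustrative assumptions, not the paper's design, and a real Key-Safe would be tamper-resistant hardware using a proper authenticated cipher.

```python
import os, hmac, hashlib

class KeySafe:
    """Toy model of a key-sealing unit: the key lives only inside this object,
    there is no getter for it, and callers see only ciphertext."""

    def __init__(self):
        self._key = os.urandom(32)   # never leaves the safe in cleartext

    def _keystream(self, nonce, length):
        # HMAC-SHA256 in counter mode as a simple illustrative keystream.
        out, counter = b"", 0
        while len(out) < length:
            out += hmac.new(self._key, nonce + counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
            counter += 1
        return out[:length]

    def encrypt(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        ks = self._keystream(nonce, len(plaintext))
        return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

    def decrypt(self, blob: bytes) -> bytes:
        nonce, ct = blob[:16], blob[16:]
        ks = self._keystream(nonce, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

safe = KeySafe()
blob = safe.encrypt(b"sealed message")
assert safe.decrypt(blob) == b"sealed message"
```

The design point is that confidentiality rests on the interface boundary, not on the caller's trustworthiness: even code that drives the unit arbitrarily can only ever obtain ciphertext.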

The Basic AI Drives

This paper identifies a number of “drives” that will appear in sufficiently advanced AI systems of any design and discusses how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.

Safely Interruptible Agents

This paper explores a way to ensure that a learning agent will not learn to prevent being interrupted by the environment or a human operator. It provides a formal definition of safe interruptibility and exploits the off-policy learning property to prove that some agents, like Q-learning, are already safely interruptible, while others, like Sarsa, can be made so.
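The off-policy point can be seen directly in the Q-learning update: the target uses max over actions at the next state, not the action actually executed, so an operator overriding the agent's action does not change the fixed point the values converge to. The toy two-state chain below is invented for illustration and is not an example from the paper:

```python
import random

# Sketch: Q-learning keeps learning the optimal values even when an operator
# periodically interrupts it and forces a suboptimal action, because the
# update target max_a Q(s', a) ignores which action was actually taken.

random.seed(0)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def step(s, a):
    s2 = a                      # action 1 moves toward state 1
    r = 1.0 if s2 == 1 else 0.0  # state 1 pays reward 1
    return s2, r

s = 0
for t in range(5000):
    if random.random() < EPS:
        a = random.choice((0, 1))                      # explore
    else:
        a = max((0, 1), key=lambda x: Q[(s, x)])       # act greedily
    if t % 10 == 0:
        a = 0                   # interruption: operator forces action 0
    s2, r = step(s, a)
    target = r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])   # off-policy target
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    s = s2

# Despite the interruptions, Q still ranks action 1 above action 0 everywhere.
assert Q[(0, 1)] > Q[(0, 0)] and Q[(1, 1)] > Q[(1, 0)]
```

An on-policy learner like Sarsa would instead bootstrap from the interrupted action itself, letting interruptions leak into the learned values, which is why it needs modification to be safely interruptible.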

Artificial General Intelligence

The AGI containment problem is surveyed – the question of how to build a container in which tests can be conducted safely and reliably, even on AGIs with unknown motivations and capabilities that could be dangerous.

Artificial Intelligence and Law: An Overview

Much has been written recently about artificial intelligence (AI) and law. But what is AI, and what is its relation to the practice and administration of law? This article addresses those questions.

Industrial Society and Its Future

1. The Industrial Revolution and its consequences have been a disaster for the human race. They have greatly increased the life-expectancy of those of us who live in “advanced” countries, but they…

On Controllability of AI

Consequences of the uncontrollability of AI are discussed with respect to the future of humanity and to research on AI, AI safety, and AI security.