Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches

  • Colin Allen, Iva Šmit, Wendell Wallach
  • Ethics and Information Technology
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is… 
Artificial morality: Making of the artificial moral agents
Artificial Morality is a new, emerging interdisciplinary field that centres on the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems.
Top-down approach to compare the moral theories of deontology and utilitarianism in Pac-Man game setting
The processes underlying important decisions in many areas of our everyday lives are becoming increasingly automated. In the near future, many such decisions will be made by autonomous artificial agents.
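The contrast the Pac-Man study draws between deontological and utilitarian decision rules can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the action names, utilities, and the forbidden-action set are all invented for the example.

```python
# Toy contrast between two decision rules for an artificial agent:
# a deontological agent filters out rule-violating actions before
# maximizing, while a utilitarian agent maximizes utility outright.
# All names and numbers here are hypothetical.

def deontological_choice(actions, forbidden):
    """Pick the highest-utility action that violates no rule."""
    permitted = [a for a in actions if a["name"] not in forbidden]
    return max(permitted, key=lambda a: a["utility"])["name"]

def utilitarian_choice(actions):
    """Pick the action with the greatest utility, rules aside."""
    return max(actions, key=lambda a: a["utility"])["name"]

actions = [
    {"name": "eat_ghost", "utility": 10},   # high payoff but rule-breaking
    {"name": "eat_pellet", "utility": 3},
    {"name": "wait", "utility": 0},
]
forbidden = {"eat_ghost"}  # a hypothetical deontological constraint

print(deontological_choice(actions, forbidden))  # eat_pellet
print(utilitarian_choice(actions))               # eat_ghost
```

The two rules diverge exactly when the highest-utility action is forbidden, which is the kind of case such a game setting is designed to surface.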
Artificial Moral Agents: Creative, Autonomous, Social. An Approach Based on Evolutionary Computation
A model of artificial normative agency that accommodates crucial social competencies expected of artificial moral agents, showing how virtue ethics (VE) and evolutionary computation (EC) are better suited to a "social approach" to AMAs than the standard approaches based on deontological or consequentialist models implemented through standard computational tools.
Computationally rational agents can be moral agents
The argument for computational rationality as an integrative element that effectively combines the philosophical and computational aspects of artificial moral agency logically leads to a philosophically coherent and scientifically consistent model for building artificial moral agents.
Implementation of Moral Uncertainty in Intelligent Machines
This paper argues that the proper response to the development of artificial intelligence is to design machines to be fundamentally uncertain about morality, and describes a computational framework for doing so that efficiently resolves common obstacles to the implementation of moral philosophy in intelligent machines.
Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency
This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for the AAMA, and compares it with other models of artificial normative agency.
Ethics by Agreement in Multi-agent Software Systems
It is argued that moral responsibility for a situation will increasingly lie with multiple actors, so a designed approach will not suffice, and that an emergence-based approach offers a better alternative.
Making moral machines: why we need artificial moral agents
This paper develops a comprehensive analysis of the relevant arguments for and against creating AMAs, and argues that, all things considered, there are strong reasons to continue to responsibly develop AMAs.
On the Moral Equality of Artificial Agents
The author develops a respect-based account of the ethical criteria for the moral status of persons and employs an empirical test that must be passed for artificial agents to be considered alongside persons as having the corresponding rights and duties.
Implementing moral decision making faculties in computers and robots
This paper offers a brief overview of the many dimensions of this new field of inquiry, variously called machine ethics, machine morality, artificial morality, or computational morality.


Artificial Morality: Virtuous Robots for Virtual Games
From the Publisher: Artificial Morality shows how to build moral agents that succeed in competition with amoral agents. Peter Danielson's agents deviate from the received theory of rational choice.
Artificial Morality: Bounded Rationality, Bounded Morality and Emotions
A central question in the development and design of artificial moral agents is whether the absence of emotions, and the capacity of computer systems to manage large quantities of information and
Modeling Rationality, Morality, and Evolution
This collection focuses on questions that arise when morality is considered from the perspective of recent work on rational choice and evolution. Linking questions like "Is it rational to be moral?"
On the Morality of Artificial Agents
There is substantial and important scope, particularly in Computer Ethics, for a concept of moral agent that does not necessarily exhibit free will, mental states, or responsibility, and for extending the class of agents and moral agents to embrace artificial agents (AAs).
Prolegomena to any future artificial moral agent
The ethical disputes are surveyed, the possibility of a ‘moral Turing Test’ is considered and the computational difficulties accompanying the different types of approach are assessed.
Information ethics: On the philosophical foundation of computer ethics
  • L. Floridi
  • Philosophy
    Ethics and Information Technology
  • 2004
The essential difficulty with Computer Ethics' (CE) philosophical status is methodological: standard ethical theories cannot easily be adapted to deal with CE-problems, which appear to
The evolution of cooperation.
A model is developed based on the concept of an evolutionarily stable strategy in the context of the Prisoner's Dilemma game to show how cooperation based on reciprocity can get started in an asocial world, can thrive while interacting with a wide range of other strategies, and can resist invasion once fully established.
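The reciprocity mechanism described in this abstract is commonly illustrated with tit-for-tat in the iterated Prisoner's Dilemma. The sketch below is not Axelrod's code; it uses the standard payoff values (T=5, R=3, P=1, S=0) and a round count chosen only for illustration.

```python
# Minimal iterated Prisoner's Dilemma sketch: reciprocity (tit-for-tat)
# sustains cooperation with another reciprocator, while unconditional
# defection gains only a one-round advantage against it.

# Payoffs as (row, column) for moves (Cooperate, Defect).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return cumulative scores for both strategies over `rounds` rounds."""
    hist_a, hist_b = [], []  # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Two reciprocators sustain mutual cooperation; against a defector,
# tit-for-tat concedes only the first round, then retaliates.
print(play(tit_for_tat, tit_for_tat))     # (30, 30)
print(play(tit_for_tat, always_defect))   # (9, 14)
```

The second result shows the "resist invasion" point: the defector's edge is capped at a single exploited round, after which both strategies collect only the punishment payoff.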
Ethics and second-order cybernetics
I am touched by the generosity of the organizers of this conference who not only invited me to come to your glorious city of Paris, but also gave me the honor of opening the Plenary sessions with my
Computing Machinery and Intelligence
  • A. Turing
  • Philosophy
    The Philosophy of Artificial Intelligence
  • 1990
If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.