The ethics of designing artificial agents

@article{Grodzinsky2008TheEO,
  title={The ethics of designing artificial agents},
  author={Frances S. Grodzinsky and Keith W. Miller and Marty J. Wolf},
  journal={Ethics and Information Technology},
  year={2008},
  volume={10},
  pages={115--121}
}
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who… 
Un-making artificial moral agents
Floridi and Sanders' seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as…
Robots: ethical by design
This paper proposes to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system, and addresses the assurance of safety in modern High Reliability Organizations through responsibility distribution.
Human Goals Are Constitutive of Agency in Artificial Intelligence (AI)
The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be
A model of autonomy for artificial agents
An increasing amount of tasks and responsibilities are being delegated to artificial agents. In areas such as healthcare, traffic, the household, and the military, artificial agents are being adopted
Artificial moral agents: moral mentors or sensible tools?
  • Fabio Fossa, Philosophy, Ethics and Information Technology, 2018
It is argued that, although the Continuity Approach turns out to be a necessary postulate to the machine ethics project, the Discontinuity Approach highlights a relevant distinction between AMAs and human moral agents.
Do others mind? Moral agents without mental states
As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities
Software Agents, Anticipatory Ethics, and Accountability
This chapter takes up a case study of the accountability issues around increasingly autonomous computer systems. In this early phase of their development, certain computer systems are being referred
Embedding Values in Artificial Intelligence (AI) Systems
  • I. Poel, Computer Science, Minds Mach., 2020
An account for determining when an AI system can be said to embody certain values is proposed, which understands embodied values as the result of design activities intended to embed those values in such systems.
The Machine Question: Critical Perspectives on AI, Robots, and Ethics
One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question"--consideration of the
Perspectives about artificial moral agents
This empirical study explores controversial matters of Machine Ethics by surveying AI Ethics scholars with the aim of establishing a more coherent and informed debate about AMAs, and shows the wide breadth of viewpoints and approaches to artificial morality.

References

On the Morality of Artificial Agents
There is substantial and important scope, particularly in Computer Ethics, for a concept of moral agent that does not necessarily exhibit free will, mental states, or responsibility, and for extending the class of agents and moral agents to embrace AAs.
Explanation Exploration: Exploring Emergent Behavior
A new method for gathering insight into emergent behavior in simulations using the model adaptation technique COERCE, which allows a user to efficiently adapt a model to meet new requirements and can be employed to explore emergent behavior exhibited in a simulation.
Emergent algorithms-a new method for enhancing survivability in unbounded systems
  • D. Fisher, H. Lipson, Computer Science, Proceedings of the 32nd Annual Hawaii International Conference on Systems Sciences (HICSS-32), 1999
The need for and importance of survivability are discussed, “unbounded network” is defined, and the characteristics that differentiate survivability from other software quality attributes and nonfunctional properties of systems are examined.
A Study of Synthetic Creativity: Behavior Modeling and Simulation of an Ant Colony
Two new creative types of foraging behavior are introduced and then, through computer simulation, their impact on performance is measured through innovative metric design.
Evaluation of safety-critical software
Methods and approaches for testing the reliability and trustworthiness of software remain among the most controversial issues facing this age of high technology. The authors present some of the
Intentionality. In Stanford Encyclopedia of Philosophy, 2003. http://plato.stanford.edu/entries/intentionality. Accessed April 2007.