As machines become capable of more autonomous and intelligent behavior, will they also display more morally desirable behavior? Earth's history tends to suggest that increasing intelligence, knowledge, and rationality will result in more cooperative and benevolent behavior. Animals with sophisticated nervous systems track and punish exploitative behavior …
Some researchers in the field of machine ethics have suggested consequentialist or utilitarian theories as organizing principles for Artificial Moral Agents (AMAs) (Wallach, Allen, and Smit 2008) that are 'full ethical agents' (Moor 2006), while acknowledging extensive variation among these theories as a serious challenge (Wallach, Allen, and Smit 2008).
The developing academic field of machine ethics seeks to make artificial agents safer as they become more pervasive throughout society. Motivated by planned next-generation robotic systems, machine ethics typically explores solutions for agents with autonomous capacities intermediate between those of current artificial agents and humans, with designs …
A number of commentators have argued that some time in the 21st century humanity will develop generally intelligent software programs at least as capable as skilled humans, whether designed ab initio or as emulations of human brains, and that such entities will launch an extremely rapid technological transformation as they design their own successors. …
In 1965, I. J. Good proposed that machines would one day be smart enough to make themselves smarter. Having made themselves smarter, they would spot still further opportunities for improvement, quickly leaving human intelligence far behind (Good 1965). He called this the "intelligence explosion." Later authors have called it the "technological …
This paper presents a simple model of an AI (artificial intelligence) arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivised to finish first—by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this …
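The race dynamic described in that abstract can be illustrated with a toy game (a minimal sketch, not the paper's actual model: the two-team restriction, the discrete safety levels, and all payoff parameters below are assumptions chosen for illustration). Each team picks a safety level; lower safety speeds development and raises the chance of finishing first, but also raises the chance of a disaster that wipes out everyone's payoff. Enumerating best responses finds the pure-strategy Nash equilibrium.

```python
import itertools

# Toy two-team AI race (illustrative assumptions only; not the paper's model).
# Each team chooses a safety level s in {0.0, 0.5, 1.0}; 1.0 is maximally safe.
LEVELS = [0.0, 0.5, 1.0]
PRIZE = 1.0   # payoff to the winner if no disaster occurs
RISK = 0.1    # per-team disaster risk contributed by each unit of skipped safety

def payoff(s_i, s_j):
    """Expected payoff to a team choosing safety s_i against an opponent at s_j."""
    speed_i, speed_j = 1.0 - 0.5 * s_i, 1.0 - 0.5 * s_j
    win_prob = speed_i / (speed_i + speed_j)             # faster team wins more often
    p_safe = 1.0 - RISK * (1.0 - s_i) - RISK * (1.0 - s_j)  # both teams' caution matters
    return win_prob * p_safe * PRIZE

def nash_equilibria():
    """Enumerate pure-strategy Nash equilibria: profiles where each side best-responds."""
    eqs = []
    for s1, s2 in itertools.product(LEVELS, repeat=2):
        best1 = all(payoff(s1, s2) >= payoff(a, s2) for a in LEVELS)
        best2 = all(payoff(s2, s1) >= payoff(a, s1) for a in LEVELS)
        if best1 and best2:
            eqs.append((s1, s2))
    return eqs
```

With these assumed numbers, skimping on safety (s = 0.0) strictly dominates for each team, so the unique equilibrium has both teams racing at minimal safety, even though mutual caution would give every team a higher expected payoff — the prisoner's-dilemma structure the abstract points to.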
Several authors have made the argument that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level artificial intelligence in the not-too-distant future. This evolutionary argument, however, has ignored the observation selection effect that guarantees that observers …