The Illusion of Agency: The Influence of the Agency of an Artificial Agent on Its Persuasive Power

Cees J. H. Midden, Jaap Ham
Artificial social agents can influence people. However, artificial social agents are not real humans, and people may ascribe less agency to them. Would the persuasive power of a social robot diminish when people ascribe little agency to it? To investigate this question, we performed an experiment in which participants performed tasks on a washing machine and received feedback from a robot about their energy consumption (e.g., "Your energy consumption is too high"), or factual, non-social…
Lonely and Susceptible: The Influence of Social Exclusion and Gender on Persuasion by an Artificial Agent
Results did not support the expectation that socially excluded people ascribe more human-likeness to an artificial agent, but they did show the expected effects on behavior change, indicating the importance of including a person’s psychological state in the design of human–agent interactions.
Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents
The first comparison of people's moral judgments about human and robot agents is reported, finding that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a “utilitarian” choice), and they were blamed more than their human counterparts when they did not make that choice.
Shall I Show You Some Other Shirts Too? The Psychology and Ethics of Persuasive Robots
  • Jaap Ham, A. Spahn
  • Psychology
    A Construction Manual for Robots' Ethical Systems
  • 2015
The current chapter brings together psychological and ethical expertise to investigate how persuasive robots can influence human behaviour and thinking in a way that is morally acceptable and psychologically effective.
Conforming to an Artificial Majority: Persuasive Effects of a Group of Artificial Agents
It is argued that conformity effects could occur not only with human majorities, but also with artificial majorities consisting of smart agents or computers, and that applying majorities of artificial agents opens up a new domain of persuasive technology.
Cheating with robots: how at ease do they make us feel?
An investigation of whether people will cheat in the presence of a robot, and to what extent this depends on the role the robot plays, found that participants cheated significantly more than chance when they were alone or with a robot giving instructions.
A Bayesian Analysis of Moral Norm Malleability during Clarification Dialogues
A preliminary Bayesian analysis of empirical data is presented suggesting that the architectural status quo of clarification request generation systems may cause robots to unintentionally miscommunicate their ethical intentions and weaken humans’ contextual application of moral norms.
AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma
Even though morally competent artificial agents have yet to emerge in society, we need insights from empirical science into how people will respond to such agents and how these responses should
Motions of Robots Matter! The Social Effects of Idle and Meaningful Motions
The results indicate that social responses increase with the level of social verification in line with the threshold model of social influence, and this model is applied to human-robot interaction.
Comparing Strategies for Robot Communication of Role-Grounded Moral Norms
The results suggest two major findings: reflective exercises may increase the efficacy of role-based moral language and opportunities for moral practice following robots' use of moral language may facilitate role-centered moral cultivation.
Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective
It is argued that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness.


When Artificial Social Agents Try to Persuade People: The Role of Social Agency on the Occurrence of Psychological Reactance
Results showed a positive relationship between the level of social agency of the source of a persuasive message and the amount of psychological reactance the message arouses, and confirmed earlier research on the effects of controlling language on psychological reactance.
Social influence of a persuasive agent: the role of agent embodiment and evaluative feedback
Overall, for men it did not matter whether the feedback was given by a computer or by an embodied agent, but for women it did: women who interacted with the embodied agent used less energy than women who interacted with the computer.
Virtual Humans and Persuasion: The Effects of Agency and Behavioral Realism
Two studies examined whether participant attitudes would change toward positions advocated by an ingroup member even if the latter was known to be an embodied agent; that is, a human-like
The Effect of the Agency and Anthropomorphism on Users' Sense of Telepresence, Copresence, and Social Presence in Virtual Environments
The results support the prediction that people respond socially to both human- and computer-controlled entities, and that the existence of a virtual image increases telepresence.
What Makes Social Feedback from a Robot Work? Disentangling the Effect of Speech, Physical Appearance and Evaluation
Overall, the current research suggests that the addition of only one social cue is sufficient to enhance the persuasiveness of evaluative feedback, while combining both cues does not further enhance persuasiveness.
A robot that says “bad!”: Using negative and positive social feedback from a robotic agent to save energy
  • Jaap Ham, C. Midden
  • Psychology
    2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • 2009
Results indicate stronger persuasive effects of social feedback than of factual feedback or factual evaluative feedback, and of negative feedback more than of positive feedback.
Machines and Mindlessness: Social Responses to Computers
Following Langer (1992), this article reviews a series of experimental studies that demonstrate that individuals mindlessly apply social rules and expectations to computers. The first set of studies
Silicon sycophants: the effects of computers that flatter
The study concludes that the effects of flattery from a computer can produce the same general effects as flattery from humans, as described in the psychology literature.