Corpus ID: 54018325

When Do We Need a Human? Anthropomorphic Design and Trustworthiness of Conversational Agents

@inproceedings{Seeger2017WhenDW,
  title={When Do We Need a Human? Anthropomorphic Design and Trustworthiness of Conversational Agents},
  author={Anna-Maria Seeger and Jella Pfeiffer and Armin Heinzl},
  year={2017}
}
Conversational agents interact with users via the most natural interface: human language. A prerequisite for their successful diffusion across use cases is user trust. Following extant research, it is reasonable to assume that increasing the human-likeness of conversational agents represents an effective trust-inducing design strategy. The present article challenges this assumption by considering an opposing theoretical perspective on the human-agent trust-relationship. Based on an extensive… 

Citations

Do you Feel a Connection? How Human-like Design of Conversational Agents Influences Donation Behavior

This paper explores how a CA for raising funds for non-profit organizations should be designed and how the human-like design of a CA influences users' donation behavior.

Design for Fast Request Fulfillment or Natural Interaction? Insights from an Experiment with a Conversational Agent

The results show that the use of text-based conversational agents reduces perceived humanness and social presence yet does not significantly increase service satisfaction, and indicate that preset answer options might even be detrimental to service satisfaction as they diminish the natural feel of human-CA interaction.

Designing Anthropomorphic Enterprise Conversational Agents

The presentation of the artifact and the synthesis of prescriptive knowledge in the form of a nascent design theory for anthropomorphic enterprise CAs adds to the growing knowledge base for designing human-like assistants and supports practitioners seeking to introduce them into their organizations.

On the Design of and Interaction with Conversational Agents: An Organizing and Assessing Review of Human-Computer Interaction Research

An overview of the status quo of CA research is contributed, four research streams are identified through cluster analysis, and a research agenda comprising six avenues and sixteen directions to move the field forward is proposed.

How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents

Drawing on research in IS and Artificial Intelligence, this work builds and tests a model of user adoption of PIAs leveraging their unique characteristics, and confirms that both perceived intelligence and anthropomorphism are significant antecedents of PIA adoption.

Perceptions on Authenticity in Chat Bots

Results suggest that showcasing a transparent purpose, learning from experience, anthropomorphizing, human-like conversational behavior, and coherence are guiding characteristics for agent authenticity and should consequently allow for and support a better coexistence of artificial intelligence technology with its users.

Perspectives on Socially Intelligent Conversational Agents

The results of a Delphi study highlighting the opinions of 21 multi-disciplinary domain experts exhibit 14 distinctive characteristics of social intelligence, grouped into different levels of consensus, maturity, and abstraction, which may serve as a relevant basis for defining and subsequently developing socially intelligent conversational agents.

The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems

The Moral Integrity Corpus (MIC) is a resource that captures the moral assumptions of 38k prompt-reply pairs using 99k distinct Rules of Thumb (RoTs); it is suggested that MIC will be a useful resource for understanding language models' implicit moral assumptions and for flexibly benchmarking the integrity of conversational agents.

Alexa, Are You Human? Investigating Anthropomorphism of Digital Voice Assistants - A Qualitative Approach

Digital voice assistants, often associated with artificial intelligence in integrated applications or as stationary stand-alone speakers, are on the rise. By integrating humanlike characteristics…

References

Showing 1-10 of 26 references

Almost human: Anthropomorphism increases trust resilience in cognitive agents.

Results showed that anthropomorphic agents were associated with greater trust resilience (a higher resistance to breakdowns in trust), that these effects were magnified by greater uncertainty, and that incorporating human-like trust-repair behavior largely erased differences between the agents.

External manifestations of trustworthiness in the interface

It is argued that interaction rituals among humans, such as greetings, small talk and conventional leavetakings, along with their manifestations in speech and in embodied conversational behaviors, can lead the users of technology to judge the technology as more reliable, competent and knowledgeable – to trust the technology more.

Similarities and differences between human–human and human–automation trust: an integrative review

The trust placed in diagnostic aids by the human operator is a critical psychological factor that influences operator reliance on automation. Studies examining the nature of human interaction with…

On seeing human: a three-factor theory of anthropomorphism.

A theory is described to explain when people are likely to anthropomorphize and when they are not, focused on three psychological determinants: the accessibility and applicability of anthropocentric knowledge, the motivation to explain and understand the behavior of other agents, and the desire for social contact and affiliation.

Evaluating Anthropomorphic Product Recommendation Agents: A Social Relationship Perspective to Designing Information Systems

The findings from a laboratory experiment indicate that using humanoid embodiment and human voice-based communication significantly influences users' perceptions of social presence, which in turn enhances users' trusting beliefs, perceptions of enjoyment, and ultimately, their intentions to use the agent as a decision aid.

Embodied Conversational Agent-Based Kiosk for Automated Interviewing

An automated kiosk was created that uses embodied intelligent agents to interview individuals and detect changes in arousal, behavior, and cognitive effort using psychophysiological information systems; smiling agents were perceived as more likable than agents with a neutral demeanor.

Trusting Humans and Avatars: A Brain Imaging Study Based on Evolution Theory

The major implication of this study is that although interaction on the Internet may have benefits, the lack of real human faces in communication may serve to reduce these benefits, in turn leading to reduced levels of collaboration effectiveness.

The Effects of Personalization and Familiarity on Trust and Adoption of Recommendation Agents

A trust-centered, cognitively and emotionally balanced perspective is taken to study RA adoption, and the effects of perceived personalization and familiarity on cognitive trust and emotional trust in an RA are examined.

Computers are social actors

Five experiments provide evidence that individuals' interactions with computers are fundamentally social, and show that social responses to computers are not the result of conscious beliefs that computers are human or human-like.

Attributions of Trust in Decision Support Technologies: A Study of Recommendation Agents for E-Commerce

This study identifies six reasons why users trust (or do not trust) a technology in the early stages of its use by extending theories of trust formation in interpersonal and organizational contexts to decision support technologies.