A Meta-Analysis of Factors Influencing the Development of Trust in Automation

@article{schaefer_trust_in_automation,
  title={A Meta-Analysis of Factors Influencing the Development of Trust in Automation},
  author={Kristin E. Schaefer and Jessie Y.C. Chen and James L. Szalma and Peter A. Hancock},
  journal={Human Factors: The Journal of Human Factors and Ergonomics Society},
  pages={377--400}
}
Objective: We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built. Background: Trust is increasingly important in the growing need for synergistic human–machine teaming. Thus, we expand on our previous meta-analytic foundation in the field of human–robot interaction to include all of automation interaction. Method: We used meta-analysis to assess trust in automation. Thirty studies provided 164… 


Evolving Trust in Robots: Specification Through Sequential and Comparative Meta-Analyses

The present meta-analysis expands upon previous work and validates the overarching categories of trust antecedents (human-related, robot-related, and contextual), as well as identifying the significant individual precursors to trust within each category.

Individual differences in human–machine trust: A multi-study look at the perfect automation schema

The results of three studies exploring the relationship between the Perfect Automation Schema (PAS) and trust in a human–machine context suggest that the PAS, as measured by high expectations, may be a fruitful construct for researchers in the domain of trust in automation.

Trust in Artificial Intelligence: Meta-Analytic Findings.

Overall, this meta-analysis identified several factors that influence trust, including some that have no bearing on AI performance.

The Effect of Culture on Trust in Automation

This is the first set of studies to address cultural factors across all the cultural syndromes identified in the literature, comparing trust across Honor, Face, and Dignity cultures using a validated cross-cultural measure of trust in automation; these experiments are also the first to study the dynamics of trust across cultures.

Modeling Trust in Human-Robot Interaction: A Survey

Different techniques and methods for trust modeling in HRI are reviewed, along with a list of potential directions for further research and challenges that need to be addressed in future work on human-robot trust modeling.

An Examination of Dispositional Trust in Human and Autonomous System Interactions

This study used an online questionnaire to investigate the influence of personality traits, cultural orientation, and individual differences on dispositional trust, in an effort to map out humans’ baseline trust in autonomous systems; some individual differences exerted a stronger influence on dispositional trust in automation than others.

The Role of Rejection within the Trust Calibration Process: Insights from a Mixed-Methods Human-Robot Teaming Experiment

A narrative account of the trust calibration process and the primacy of rejection within it was extracted, and implications for the design of robotic systems and the training of human-robot teams are discussed.

Personal Influences on Dynamic Trust Formation in Human-Agent Interaction

It is found that users who exhibit higher trust propensities toward humans also develop higher trust toward automated agents in the initial stages of interaction, which offers opportunities to enhance the future design of automated agent systems.

Impact of Agent Reliability and Predictability on Trust in Real Time Human-Agent Collaboration

This work modeled the human-agent trust relationship and demonstrated that users’ trust ratings can be reliably predicted from real-time interaction data.



A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction

Factors related to the robot itself, specifically its performance, had the greatest current association with trust; environmental factors were moderately associated, and there was little evidence for effects of human-related factors.

A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction

Abstract: The purpose of this report is to review the current evidence relating to trust in human-automation interaction and to compare these findings with our prior research on the more specific…

Trust in Automation

A three-layered trust model provides a new lens for conceptualizing the variability of trust in automation and can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust.

Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation.

Two experiments are reported that examined operators' trust in and use of automation in a simulated supervisory process control task; the results suggest that operators' subjective ratings of trust and the properties of the automation can be used to predict and optimize the dynamic allocation of functions in automated systems.

Trust in Automation: Designing for Appropriate Reliance

This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives, and considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust.

Trust in automation. I: Theoretical issues in the study of trust and human intervention in automated systems

A model of human trust in machines is developed, taking models of trust between people as a starting point and extending them to the human-machine relationship, providing a framework for experimental research on trust…

Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management

The results support the benefits of transparency for performance effectiveness without additional costs, will facilitate the implementation of IAs in military settings, and provide useful data for the design of heterogeneous UxV teams.

Not All Trust Is Created Equal: Dispositional and History-Based Trust in Human-Automation Interactions

The results suggest that increased specificity in the conceptualization and measurement of trust is required, that future researchers should assess user perceptions of machine characteristics in addition to actual machine characteristics, and that incorporating user extraversion and propensity to trust machines can improve the prediction of automation use decisions.

Human mismatches and preferences for automation

The research reported in this paper is concerned with gaining a better understanding of human factors issues in machining and the automation of manufacturing tasks. Mismatches between operators'…