Enabling Trust with Behavior Metamodels

  • Scott A. Wallace
  • Published 2007 in Interaction Challenges for Intelligent Assistants

Abstract

Intelligent assistants promise to simplify our lives and increase our productivity. Yet for this promise to become reality, the Artificial Intelligence community will need to address two important issues. The first is how to determine that the assistants we build will, in fact, behave appropriately and safely. The second is how to convince society at large that these assistants are useful and reliable tools that should be trusted with important tasks. In this paper, we argue that both of these issues can be addressed by behavior metamodels (i.e., abstract models of how the agent behaves). Our argument is based 1) on experimental evidence of how metamodels can improve debugging/validation efficiency, and 2) on how metamodels can contribute to three fundamental components of trusting relationships established in previous literature.



Cite this paper

@inproceedings{Wallace2007EnablingTW,
  title     = {Enabling Trust with Behavior Metamodels},
  author    = {Scott A. Wallace},
  booktitle = {Interaction Challenges for Intelligent Assistants},
  year      = {2007}
}