Unpacking Human Teachers’ Intentions for Natural Interactive Task Learning

@article{ramaraj2021unpacking,
  title={Unpacking Human Teachers' Intentions for Natural Interactive Task Learning},
  author={Preeti Ramaraj and Charlie Ortiz and Matthew Evans Klenk and Shiwali Mohan},
  journal={2021 30th IEEE International Conference on Robot \& Human Interactive Communication (RO-MAN)},
  year={2021}
}
  • P. Ramaraj, C. Ortiz, M. E. Klenk, S. Mohan
  • Published 12 February 2021
  • Computer Science
  • 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)
Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by humans collaborating with them. Various research communities are contributing methods for ITL and a large subset of this research is robot-centered with a focus on developing algorithms that… 

A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents
This work investigates cognitively inspired models of behavior and uses the predicted behavior to delegate control between humans and AI agents through an intermediary entity, which helps overcome the potential shortcomings of either humans or agents in the pursuit of a goal.
Modeling Human Behavior Part I - Learning and Belief Approaches
There is a clear desire to model and comprehend human behavior. Trends in research on this topic show a clear assumption that human reasoning is the presupposed standard in artificial intelligence.
Multimodal Robot Programming by Demonstration: A Preliminary Exploration
A preliminary study on multimodal kinesthetic demonstrations and future directions for using multi-modal demonstrations to enhance robot learning and user programming experiences are described.


  • Dynamic intention structures I: a theory of intention representation. Autonomous Agents and Multi-Agent Systems 16, 2008
Collaborative Plans for Complex Group Action
Characterizing an Analogical Concept Memory for Newellian Cognitive Architectures
A new long-term declarative memory for Soar that leverages computational models of analogical reasoning and generalization; the learning methods implemented in the proposed memory are demonstrated to quickly learn diverse types of novel concepts that are useful in task execution.
Interactive Task Learning from GUI-Grounded Natural Language Instructions and Demonstrations
We present SUGILITE, an intelligent task automation agent that can learn new tasks and relevant associated concepts interactively from the user's natural language instructions and demonstrations.
Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog
Methods are presented for using human-robot dialog to improve language understanding for a mobile robot agent that parses natural language into underlying semantic meanings and uses robotic sensors to create multimodal models of perceptual concepts like "red" and "heavy".
Let’s do that first! A Comparative Analysis of Instruction-Giving in Human-Human and Human-Robot Situated Dialogue
An annotation scheme is presented that captures the structure and content of task intentions in situated dialogue where humans instruct robots to perform novel action sequences and sub-sequences, and that identifies patterns and structural differences between human-human and human-robot communication.
Understanding Intentions in Human Teaching to Design Interactive Task Learning Robots
A taxonomy based on Collaborative Discourse Theory is proposed that organizes human teaching intentions in a human-robot teaching interaction, provides guidance for ITL robot design that leverages a human's natural teaching skills, and reduces the cognitive burden on non-expert instructors.
Towards human-guided machine learning
  • In Proceedings of the 24th International Conference on Intelligent User Interfaces, 2019
Towards using transparency mechanisms to build better mental models
This work observed that the information required to identify the cause of interaction failures can be classified into commonly-defined, uncommonly-defined, and hidden features. Two transparency mechanisms, question-answering and visual explanation capabilities, were implemented, through which a non-expert user can access the robot's internal reasoning.