Discovering User-Interpretable Capabilities of Black-Box Planning Agents

@inproceedings{Verma2022DiscoveringUC,
  title={Discovering User-Interpretable Capabilities of Black-Box Planning Agents},
  author={Pulkit Verma and Shashank Rao Marpally and Siddharth Srivastava},
  booktitle={Proceedings of the Nineteenth International Conference on Principles of Knowledge Representation and Reasoning},
  year={2022}
}
Several approaches have been developed for answering users' specific questions about AI behavior and for assessing their core functionality in terms of primitive executable actions. However, the problem of summarizing an AI agent's broad capabilities for a user is comparatively new. This paper presents an algorithm for discovering from scratch the suite of high-level "capabilities" that an AI system with arbitrary internal planning algorithms/policies can perform. It computes conditions… 
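The abstract above is truncated; purely as an illustration of the kind of output such capability discovery aims at (the class, field names, and predicates below are hypothetical, not data structures from the paper), a discovered capability can be pictured as a parameterized high-level action with applicability conditions and effects stated in the user's vocabulary:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """Hypothetical sketch of a user-interpretable capability description."""
    name: str                # e.g., "deliver_package"
    parameters: list[str]    # variables in the user's vocabulary
    preconditions: set[str]  # literals that must hold for the capability to apply
    effects: set[str]        # literals made true/false by executing the capability

# Example: a mobile-robot capability expressed over user-level predicates.
deliver = Capability(
    name="deliver_package",
    parameters=["?pkg", "?room"],
    preconditions={"holding(?pkg)", "door_open(?room)"},
    effects={"in(?pkg, ?room)", "not holding(?pkg)"},
)
```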

JEDAI: A System for Skill-Aligned Explainable Robot Planning

TLDR
JEDAI features a novel synthesis of research ideas from integrated task and motion planning and explainable AI that helps users create high-level, intuitive plans while ensuring that they will be executable by the robot.

References

SHOWING 1-10 OF 69 REFERENCES

Composable Planning with Attributes

TLDR
This work considers a setup in which an environment is augmented with a set of user-defined attributes that parameterize the features of interest, and proposes a method that learns a policy for transitioning between "nearby" sets of attributes and maintains a graph of possible transitions.
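As a rough sketch of the idea summarized above (the attribute names, the transition table, and the use of breadth-first search are illustrative choices, not the paper's implementation), planning over attributes can be pictured as graph search over attribute sets connected by learned transitions:

```python
from collections import deque

# Illustrative graph: nodes are sets of user-defined attributes, edges are
# transitions a low-level policy is assumed to achieve between "nearby" sets.
transitions = {
    frozenset({"at_door"}): [frozenset({"at_door", "door_open"})],
    frozenset({"at_door", "door_open"}): [frozenset({"in_room"})],
    frozenset({"in_room"}): [frozenset({"at_door"})],
}

def plan(start, goal):
    """Breadth-first search over the attribute graph for a transition sequence."""
    queue, visited = deque([(start, [start])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for nxt in transitions.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

print(plan(frozenset({"at_door"}), frozenset({"in_room"})))
```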

Asking the Right Questions: Learning Interpretable Action Models Through Query Answering

TLDR
A new paradigm is developed for estimating an interpretable, relational model of a black-box autonomous agent that can plan and act, using a rudimentary query interface with the agent and a hierarchical querying algorithm that generates an interrogation policy for estimating the agent's internal model in a user-interpretable vocabulary.
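A minimal sketch of query-driven model estimation in this spirit, assuming a hypothetical yes/no "can you execute this action here?" interface (the StubAgent class, the pruning rule, and all predicate names are invented for illustration and are not the paper's query interface or algorithm):

```python
class StubAgent:
    """Stand-in agent whose true preconditions are known, for demonstration only."""
    def __init__(self, true_preconditions):
        self.true_preconditions = set(true_preconditions)

    def can_execute(self, action, state):
        return self.true_preconditions <= set(state)

def refine_preconditions(agent, action, candidate_literals, test_states):
    """Keep only candidate literals consistent with the agent's yes/no answers."""
    preconditions = set(candidate_literals)
    for state in test_states:
        if agent.can_execute(action, state):
            # If the action ran in this state, literals missing here are not preconditions.
            preconditions &= set(state)
    return preconditions

agent = StubAgent({"holding(block)"})
states = [{"holding(block)", "clear(table)"}, {"holding(block)"}]
print(refine_preconditions(agent, "place",
                           {"holding(block)", "clear(table)", "gripper_dry"}, states))
```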

Differential Assessment of Black-Box AI Agents

TLDR
This work proposes a novel approach for differentially assessing black-box AI agents that have drifted from their previously known models; it generates an active querying policy that selectively queries the agent and computes an updated model of its functionality.

A review of learning planning action models

TLDR
A survey of machine learning techniques for learning planning action models is presented; it describes the characteristics of the learning systems and discusses some open issues.

From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning

TLDR
The results establish a principled link between high-level actions and abstract representations, a concrete theoretical foundation for constructing abstract representations with provable properties, and a practical mechanism for autonomously learning abstract high-level representations.

Learning action models from plan examples using weighted MAX-SAT

Symbolic Plans as High-Level Instructions for Reinforcement Learning

TLDR
An empirical evaluation shows that agents trained with this approach, which uses techniques from knowledge representation and reasoning to define final-state goal tasks and automatically produce their corresponding reward functions, converge to near-optimal solutions faster than standard RL and HRL methods.
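As a small illustrative sketch of the general idea of deriving a reward function from a declarative final-state goal (the interface and predicate names are assumptions, not the paper's construction):

```python
def make_goal_reward(goal_literals):
    """Return a sparse reward function that pays 1.0 only when all goal literals hold."""
    goal = set(goal_literals)

    def reward(state):
        return 1.0 if goal <= set(state) else 0.0

    return reward

reward_fn = make_goal_reward({"on(a, b)", "clear(a)"})
print(reward_fn({"on(a, b)", "clear(a)", "handempty"}))  # 1.0: goal satisfied
print(reward_fn({"on(a, table)"}))                       # 0.0: goal not satisfied
```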

Learning action models with minimal observability

Learning Interpretable Models Expressed in Linear Temporal Logic

TLDR
The problem of learning a Linear Temporal Logic (LTL) formula that parsimoniously captures a given set of positive and negative example traces is introduced. The approach exploits a symbolic state representation, searching through a space of labeled skeleton formulae to construct an alternating automaton that models the observed behavior.
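A toy sketch of the general skeleton-enumeration idea, restricted to a tiny G/F fragment evaluated over finite traces for brevity (this is not the paper's alternating-automaton construction; the names and the simplified finite-trace semantics are assumptions):

```python
from itertools import product

def holds(formula, trace, i=0):
    """Evaluate a small LTL fragment (atoms, G, F) on a finite trace suffix."""
    op, arg = formula
    if op == "atom":
        return i < len(trace) and arg in trace[i]
    if op == "G":   # globally: arg holds at every remaining step
        return all(holds(arg, trace, j) for j in range(i, len(trace)))
    if op == "F":   # finally: arg holds at some remaining step
        return any(holds(arg, trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

def learn(positives, negatives, propositions):
    """Return the first enumerated formula consistent with all labeled traces."""
    for op, p in product(["G", "F"], propositions):
        candidate = (op, ("atom", p))
        if all(holds(candidate, t) for t in positives) and \
           not any(holds(candidate, t) for t in negatives):
            return candidate
    return None

positives = [[{"req"}, {"req", "grant"}], [{"grant"}]]
negatives = [[{"req"}, {"idle"}]]
print(learn(positives, negatives, {"req", "grant", "idle"}))  # ('F', ('atom', 'grant'))
```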
...