Corpus ID: 225062130

Towards human-agent knowledge fusion (HAKF) in support of distributed coalition teams

@article{Braines2020TowardsHK,
  title={Towards human-agent knowledge fusion (HAKF) in support of distributed coalition teams},
  author={Dave Braines and Federico Cerutti and Marc Roig Vilamala and Mani B. Srivastava and Lance M. Kaplan and Alun David Preece and Gavin Pearson},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.12327}
}
Future coalition operations can be substantially augmented through agile teaming between human and machine agents, but in a coalition context these agents may be unfamiliar to the human users and expected to operate in a broad set of scenarios rather than being narrowly defined for particular purposes. In such a setting it is essential that the human agents can rapidly build trust in the machine agents through appropriate transparency of their behaviour, e.g., through explanations. The human… 
2 Citations

Coalition situational understanding via explainable neuro-symbolic reasoning and learning
TLDR
This work describes an integrated CSU architecture that combines neural networks with symbolic learning and reasoning to address the problem of sparse training data, and demonstrates how explainability can be achieved for deep neural networks operating on multimodal sensor feeds.
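The paper's CSU architecture is not reproduced here, but the following minimal Python sketch illustrates the general neuro-symbolic pattern the summary describes: confidences from neural perception models are turned into symbolic facts, and a hand-written rule derives a higher-level situation while retaining the rule itself as an explanation. All names (Rule, facts_from_detections, the detection labels) are hypothetical.

```python
# Minimal, illustrative sketch only: neural detections become symbolic facts,
# and a hand-written rule derives a higher-level situation with a traceable
# explanation. All names and labels are hypothetical.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    premises: tuple          # symbolic facts that must all hold
    conclusion: str

def facts_from_detections(detections, threshold=0.7):
    """Turn per-modality neural confidences into discrete symbolic facts."""
    return {label for label, conf in detections.items() if conf >= threshold}

def infer(facts, rules):
    """Fire every rule whose premises are satisfied; keep the rule as provenance."""
    conclusions = []
    for rule in rules:
        if all(p in facts for p in rule.premises):
            conclusions.append((rule.conclusion, rule))
    return conclusions

# Hypothetical neural outputs from a video model and an audio model.
detections = {"crowd_gathering": 0.91, "shouting": 0.83, "vehicle": 0.40}

rules = [Rule(name="R1",
              premises=("crowd_gathering", "shouting"),
              conclusion="possible_disturbance")]

for conclusion, rule in infer(facts_from_detections(detections), rules):
    print(f"{conclusion} (derived by {rule.name} from {rule.premises})")
```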
An Experimentation Platform for Explainable Coalition Situational Understanding
We present an experimentation platform for coalition situational understanding research that highlights capabilities in explainable artificial intelligence/machine learning (AI/ML) and integration of

References

Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
TLDR
A model is described that identifies the different roles agents can fulfil in relation to a machine learning system, shows how an agent's role influences its goals, and draws out the implications for defining interpretability.
DeepCEP: Deep Complex Event Processing Using Distributed Multimodal Information
TLDR
DeepCEP is proposed, a framework that integrates the concepts of deep learning models with complex event processing engines to make inferences across distributed, multimodal information streams with complex spatial and temporal dependencies.
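DeepCEP's engine is not shown here; the sketch below only illustrates the underlying idea of complex event processing over multimodal streams: timestamped events emitted by per-modality neural detectors are matched against a simple "A followed by B within a window" pattern. The event labels and timings are invented for illustration.

```python
# Illustrative sketch only (not the DeepCEP implementation): timestamped events
# produced by per-modality neural detectors are matched against a simple
# "A followed by B within a window" complex-event pattern.

from collections import namedtuple

Event = namedtuple("Event", ["label", "modality", "time"])   # time in seconds

def match_sequence(events, first, second, window):
    """Return (e1, e2) pairs where `first` is followed by `second` within `window` seconds."""
    matches = []
    for e1 in events:
        if e1.label != first:
            continue
        for e2 in events:
            if e2.label == second and 0 < e2.time - e1.time <= window:
                matches.append((e1, e2))
    return matches

# Hypothetical detections from a video model and an acoustic model.
stream = [
    Event("vehicle_stops", "video", 12.0),
    Event("door_slam",     "audio", 14.5),
    Event("vehicle_stops", "video", 40.0),
]

for e1, e2 in match_sequence(stream, "vehicle_stops", "door_slam", window=5.0):
    print(f"complex event: {e1.label} -> {e2.label} ({e2.time - e1.time:.1f}s apart)")
```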
Interpretability of deep learning models: A survey of results
  • Supriyo Chakraborty, Richard J. Tomsett, Prudhvi K. Gurram
  • Computer Science
    2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI)
  • 2017
TLDR
Some of the dimensions that are useful for model interpretability are outlined, and prior work along those dimensions is categorized, in the process performing a gap analysis of what needs to be done to improve model interpretability.
VADR: Discriminative Multimodal Explanations for Situational Understanding
TLDR
This work adapts established state-of-the-art explainability techniques to mid-level fusion networks in order to better understand which modality of the input contributes most to a model's decision and which parts of the data are most relevant to that decision.
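VADR's own method is not given here; as a rough stand-in, the following PyTorch sketch shows a generic gradient-based probe for a two-modality, mid-level fusion network, comparing how much gradient mass each modality receives for the predicted class. The network, tensor shapes, and inputs are hypothetical.

```python
# Illustrative sketch only: a generic gradient-based saliency probe for a
# two-modality (mid-level) fusion network, not the VADR method itself.

import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.video_enc = nn.Linear(64, 16)   # stand-in video feature encoder
        self.audio_enc = nn.Linear(32, 16)   # stand-in audio feature encoder
        self.head = nn.Linear(32, 5)         # mid-level fusion by concatenation

    def forward(self, video, audio):
        fused = torch.cat([torch.relu(self.video_enc(video)),
                           torch.relu(self.audio_enc(audio))], dim=-1)
        return self.head(fused)

model = TinyFusionNet().eval()
video = torch.randn(1, 64, requires_grad=True)
audio = torch.randn(1, 32, requires_grad=True)

logits = model(video, audio)
target = int(logits.argmax())
logits[0, target].backward()                 # gradient of the predicted class score

# Larger gradient mass suggests that modality contributed more to this decision.
print("video saliency:", video.grad.abs().sum().item())
print("audio saliency:", audio.grad.abs().sum().item())
```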
Evidential Deep Learning to Quantify Classification Uncertainty
TLDR
This work treats the predictions of a neural net as subjective opinions and learns, via a deterministic neural net, the function that collects from data the evidence leading to these opinions, achieving unprecedented success in detecting out-of-distribution queries and robustness against adversarial perturbations.
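The Dirichlet bookkeeping behind this approach can be summarised in a few lines: non-negative evidence outputs e_k define a Dirichlet with alpha_k = e_k + 1, the expected class probabilities are alpha_k / S, and the leftover mass u = K / S serves as the uncertainty. The sketch below uses made-up evidence values in place of a trained network's outputs.

```python
# Minimal sketch of the Dirichlet bookkeeping used in evidential deep learning:
# non-negative "evidence" e_k gives alpha_k = e_k + 1, expected probabilities
# alpha_k / S, and vacuous (uncertainty) mass u = K / S. Evidence values here
# are made up; in practice they come from the trained network.

import numpy as np

def dirichlet_opinion(evidence):
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0
    S = alpha.sum()
    prob = alpha / S           # expected class probabilities
    belief = evidence / S      # per-class belief mass
    uncertainty = K / S        # vacuous mass; 1.0 when there is no evidence at all
    return prob, belief, uncertainty

confident = dirichlet_opinion([40.0, 1.0, 0.5])   # strong evidence for class 0
vacuous   = dirichlet_opinion([0.1, 0.2, 0.1])    # almost no evidence (e.g. out-of-distribution)

print("confident: p=%s u=%.2f" % (np.round(confident[0], 2), confident[2]))
print("vacuous:   p=%s u=%.2f" % (np.round(vacuous[0], 2), vacuous[2]))
```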
UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild
TLDR
This work introduces UCF101, which is currently the largest dataset of human actions, and provides baseline action recognition results on this new dataset using a standard bag-of-words approach, with an overall performance of 44.5%.
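The baseline referred to is the classic bag-of-visual-words pipeline; the sketch below illustrates it with scikit-learn on synthetic descriptors standing in for real local video features: a k-means codebook is learned, each clip becomes a codeword histogram, and a linear SVM classifies the histograms. The data, sizes, and two-class setup are invented for illustration.

```python
# Illustrative bag-of-visual-words pipeline: random descriptors stand in for
# real local video features (e.g. dense trajectories) extracted from clips.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_clips, descs_per_clip, dim, vocab_size = 40, 50, 16, 32

# Hypothetical per-clip local descriptors and clip labels (two action classes).
clips = [rng.normal(loc=(i % 2), size=(descs_per_clip, dim)) for i in range(n_clips)]
labels = np.array([i % 2 for i in range(n_clips)])

# 1) Learn a visual vocabulary over all descriptors.
codebook = KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(np.vstack(clips))

# 2) Represent each clip as a normalised histogram of codeword assignments.
def bow_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(c) for c in clips])

# 3) Train and evaluate a linear classifier on the histograms.
clf = LinearSVC().fit(X[:30], labels[:30])
print("held-out accuracy:", (clf.predict(X[30:]) == labels[30:]).mean())
```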
An Experimentation Platform for Explainable Coalition Situational Understanding
We present an experimentation platform for coalition situational understanding research that highlights capabilities in explainable artificial intelligence/machine learning (AI/ML) and integration of
Increasing negotiation performance at the edge of the network
TLDR
It is empirically shown that agents using ACOP can significantly reduce the number of messages a negotiation takes, independently of the strategy agents choose, and that when an agreement is possible they reach it sooner, with no negative effect on the utility.