• Publications
Foundation and application of knowledge base verification
A theoretical foundation for anomaly detection methods is presented, together with empirical results from applying one anomaly detection tool to verify five real-world knowledge-based systems.
Interpretability of deep learning models: A survey of results
Some of the dimensions that are useful for model interpretability are outlined, and prior work along those dimensions is categorized, in the process of performing a gap analysis of what needs to be done to improve model interpretability.
Ontology Reconciliation
This chapter examines the reasons why people and organisations tend to use different ontologies, and why the pervasive adoption of common ontologies is unlikely, and reviews alternative architectures for multiple-ontology systems on a large scale.
Asking 'Why' in AI: Explainability of intelligent systems - perspectives and challenges
  • A. Preece
  • Computer Science
    Intell. Syst. Account. Finance Manag.
  • 1 April 2018
Current issues concerning ML-based AI systems are viewed from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today.
The KRAFT architecture for knowledge fusion and transformation
Principles and practice in verifying rule-based systems
This paper provides not only a set of underlying principles for performing knowledge base verification through anomaly detection, but also a survey of the state of the art in building practical tools for carrying out such verification.
Evaluating Verification and Validation Methods in Knowledge Engineering
The paper offers pointers to areas where further work needs to be done on developing more effective V&V techniques.
Kraft: An Agent Architecture for Knowledge Fusion
The paper presents the KRAFT architecture and the three kinds of agent, and includes a description of a demonstration KRAFT application in the domain of telecommunications service provision.
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
A model is described that identifies the different roles agents can fulfill in relation to a machine learning system, showing how an agent's role influences its goals, and the implications this has for defining interpretability.