Publications
What Does Explainable AI Really Mean? A New Conceptualization of Perspectives
TLDR
We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into their algorithmic mechanisms; interpretable systems whose algorithms users can mathematically analyze; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached.
  • Citations: 153 · Influence: 15 · PDF
Ultra-Strong Machine Learning: comprehensibility of programs learned with ILP
TLDR
We present two sets of experiments testing human comprehensibility of logic programs.
  • Citations: 51 · Influence: 6 · PDF
Neural-Symbolic Learning and Reasoning: Contributions and Challenges
TLDR
This paper recalls the main contributions and discusses key challenges for neural-symbolic integration which have been identified at a recent Dagstuhl seminar.
  • Citations: 103 · Influence: 4 · PDF
Neural-Symbolic Learning and Reasoning: A Survey and Interpretation
TLDR
The study and understanding of human behaviour are relevant to computer science, artificial intelligence, neural computation, cognitive science, philosophy, psychology, and several other areas.
  • Citations: 91 · Influence: 2 · PDF
Computational Creativity Research: Towards Creative Machines
TLDR
Computational Creativity, Concept Invention, and General Intelligence are all flourishing research disciplines in their own right, producing surprising and captivating results that continuously influence our view of where the limits of intelligent machines lie, pushing the boundaries a bit further each day.
  • Citations: 38 · Influence: 2
How Does Predicate Invention Affect Human Comprehensibility?
TLDR
We present the results of experiments testing human comprehensibility of logic programs learned with and without predicate invention.
  • Citations: 25 · Influence: 2 · PDF
What makes a good explanation? Cognitive dimensions of explaining intelligent machines
TLDR
Explainability is assumed to be a key factor for the adoption of Artificial Intelligence systems in a wide range of contexts (Hoffman, Mueller, Klein, & Litman, 2018; Doran, Schulz, & Besold, 2017).
  • Citations: 10 · Influence: 2 · PDF
Human-Level Artificial Intelligence Must Be a Science
TLDR
Human-level artificial intelligence (HAI) is surely a special research endeavor in more than one way: the very nature of intelligence is, in the first place, not entirely clear; there are no commonly agreed-upon criteria, necessary or sufficient, for the ascription of intelligence other than similarity to human performance; and there is a lack of clarity concerning how to properly investigate artificial intelligence and how to proceed after the very first steps of implementing an artificially intelligent system, among other issues.
  • Citations: 8 · Influence: 2
A narrative in three acts: Using combinations of image schemas to model events
TLDR
We formally investigate how combinations of image schemas (or image-schematic profiles) can model essential aspects of events, and discuss benefits for artificial intelligence and cognitive systems research, in particular concerning the role of such basic events in concept formation.
  • Citations: 11 · Influence: 1 · PDF
Towards a Domain-Independent Computational Framework for Theory Blending
TLDR
This paper proposes a logic-based framework for blending and metaphor-making and explores its applicability in settings as diverse as mathematical domain formation, classical rationality puzzles, and noun-noun combinations.
  • Citations: 19 · Influence: 1