Truly smart systems need to interface with the behaviour of human and non-human actors in their surroundings. Systems with such interfaces could prove beneficial in supporting those with non-standard communication practices, the elderly living alone, people with disabilities, and many others. While the benefits are clear, the means of achieving true…
We present an overview of different theories of explanation from the philosophy and cognitive science communities. Based on these theories, as well as models of explanation from the knowledge-based systems area, we present a framework for explanation in case-based reasoning (CBR) based on explanation goals. We propose ways that the goals of the user and…
In this paper, we describe an approach to modelling context-aware systems starting at the knowledge level. We make use of ideas from Activity Theory to structure the general context model and to assess empirical data. We further describe how the data-driven and the model-driven aspects of our approach are combined into a single knowledge model. We…
Research on explanation in Case-Based Reasoning (CBR) is gaining momentum. In this context, fundamental issues concerning what explanations are and to what end we use them have to be reconsidered. This article presents a preliminary outline of the combination of two recently proposed classifications of explanations based on the type of the explanation…
In this paper, we present a short overview of different theories of explanation. We argue that the goals of the user should be taken into account when deciding what constitutes a good explanation for a given Case-Based Reasoning (CBR) system. We identify some general goal types relevant to many CBR systems and use these goals to identify some limitations in using…
We have argued elsewhere that user goals should be taken into account when deciding what kind of explanation of its results a CBR system should give. In this paper, we propose the use of an Activity Theory-based methodology for identifying different user goals and expectations towards explanations given by a system supporting a work process.
Interacting with intelligent systems in general, and ambient intelligent systems in particular, requires that these systems be able to build a trust relationship with their users. The ability to explain its own behaviour is one of the most important capabilities such a system can exhibit to gain trust. We argue that explanations are not just an…