In this paper we present a new approach to the problem of planning with incomplete information and sensing. Our approach is based on a higher-level, "knowledge-based" representation of the planner's knowledge and of the domain actions. In particular, in our approach we use a set of formulas from a first-order modal logic of knowledge to represent the …
In (Petrick and Bacchus 2002), a "knowledge-level" approach to planning under incomplete knowledge and sensing was presented. In comparison with alternative approaches based on representing sets of possible worlds, this higher-level representation is richer, but the inferences it supports are weaker. Nevertheless, because of its richer representation, it is …
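The knowledge-level idea in the two abstracts above can be illustrated with a toy sketch: instead of enumerating sets of possible worlds, the planner tracks formulas about what it knows. This is only an illustrative simplification, not the papers' modal-logic formalism; the class, method, and fact names are invented for the example.

```python
# Toy "knowledge-level" state: track known facts directly rather than
# enumerating possible worlds. Names are illustrative, not from the papers.

class KnowledgeState:
    def __init__(self):
        self.known_true = set()
        self.known_false = set()
        self.known_whether = set()  # truth value will be known after sensing

    def query(self, fact):
        # Distinguish "known true", "known false", "known-whether", and
        # genuinely unknown facts -- a richer report than a single world set.
        if fact in self.known_true:
            return "K(f)"
        if fact in self.known_false:
            return "K(not f)"
        if fact in self.known_whether:
            return "Kw(f)"
        return "unknown"

    def sense(self, fact):
        # A sensing action commits to learning the value of `fact` at run
        # time without fixing which value it is at plan time.
        self.known_whether.add(fact)

ks = KnowledgeState()
ks.known_true.add("door_open")
ks.sense("light_on")
```

The point of the sketch is the asymmetry the abstracts describe: this representation is compact and supports explicit "know-whether" queries, but it cannot answer every question a full possible-worlds model could.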
This paper formalises Object–Action Complexes (OACs) as a basis for symbolic representations of sensory–motor experience and behaviours. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper gives a formal definition of OACs, provides examples of their use for autonomous cognitive …
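One way to picture the OAC idea above is as an action paired with a prediction of its effect and a running statistic of how reliably that prediction holds. The following is a minimal sketch under that assumption; the field names and the `grasp` example are invented for illustration and do not reproduce the paper's formal definition.

```python
# Toy OAC-like structure: an action, a predicted effect, and an empirical
# reliability statistic updated from experience. Illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, bool]

@dataclass
class OAC:
    name: str
    predict: Callable[[State], State]  # expected effect of the action
    trials: int = 0
    successes: int = 0

    def update(self, before: State, after: State) -> None:
        # Fold one sensory-motor experience into the reliability statistic.
        self.trials += 1
        if self.predict(before) == after:
            self.successes += 1

    @property
    def reliability(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

# Hypothetical grasp action: predicted to leave the agent holding the object.
grasp = OAC("grasp", lambda s: {**s, "holding": True})
grasp.update({"holding": False}, {"holding": True})
```

Keeping the prediction and its measured reliability together is what lets a symbolic planner decide whether an action learned from experience is trustworthy enough to use.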
We introduce a humanoid robot bartender that is capable of dealing with multiple customers in a dynamic, multi-party social setting. The robot system incorporates state-of-the-art components for computer vision, linguistic processing, state management, high-level reasoning, and robot control. In a user evaluation, 31 participants interacted with the …
Natural language generation (NLG) is a major subfield of computational linguistics with a long tradition as an application area of automated planning systems. While things were relatively quiet with the planning approach to NLG for a while, several recent publications have sparked a renewed interest in this area. In this paper, we investigate the extent to …
Agents learning to act autonomously in real-world domains must acquire a model of the dynamics of the domain in which they operate. Learning domain dynamics can be challenging, especially where an agent only has partial access to the world state, and/or noisy external sensors. Even in standard STRIPS domains, existing approaches cannot learn from noisy, …
A robot coexisting with humans must not only be able to perform physical tasks, but must also be able to interact with humans in a socially appropriate manner. In many social settings, this involves the use of social signals like gaze, facial expression, and language. In this paper, we describe an application of planning to task-based social interaction …
Robot task planning is an inherently challenging problem, as it covers both continuous-space geometric reasoning about robot motion and perception, as well as purely symbolic knowledge about actions and objects. This paper presents a novel "knowledge of volumes" framework for solving generic robot tasks in partially known environments. In particular, …
We investigate the problem of learning action effects in partially observable STRIPS planning domains. Our approach is based on a voted kernel perceptron learning model, where action and state information is encoded in a compact vector representation as input to the learning mechanism, and resulting state changes are produced as output. Our approach relies …
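The learning model named in the abstract above can be sketched in miniature. The following is a generic voted kernel perceptron on toy vectors, assuming a linear kernel and a binary label per state-change feature; it illustrates the algorithm family, not the paper's encoding of actions and states.

```python
# Minimal voted kernel perceptron: intermediate hypotheses are stored as the
# examples they erred on, and each hypothesis votes weighted by how many
# steps it survived. Linear kernel and toy data are illustrative assumptions.
import numpy as np

def linear_kernel(a, b):
    return float(np.dot(a, b))

class VotedKernelPerceptron:
    def __init__(self, kernel=linear_kernel):
        self.kernel = kernel
        self.mistakes = []  # (x, y) pairs the perceptron misclassified
        self.votes = []     # survival count of each intermediate hypothesis

    def _raw(self, x, upto):
        # Kernelised score of the hypothesis built from the first `upto` mistakes.
        return sum(y * self.kernel(xm, x) for xm, y in self.mistakes[:upto])

    def fit(self, X, Y, epochs=10):
        c = 0
        for _ in range(epochs):
            for x, y in zip(X, Y):
                pred = 1 if self._raw(x, len(self.mistakes)) > 0 else -1
                if pred == y:
                    c += 1
                else:
                    self.votes.append(c)      # retire the current hypothesis
                    self.mistakes.append((x, y))
                    c = 1
        self.votes.append(c)                  # vote for the final hypothesis

    def predict(self, x):
        # Weighted majority vote over all intermediate hypotheses.
        total = sum(self.votes[k] * (1 if self._raw(x, k) > 0 else -1)
                    for k in range(len(self.mistakes) + 1))
        return 1 if total > 0 else -1
```

Swapping `linear_kernel` for a nonlinear one changes only the similarity measure, which is what makes the kernelised form attractive for structured state/action encodings like the one the abstract describes.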