Ronald P. A. Petrick

In this paper we present a new approach to the problem of planning with incomplete information and sensing. Our approach is based on a higher-level, “knowledge-based” representation of the planner’s knowledge and of the domain actions. In particular, in our approach we use a set of formulas from a first-order modal logic of knowledge to represent the …
In (Petrick and Bacchus 2002), a “knowledge-level” approach to planning under incomplete knowledge and sensing was presented. In comparison with alternative approaches based on representing sets of possible worlds, this higher-level representation is richer, but the inferences it supports are weaker. Nevertheless, because of its richer representation, it is …
Natural language generation (NLG) is a major subfield of computational linguistics with a long tradition as an application area of automated planning systems. While the planning approach to NLG was relatively quiet for some time, several recent publications have sparked renewed interest in this area. In this paper, we investigate the extent to …
The problem of planning dialog moves can be viewed as an instance of the more general AI problem of planning with incomplete information and sensing. Sensing actions complicate the planning process since such actions engender potentially infinite state spaces. We adapt the Linear Dynamic Event Calculus (LDEC) to the representation of dialog acts using …
This paper formalises Object–Action Complexes (OACs) as a basis for symbolic representations of sensory–motor experience and behaviours. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper gives a formal definition of OACs, provides examples of their use for autonomous cognitive …
We describe an approach to integrated robot control, high-level planning, and action effect learning that attempts to overcome the representational difficulties that exist between these diverse areas. Our approach combines ideas from robot vision, knowledge-level planning, and connectionist machine learning, and focuses on the representational needs of these …
We introduce a humanoid robot bartender that is capable of dealing with multiple customers in a dynamic, multi-party social setting. The robot system incorporates state-of-the-art components for computer vision, linguistic processing, state management, high-level reasoning, and robot control. In a user evaluation, 31 participants interacted with the …
Robot task planning is an inherently challenging problem, as it covers both continuous-space geometric reasoning about robot motion and perception, as well as purely symbolic knowledge about actions and objects. This paper presents a novel “knowledge of volumes” framework for solving generic robot tasks in partially known environments. In …
Agents learning to act autonomously in real-world domains must acquire a model of the dynamics of the domain in which they operate. Learning domain dynamics can be challenging, especially where an agent only has partial access to the world state, and/or noisy external sensors. Even in standard STRIPS domains, existing approaches cannot learn from noisy, …
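As background for the domain-learning setting above, a minimal sketch of the standard STRIPS action model is shown below. This is textbook STRIPS (preconditions, add list, delete list over ground propositions), not code from the paper; the blocks-world `pickup(a)` action and its proposition names are illustrative assumptions.

```python
# Minimal STRIPS-style action model: an action is applicable when its
# preconditions hold in the state; applying it removes the delete list
# and adds the add list. Illustrative sketch only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state: frozenset) -> bool:
        # All preconditions must be present in the current state.
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        if not self.applicable(state):
            raise ValueError(f"{self.name} is not applicable")
        # STRIPS progression: (state - delete list) + add list.
        return (state - self.del_effects) | self.add_effects


# Classic blocks-world example: pick up block a from the table.
pickup_a = Action(
    name="pickup(a)",
    preconditions=frozenset({"clear(a)", "ontable(a)", "handempty"}),
    add_effects=frozenset({"holding(a)"}),
    del_effects=frozenset({"clear(a)", "ontable(a)", "handempty"}),
)

state = frozenset({"clear(a)", "ontable(a)", "handempty"})
new_state = pickup_a.apply(state)  # -> {"holding(a)"}
```

Learning domain dynamics amounts to recovering the `preconditions`, `add_effects`, and `del_effects` sets from observed state transitions, which becomes hard precisely when those observed states are partial or noisy.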