Copyright © 1996, American Association for Artificial Intelligence. All rights reserved.

Despite the existence of programs that are able to generate so-called conditional plans, there has yet to emerge a clear and general specification of what it is these programs are looking for: what exactly is a plan in this setting, and when is it correct? In this paper, we develop and motivate a specification, within the situation calculus, of conditional and iterative plans over domains that include binary sensing actions. The account is built on an existing theory of action that includes a solution to the frame problem, and on an extension to it that handles sensing actions and the effect they have on the knowledge of a robot. Plans are taken to be programs in a new, simple robot program language, and the planning task is to find a program that the robot would know at the outset to lead to a final situation in which the goal is satisfied. This specification is used to analyze the correctness of a small example plan, as well as variants that have redundant or missing sensing actions. We also investigate whether the proposed robot program language is powerful enough to express a plan for any intuitively achievable goal.

Thanks to the members of the University of Toronto Cognitive Robotics group (Yves Lesperance, Fangzhen Lin, Daniel Marcu, Ray Reiter, and Richard Scherl) and to Fahiem Bacchus for discussion, comments, and suggestions. A special thanks to Yves for helping with the definition in Section 3, and to Fangzhen for asking, and helping to answer, the question of Section 5. This research was made possible by financial support from the Information Technology Research Center, the Institute for Robotics and Intelligent Systems, and the Natural Sciences and Engineering Research Council. They also had to pick up the asinine AAAI fee for extra pages. However, see  for some ideas on how to generate plans containing loops (when there is no sensing).
Department of Computer Science, University of Toronto, Toronto, ON, M5S 3H5, Canada
email@example.com

Much of high-level symbolic AI research has been concerned with planning: specifying the behaviour of intelligent agents by providing goals to be achieved or maintained. In the simplest case, the output of a planner is a sequence of actions to be performed by the agent. However, a number of researchers are investigating conditional planning (see, for example, [3, 9, 14, 17]), where the output, for one reason or another, is not expected to be a fixed sequence of actions, but a more general specification involving conditionals and iteration. In this paper, we will be concerned with conditional planning problems where what action to perform next in a plan may depend on the result of an earlier sensing action. Consider the following motivating example:

The local airport has only two boarding gates, Gate A and Gate B. Every plane will be parked at one of the two gates. In the initial state, you are at home. From home, it is possible to go to the airport, and from there you can go directly to either gate. At the airport, it is also possible to check the departures screen to find out what gate a flight will be using. Once at a gate, the only thing to do is to board the plane that is parked there. The goal is to be on the plane for Flight 123.

There is clearly no sequence of actions that can be shown to achieve the desired goal: which gate to go to depends on the (runtime) result of checking the departures screen. Surprisingly, despite the existence of planners that are able to solve simple problems like this, there has yet to emerge a clear specification of what it is that these planners are looking for: what is a plan in this setting, and when is it correct? In this paper, we will propose a new definition, show some examples of plans that meet (and fail to meet) the specification, and argue for the utility of this specification independent of plan generation.
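The branching structure such a plan needs can be sketched in ordinary code. The following Python fragment is only an illustration, with made-up names (World, check_departures, and so on) rather than the robot program language developed later in the paper; the point is that the plan's conditional is driven by a value that only becomes known at run time, so no fixed action sequence could replace it:

```python
class World:
    """Toy simulation of the airport domain. The gate assignment is fixed
    in the world but unknown to the agent until it senses it."""

    def __init__(self, flight_gate):
        self.flight_gate = flight_gate   # "A" or "B"; hidden from the agent
        self.location = "home"
        self.boarded = False

    def go(self, place):
        self.location = place

    def check_departures(self, flight):
        # Binary sensing action: its result is only available at run time.
        assert self.location == "airport"
        return self.flight_gate

    def board(self):
        # Boarding only succeeds at the gate where the plane is parked.
        assert self.location == "gate " + self.flight_gate
        self.boarded = True


def airport_plan(world):
    """A conditional plan: sense, then branch on the sensed value."""
    world.go("airport")
    gate = world.check_departures("Flight 123")  # sensing action
    world.go("gate " + gate)                     # branch on runtime result
    world.board()
```

Whichever gate the world happens to assign, the same plan reaches the goal, which is exactly what a fixed action sequence cannot guarantee here.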
What we will not do in this paper is propose a new planning procedure. In many cases, existing procedures like the one presented in  will be adequate, given various representational restrictions. Moreover, our specification goes beyond what can be handled by existing planning procedures, covering problems like the following:

We begin with a supply of eggs, some of which may be bad, but at least 3 of which are good. We have a bowl and a saucer, each of which can be emptied at any time. It is possible to break a new egg into the saucer, if it is empty, or into the bowl. By smelling a container, it is possible to tell if it contains a bad egg. Also, the contents of the saucer can be transferred to the bowl. The goal is to get 3 good eggs and no bad ones into the bowl.

While it is far from clear how to automatically generate a plan to solve a problem like this, our account, at least, will make clear what a solution ought to be.
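To see why this problem calls for iteration as well as sensing, the intended solution can be sketched in ordinary code. The names and the egg-supply abstraction below are ours, not the paper's, and the sketch compresses the saucer manipulation into comments; what matters is the loop whose exit condition depends on repeated sensing results:

```python
def get_three_good_eggs(egg_supply):
    """Iterative plan with sensing: break an egg into the saucer, smell it,
    and transfer it to the bowl only if it is good, until the bowl holds 3.

    egg_supply is an iterator yielding "good" or "bad"; it stands in for
    the action of breaking a new egg into the (empty) saucer."""
    good_in_bowl = 0
    while good_in_bowl < 3:          # cannot be unrolled: the number of
        egg = next(egg_supply)       # iterations is unknown in advance
        if egg == "good":            # smell the saucer: binary sensing action
            good_in_bowl += 1        # transfer saucer contents to the bowl
        # otherwise empty the saucer and try again
    return good_in_bowl
```

Since any finite number of bad eggs may precede the third good one, no bounded conditional plan suffices; the loop is essential.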