The problem of selecting actions in environments that are dynamic and not completely predictable or observable is a central problem in intelligent behavior. From an AI point of view, the problem is to design a mechanism that can select the best actions given the information provided by sensors and a suitable model of the actions and goals. We call this the problem of Planning, as it is a direct generalization of the problem considered in Planning research, where feedback is absent and the effect of actions is assumed to be predictable. In this paper we present an approach to Planning that combines ideas and methods from Operations Research and Artificial Intelligence. Basically, Planning problems are described in high-level action languages that are compiled into general mathematical models of sequential decision making known as Markov Decision Processes or Partially Observable Markov Decision Processes, which are then solved by suitable Heuristic Search Algorithms. The results are controllers that map sequences of observations into actions and which, under certain conditions, can be shown to be optimal. We show how this approach applies to a number of concrete problems and discuss its relation to work in Reinforcement Learning.
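As a minimal sketch of the kind of model the abstract refers to (not the paper's own implementation), the following assumes a tiny, hand-built discounted Markov Decision Process with explicit transition probabilities and rewards, and solves it by value iteration; the resulting greedy policy is a controller mapping states to actions. All state, action, and reward values below are illustrative assumptions.

```python
# Toy discounted MDP solved by value iteration.
# States, actions, transitions, and rewards are invented for illustration.

GAMMA = 0.95
STATES = ["s0", "s1", "goal"]
ACTIONS = ["a", "b"]

# T[(s, a)] = list of (next_state, probability); R[(s, a)] = immediate reward
T = {
    ("s0", "a"): [("s1", 0.9), ("s0", 0.1)],
    ("s0", "b"): [("s0", 1.0)],
    ("s1", "a"): [("goal", 0.8), ("s0", 0.2)],
    ("s1", "b"): [("s1", 1.0)],
    ("goal", "a"): [("goal", 1.0)],
    ("goal", "b"): [("goal", 1.0)],
}
R = {(s, a): (1.0 if s == "s1" and a == "a" else 0.0)
     for s in STATES for a in ACTIONS}

def q_value(V, s, a):
    """Expected one-step reward plus discounted value of successor states."""
    return R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in T[(s, a)])

def value_iteration(eps=1e-6):
    """Iterate the Bellman optimality backup to convergence."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {s: max(q_value(V, s, a) for a in ACTIONS) for s in STATES}
        if max(abs(V_new[s] - V[s]) for s in STATES) < eps:
            break
        V = V_new
    # Greedy policy: a controller mapping states to actions
    policy = {s: max(ACTIONS, key=lambda a: q_value(V, s, a)) for s in STATES}
    return V, policy

V, policy = value_iteration()
print(policy)  # optimal action per state
```

In the fully observable MDP case the controller is a function of the current state; in the POMDP case discussed in the paper, the same kind of backup is applied over belief states, so the controller effectively maps sequences of observations into actions.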