We propose a formal approach to the problem of prediction based on the following steps: first, a mental-level model is constructed based on the agent's previous actions; next, the model is updated to account for any new observations by the agent; finally, we predict the optimal action w.r.t. the agent's mental state as its next action. This paper formalizes this prediction process. In order to carry out this process, we need to understand how a mental state can be ascribed to an agent and how this mental state should be updated. In [Brafman and Tennenholtz, 1994b], we examined the first stage. Here we investigate a particular update operator and show that its ascription requires making only weak modeling assumptions.

1 Introduction

Tools for representing information about other agents are crucial in many contexts. Often, the goal of maintaining such information is to facilitate prediction of other agents' behavior, so that we can function better in their presence. Mental-level models, models that use formal counterparts of various mental states to describe the state of an agent, provide tools for representing such information. Once we have a model of an agent's mental state, we can use it to predict future actions by finding out what an agent in such a state would perceive as its best action. The goal of this paper is to advance our understanding of basic questions related to the construction of a mental-level model, and in particular its application to prediction. The idea of ascribing mental qualities for the purpose of prediction is not new. John McCarthy discusses it in [McCarthy, 1979]. An important aspect of his approach is that even when nothing in the internal structure of the entity modeled directly resembles beliefs, desires, or other mental qualities, it may be possible and useful to model it as if it has such qualities. Thus, McCarthy views mental qualities as abstractions.
This view is shared by another well-known author, Allen Newell [Newell, 1980], who contemplates the possibility of viewing computer programs at a level more abstract than that of the programming language, which he calls the knowledge level. The notion of a mental state is useful because it is abstract. Models at more specific levels, e.g., mechanical and biological models, are difficult to construct. They require information that we often do not have, such as the mechanical structure of the agent or its program. On the other hand, mental-level models can be constructed based on observable facts: the agent's behavior, together with some background knowledge. In fact, as McCarthy points out, we might sometimes want to use these models even when we have precise lower-level specifications of the agent, e.g., C code. We might do this either because the mental-level description is more intuitive or because it is computationally less complex to work with.

We present a formalism that attempts to make these ideas more concrete and that will hopefully lead to a better understanding of how the ascription of mental state could be mechanized. Motivated by work in decision theory [Luce and Raiffa, 1957] and work on knowledge ascription [Halpern and Moses, 1990; Rosenschein, 1985], we suggested in [Brafman and Tennenholtz, 1994b] a specific structure for mental-level models, consisting of beliefs, desires, and a decision criterion. That work showed how these elements act as constraints on the agent's actions, and how these constraints can be used to ascribe beliefs to the agent. We would like to use this model in a particular prediction context: we observe an agent performing part of a task, we know its goal, and we would like to predict its next actions. We use the following process: first, we ascribe beliefs to the agent based on the behavior we have seen so far.
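To make the idea of beliefs-as-constraints concrete, the following is a minimal illustrative sketch, not the paper's formalism: it assumes beliefs are probability distributions over worlds and that the decision criterion is expected-utility maximization, and all identifiers (`ascribe_beliefs`, `utility`, and so on) are hypothetical. A candidate belief state is retained only if every observed action would have been optimal under it.

```python
# Illustrative sketch of belief ascription as constraint filtering.
# Assumptions (not from the paper): beliefs are dicts mapping worlds to
# probabilities, and the decision criterion is expected-utility maximization.

def expected_utility(action, belief, utility):
    """Expected utility of `action` under `belief`, a dict world -> probability."""
    return sum(p * utility(action, world) for world, p in belief.items())

def is_optimal(action, belief, actions, utility):
    """Would `action` be a best choice for an agent holding `belief`?"""
    best = max(expected_utility(a, belief, utility) for a in actions)
    return expected_utility(action, belief, utility) >= best

def ascribe_beliefs(observed_actions, candidate_beliefs, actions, utility):
    """Keep only the candidate belief states under which every observed
    action would have been perceived as optimal."""
    return [b for b in candidate_beliefs
            if all(is_optimal(a, b, actions, utility)
                   for a in observed_actions)]
```

For instance, if some action is optimal only when the agent is certain of a particular world, observing that action lets us ascribe that certainty; beliefs under which the observed behavior would have been suboptimal are filtered out.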
Next, we update the ascribed beliefs based on observations the agent makes, e.g., new information it has access to or the outcomes of its past actions. Then, in order to predict the agent's next action, we examine what action would be perceived as best by an agent in this mental state. In order to perform this prediction process, we must understand how beliefs can be ascribed, how they should be updated, and how they should be used to determine the best perceived action. We have examined the first and the last questions in [Brafman and Tennenholtz, 1994b] (although not in the context of prediction). In this paper, we wish to concentrate on the second question, that of modeling the agent's belief change.

The reader should not confuse this last question with another important question which has received much attention: how should an agent change its beliefs given new information? (For example, see [Levesque, 1984; Friedman and Halpern, 1994; Katsuno and Mendelzon, 1991; del Val and Shoham, 1993; Alchourron et al., 1985; Goldszmidt and Pearl, 1992].) In our work we are concerned with externally modeling the changes occurring within the agent rather than saying how that agent should update its beliefs. Although the agent may be implementing one of the above belief revision methods, it is quite possible that it has no explicit representation of beliefs and that its "idea" of update is some complex assembler routine.

Our discussion of the problem of prediction will be in the context of the framework of mental-level modeling and belief ascription investigated in [Brafman and Tennenholtz, 1994b]. This framework is reviewed in Section 2. In Section 3 we discuss the problem of prediction. We suggest a three-step process for prediction and highlight the importance of the ascription of a belief change operator to this process.
In Section 4 we introduce a particular belief change operator and show that it has desirable properties from a decision-theoretic perspective. Moreover, we show that under minimal assumptions, this belief change operator can always be ascribed to an agent.
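The update and prediction steps of the three-step process can be sketched schematically as follows. This is an illustration under assumed simplifications, not the operator introduced in Section 4: beliefs are taken to be probability distributions over worlds, the update is a simple conditioning-style restriction, and the decision criterion is expected-utility maximization; all names are hypothetical.

```python
# Schematic sketch of the update-then-predict steps of the prediction
# process. Assumptions (not from the paper): beliefs are dicts mapping
# worlds to probabilities, and updating conditions on the set of worlds
# consistent with a new observation.

def update(belief, consistent_worlds):
    """Restrict `belief` to the worlds consistent with a new observation
    and renormalize (a conditioning-style update; the ascribed operator
    of Section 4 need not coincide with it)."""
    kept = {w: p for w, p in belief.items() if w in consistent_worlds}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

def predict_next_action(belief, actions, utility):
    """Predict the action an agent in this mental state perceives as best."""
    return max(actions,
               key=lambda a: sum(p * utility(a, w)
                                 for w, p in belief.items()))
```

For example, an agent initially uncertain between two worlds who observes evidence ruling one of them out would then be predicted to take whichever action is best in the remaining world.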