When people reason about the behavior of others, they often find that their predictions and explanations involve attributing emotions to those about whom they are reasoning. In this paper we discuss the internal models and representations we have used to make machine reasoning of this kind possible. In doing so, we briefly sketch a simulated-world program called the Affective Reasoner. Elsewhere, we have discussed the Affective Reasoner's mechanisms for generating emotions in response to situations that impinge on an agent's concerns, for generating actions in response to emotions, and for reasoning about emotion episodes from cases [Elliott, 1992]. Here we give details about how agents in the Affective Reasoner model each other's points of view, both for reasoning about one another's emotion-based actions and for "having" emotions about the fortunes (good or bad) of others (i.e., feeling sorry for someone, feeling happy for them, resenting their good fortune, or gloating over their bad fortune). To do this, agents maintain Concerns-of-Others representations (COOs) to establish points of view for other agents, and use cases to reason about those agents' expressions of emotions.
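The four fortunes-of-others emotions named above can be seen as the cross product of the other agent's fortune (good or bad) and the observing agent's disposition toward that agent (liked or disliked). The following sketch illustrates that classification; it is an assumption-laden illustration, not the Affective Reasoner's actual representation, and all identifiers (`fortunes_of_others_emotion`, `FORTUNES_OF_OTHERS`) are hypothetical:

```python
# Hypothetical sketch: classifying an observer's emotion about another
# agent's fortune, based on the four fortunes-of-others emotions named
# in the text. Names are illustrative, not the Affective Reasoner's API.

FORTUNES_OF_OTHERS = {
    # (other agent's fortune, observer's disposition toward them) -> emotion
    ("good", "liked"): "happy-for",
    ("bad", "liked"): "sorry-for",
    ("good", "disliked"): "resentment",
    ("bad", "disliked"): "gloating",
}

def fortunes_of_others_emotion(fortune: str, disposition: str) -> str:
    """Return the emotion an observer has about another agent's fortune."""
    return FORTUNES_OF_OTHERS[(fortune, disposition)]

print(fortunes_of_others_emotion("bad", "liked"))      # sorry-for
print(fortunes_of_others_emotion("good", "disliked"))  # resentment
```

In the paper's terms, the "disposition" axis is the part an agent would read off its Concerns-of-Others representation for the other agent, while the "fortune" axis comes from appraising the situation from that agent's point of view.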