It is generally accepted that an agent needs to build models of the other agents in its environment. The content of these models ranges from simple entries, such as an agent's capabilities, to more complex entries, such as its intentions, goals, and desires. A problem arises when the information stored in these models does not accurately match the actual properties of the agent being modelled; when this happens, the agent holding the model is said to be deluded about the agent being modelled. This paper presents our synthesis of ideas on the issue of agent delusion and reports results of work we have carried out on overcoming delusion within agent models and on preventing delusions from spreading to other agents' models when agents communicate, or gossip, with each other.
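The notion of delusion described above can be made concrete with a minimal sketch (all class and method names here are hypothetical illustrations, not the paper's actual formalism): an agent stores beliefs about other agents' capabilities, and a belief counts as deluded when it no longer matches the modelled agent's actual properties.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capabilities: set = field(default_factory=set)   # the agent's actual properties
    models: dict = field(default_factory=dict)       # beliefs about other agents

    def observe(self, other: "Agent") -> None:
        # Record a snapshot of another agent; this may later become outdated.
        self.models[other.name] = set(other.capabilities)

    def is_deluded_about(self, other: "Agent") -> bool:
        # Deluded if the stored model diverges from the modelled agent's
        # actual capabilities.
        believed = self.models.get(other.name, set())
        return believed != other.capabilities

a = Agent("a")
b = Agent("b", capabilities={"lift"})
a.observe(b)                 # a's model of b is accurate at this point
b.capabilities.add("weld")   # b changes after a's observation
print(a.is_deluded_about(b)) # a's model is now stale, i.e. a is deluded about b
```

In this toy setting delusion arises simply from staleness; the paper's concern extends to richer model entries (intentions, goals, desires) and to delusions propagated through communication, which a snapshot check like this cannot capture on its own.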