Arguably the dominant paradigm in agent development is the Belief-Desire-Intention (BDI) model [?]. In BDI-based agent programming languages, the behaviour of an agent is specified in terms of beliefs, goals, and plans. Beliefs represent the agent’s information about the environment (and itself). Goals represent desired states of the environment that the agent is trying to bring about. Plans are the means by which the agent can modify the environment in order to achieve its goals. Plans may include sub-goals, and each sub-goal is in turn achieved by some other plan. The set of plans is pre-defined by the agent developer and, together with the agent’s initial beliefs and goals, forms the program of the agent.

The execution of a BDI agent consists of a repeated cycle of: updating the agent’s beliefs and goals to reflect the current state of the environment, selecting plans to achieve the agent’s current (sub)goals based on its current beliefs, and finally executing one or more steps of the agent’s currently intended plans. For each top-level goal, the agent selects a plan, which forms the root of an intention, and commences executing the plan. If the next step in an intention is a subgoal, a sub-plan is selected to achieve the subgoal and pushed onto the intention, the steps in the sub-plan are then executed, and so on. This process of repeatedly choosing and executing plans is referred to as the agent’s deliberation cycle. In many BDI agent architectures, the execution of the plans comprising the agent’s intentions is interleaved, e.g., when execution of a plan in one intention reaches a subgoal, the agent may switch to executing a plan in a different intention. Interactions between intentions may result in conflicts, e.g., where the execution of a plan in one intention makes the execution of a plan in the same or another intention impossible, or renders a goal unachievable.
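The deliberation cycle described above can be sketched as follows. This is a minimal illustration only — the class names, plan representation, and round-robin interleaving policy are our own assumptions for exposition, not the design of any particular BDI platform:

```python
class Subgoal:
    """A plan step that is achieved by selecting and pushing a sub-plan."""
    def __init__(self, name):
        self.name = name

class Agent:
    def __init__(self, beliefs, plan_library):
        self.beliefs = beliefs        # the agent's information about the world
        self.plans = plan_library     # goal name -> plan (a list of steps)
        self.intentions = []          # each intention is a stack of [plan, step-index]

    def adopt_goal(self, goal):
        # The plan selected for a top-level goal becomes the root of a new intention.
        self.intentions.append([[self.plans[goal], 0]])

    def step(self, intention):
        plan, i = intention[-1]
        if i >= len(plan):            # plan finished: pop it off the stack
            intention.pop()
            return
        intention[-1][1] += 1
        step = plan[i]
        if isinstance(step, Subgoal):
            # Subgoal: select a sub-plan and push it onto the intention.
            intention.append([self.plans[step.name], 0])
        else:
            step(self.beliefs)        # primitive action: update beliefs

    def run(self):
        # Naive round-robin interleaving: one step of each intention per cycle.
        while any(self.intentions):
            for intention in self.intentions:
                if intention:
                    self.step(intention)

# Hypothetical plan library: "make_tea" includes a subgoal achieved by "get_cup".
plans = {
    "make_tea": [lambda b: b.add("kettle_on"), Subgoal("get_cup"),
                 lambda b: b.add("tea_made")],
    "get_cup":  [lambda b: b.add("cup_ready")],
}
agent = Agent(set(), plans)
agent.adopt_goal("make_tea")
agent.run()
```

A fixed round-robin policy like the one in `run` is exactly where conflicts can arise: it commits to an interleaving without considering how the intentions interact.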
The task of anticipating and avoiding such conflicts is generally left to the agent developer. However, the run-time plan selection characteristic of BDI agents makes it difficult to anticipate all the ways in which an agent program may be executed, and harder still to ensure that conflicts cannot arise. Ideally, the agent itself should be able to reason about possible conflicts between its intentions, and schedule their execution so as to avoid conflicts. In this paper, we present a novel approach to intention scheduling for BDI agents based on Single-Player Monte-Carlo Tree Search (SP-MCTS) that avoids conflicts between intentions. We evaluate the performance of our approach and compare it to a previous approach to intention scheduling based on summary information. Our preliminary experimental results indicate that our approach performs at least as well as the approach based on summary information.
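To give a rough sense of the kind of search involved, the following sketch applies a generic SP-MCTS loop — using the variance-augmented selection formula of Schadd et al.'s SP-MCTS — to a toy two-intention interleaving problem. All function names, the constants `c` and `d`, and the toy domain are our own assumptions for illustration; this is not the implementation evaluated in the paper:

```python
import math, random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children = []
        self.visits, self.total, self.sq_total = 0, 0.0, 0.0

    def untried(self, actions):
        tried = {c.action for c in self.children}
        return [a for a in actions(self.state) if a not in tried]

def sp_uct(parent, child, c=1.0, d=100.0):
    mean = child.total / child.visits
    explore = c * math.sqrt(2 * math.log(parent.visits) / child.visits)
    # SP-MCTS adds a variance-based third term so that rarely tried or
    # high-variance lines keep being explored (d inflates the estimate
    # for nodes with few visits).
    variance = (child.sq_total - child.visits * mean ** 2 + d) / child.visits
    return mean + explore + math.sqrt(max(variance, 0.0))

def search(root_state, actions, step, reward, iters=500, rng=random.Random(0)):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.children and not node.untried(actions):
            node = max(node.children, key=lambda ch: sp_uct(node, ch))
        # 2. Expansion: add one untried child, if any.
        untried = node.untried(actions)
        if untried:
            a = rng.choice(untried)
            node.children.append(Node(step(node.state, a), node, a))
            node = node.children[-1]
        # 3. Simulation: random playout to a terminal state.
        s = node.state
        while actions(s):
            s = step(s, rng.choice(actions(s)))
        r = reward(s)
        # 4. Backpropagation.
        while node:
            node.visits += 1
            node.total += r
            node.sq_total += r * r
            node = node.parent
    # Recommend the most-visited first action.
    return max(root.children, key=lambda ch: ch.visits).action

# Toy domain: two one-step intentions; executing intention 0 first destroys
# a resource intention 1 needs, so only the order 1-then-0 avoids a conflict.
# State: (done0, done1, conflicted).
def actions(s):
    return [i for i in (0, 1) if not s[i]]

def step(s, a):
    done0, done1, conflicted = s
    if a == 0:
        return (True, done1, conflicted or not done1)
    return (done0, True, conflicted)

def reward(s):
    return 0.0 if s[2] else 1.0

best_first = search((False, False, False), actions, step, reward)
```

In this toy domain the search concentrates its visits on first executing intention 1, the only ordering that avoids the conflict — the scheduling decision that, in the approach presented here, the agent makes for itself rather than the developer.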