Modern complex games and simulations pose many challenges for an intelligent agent, including partial observability, continuous time and effects, hostile opponents, and exogenous events. We present ARTUE (Autonomous Response to Unexpected Events), a domain-independent autonomous agent that dynamically reasons about what goals to pursue in response to …
While several researchers have applied case-based reasoning techniques to games, only Ponsen and Spronck (2004) have addressed the challenging problem of learning to win real-time games. Focusing on WARGUS, they report good results for a genetic algorithm that searches in plan space, and for a weighting algorithm (dynamic scripting) that biases subplan …
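The dynamic-scripting idea referenced above can be sketched briefly: each rule carries a weight, scripts are assembled by weight-biased selection, and weights of used rules are nudged toward episode outcomes while the total weight is held constant. This is a minimal illustration of the general technique, not code from the cited work; all names and parameters are illustrative.

```python
import random

def select_script(rules, weights, size):
    """Roulette-wheel selection of `size` rules, biased by weight."""
    return random.choices(rules, weights=weights, k=size)

def update_weights(weights, used, reward, lr=0.1, w_min=0.1, w_max=10.0):
    """Shift weight toward rules used in a won episode (reward > 0),
    away from them in a lost one, then redistribute the difference
    over unused rules so the total weight stays constant."""
    new = list(weights)
    for i in used:
        new[i] = min(w_max, max(w_min, new[i] + lr * reward))
    others = [i for i in range(len(new)) if i not in used]
    if others:
        delta = (sum(weights) - sum(new)) / len(others)
        for i in others:
            new[i] = min(w_max, max(w_min, new[i] + delta))
    return new
```

After each game, `update_weights` is called with the indices of the rules the script actually fired, so frequently successful rules come to dominate selection.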
Planning in dynamic continuous environments requires reasoning about nonlinear continuous effects, which previous Hierarchical Task Network (HTN) planners do not support. In this paper, we extend an existing HTN planner with a new state projection algorithm. To our knowledge, this is the first HTN planner that can reason about nonlinear continuous effects. …
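To make "state projection under nonlinear continuous effects" concrete: the planner must predict numeric fluent values at a future time when their rates of change depend nonlinearly on the current state. A hypothetical sketch using simple Euler integration (the paper's actual projection algorithm is not shown here; `project`, `effects`, and the step size are illustrative assumptions):

```python
def project(state, effects, t0, t1, dt=0.01):
    """Forward-project numeric fluents from time t0 to t1.
    `effects` maps each fluent name to a rate function of the whole
    state, so rates may couple fluents nonlinearly; the trajectory is
    approximated with fixed-step Euler integration."""
    s = dict(state)
    t = t0
    while t < t1:
        h = min(dt, t1 - t)
        # evaluate all rates on the same snapshot before updating
        rates = {f: rate(s) for f, rate in effects.items()}
        for f, r in rates.items():
            s[f] += r * h
        t += h
    return s
```

For example, a fluent with rate proportional to its own value (exponential decay, a nonlinear-in-time trajectory) projects to roughly e^-1 of its initial value after one time unit.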
Dynamic changes in complex, real-time environments, such as modern video games, can violate an agent's expectations. We describe a system that responds competently to such violations by changing its own goals, using an algorithm based on a conceptual model for goal driven autonomy. We describe this model, clarify when such behavior is beneficial, and …
To operate autonomously in complex environments, an agent must monitor its environment and determine how to respond to new situations. To be considered intelligent, an agent should select actions in pursuit of its goals, and adapt accordingly when its goals need revision. However, most agents assume that their goals are given to them; they cannot recognize …
We consider the problem of automated planning and control for an execution agent operating in partially observable environments with deterministic exogenous events. We describe a new formalism and a new algorithm, DISCOVERHISTORY, that enable our agent, DHAgent, to proactively expand its knowledge of the environment during execution by forming …
Although in theory opponent modeling can be useful in any adversarial domain, in practice it is both difficult to do accurately and to use effectively to improve game play. In this paper, we present an approach for online opponent modeling and illustrate how it can be used to improve offensive performance in the Rush 2008 football game. In football, team …
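A common baseline for online opponent modeling of the kind described above is a count-based predictor: track how often the opponent chooses each play in each observed context, predict the mode, and select a counter. This is a minimal sketch of that baseline under my own assumptions, not the paper's method; the class and context keys are hypothetical.

```python
from collections import Counter, defaultdict

class OpponentModel:
    """Count-based online model: tally the opponent's play choices per
    observed context and predict the most frequent one."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, context, play):
        """Record one observed (context, play) pair during the game."""
        self.counts[context][play] += 1

    def predict(self, context, default=None):
        """Return the modal play for this context, or `default` if the
        context has never been observed."""
        c = self.counts[context]
        return c.most_common(1)[0][0] if c else default
```

Because the model updates after every down, its predictions sharpen online as the game progresses, which is what makes it usable for in-game adaptation.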
Although several researchers have integrated methods for reinforcement learning (RL) with case-based reasoning (CBR) to model continuous action spaces, existing integrations typically employ discrete approximations of these models. This limits the set of actions that can be modeled and may lead to suboptimal solutions. We introduce the Continuous Action …
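One way a CBR/RL integration can avoid discretizing the action space is to store experienced (state, action, value) cases and estimate values for novel continuous actions by distance-weighted retrieval over the nearest cases. The sketch below illustrates that general idea only; the class name, distance function, and weighting scheme are my assumptions, not the model introduced in the paper.

```python
import math

class ContinuousQ:
    """k-NN sketch: store (state, action, value) cases and estimate a
    value for any continuous (state, action) query by distance-weighted
    averaging over the k nearest cases, with no fixed action grid."""
    def __init__(self, k=3):
        self.cases = []   # list of (state_tuple, action, value)
        self.k = k

    def add(self, state, action, value):
        self.cases.append((state, action, value))

    def estimate(self, state, action):
        def dist(case):
            s, a, _ = case
            return math.dist(s, state) + abs(a - action)
        near = sorted(self.cases, key=dist)[:self.k]
        if not near:
            return 0.0
        ws = [1.0 / (dist(c) + 1e-6) for c in near]
        return sum(w * c[2] for w, c in zip(ws, near)) / sum(ws)
```

Querying at or near a stored case recovers roughly that case's value, while intermediate actions get interpolated estimates rather than being snapped to the nearest bin.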
In this paper, we investigate the hypothesis that plan recognition can significantly improve the performance of a case-based reinforcement learner in an adversarial action selection task. Our environment is a simplification of an American football game. The performance task is to control the behavior of a quarterback in a pass play, where the goal is to …