Kennard Laviers

Although in theory opponent modeling can be useful in any adversarial domain, in practice it is both difficult to do accurately and to use effectively to improve game play. In this paper, we present an approach for online opponent modeling and illustrate how it can be used to improve offensive performance in the Rush 2008 football game. In football, team …
One drawback with using plan recognition in adversarial games is that often players must commit to a plan before it is possible to infer the opponent's intentions. In such cases, it is valuable to couple plan recognition with plan repair, particularly in multi-agent domains where complete replanning is not computationally feasible. This paper presents a …
Plays are sequences of actions to be undertaken by a collection of agents, or teammates. The success of a play depends on a number of factors including, perhaps most importantly, the opponent's play. In this paper, we present an approach for online opponent modeling and illustrate how it can be used to improve offensive performance in the Rush 2008 …
An issue with learning effective policies in multi-agent adversarial games is that the size of the search space can be prohibitively large when the actions of both teammates and opponents are considered simultaneously. Opponent modeling, predicting an opponent's actions in advance of execution, is one approach for selecting actions in adversarial settings, …
This paper addresses the problem of identifying player coordination patterns in multi-player adversarial games. In the Rush 2008 football simulator, we observe that each play relies on the efforts of different subgroups within the main team to score team touchdowns. We present a method to automatically identify these subgroups from historical play data …
In physical domains (military or athletic), team behaviors often have an observable spatio-temporal structure, defined by the relative physical positions of team members over time. In this paper, we demonstrate that this structure can be exploited to recognize football plays in the Rush 2008 football simulator. Although events in the simulator are …
One issue with learning effective policies in multi-agent adversarial games is that the size of the search space can be prohibitively large when the actions of all the players are considered simultaneously. In most team games, players need to coordinate to accomplish tasks, either in a preplanned or emergent manner. An effective team policy must generate …