Florian Geißer

We introduce the MDP-Evaluation Stopping Problem, the optimization problem faced by participants of the International Probabilistic Planning Competition 2014 who focus on their own performance. It can be constructed as a meta-MDP whose actions correspond to applying a policy to a base MDP, which is intractable in practice. Our theoretical …
Supporting state-dependent action costs in planning admits a more compact representation of many tasks. We generalize the additive heuristic h^add and compute it by embedding decision-diagram representations of action cost functions into the RPG. We give a theoretical evaluation and present an implementation of the generalized h^add heuristic. This allows …
Abstraction heuristics are a popular method to guide optimal search algorithms in classical planning. Cost partitionings make it possible to sum heuristic estimates admissibly by distributing action costs among the heuristics. We introduce state-dependent cost partitionings, which take context information of actions into account, and show that an optimal state-dependent cost …
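
To make the cost-partitioning idea above concrete, here is a minimal sketch that is not taken from the paper: the grid task, the two projections, and all function names are illustrative assumptions, and it shows only the state-independent special case. Each action's cost is split among two abstraction heuristics so that their estimates can be summed admissibly:

```python
import heapq

def shortest_path(succ, costs, start, goal):
    """Dijkstra on an abstract transition system under the given action costs."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, s = heapq.heappop(queue)
        if s == goal:
            return d
        if d > dist.get(s, float("inf")):
            continue  # stale queue entry
        for action, t in succ(s):
            nd = d + costs[action]
            if nd < dist.get(t, float("inf")):
                dist[t] = nd
                heapq.heappush(queue, (nd, t))
    return float("inf")

# Toy task (made up for illustration): move on a 4x4 grid from (0, 0) to (3, 3);
# action "right" costs 2, action "up" costs 3.
original_costs = {"right": 2.0, "up": 3.0}

# Two projections (abstractions): keep only the x coordinate, or only the y coordinate.
def succ_x(x):
    return [("right", x + 1)] if x < 3 else []

def succ_y(y):
    return [("up", y + 1)] if y < 3 else []

def h_x(costs):  # abstraction heuristic on the x-projection
    return shortest_path(succ_x, costs, start=0, goal=3)

def h_y(costs):  # abstraction heuristic on the y-projection
    return shortest_path(succ_y, costs, start=0, goal=3)

# A cost partitioning: each action's cost is split so the shares sum to the
# original cost (here a zero-one partitioning).
costs_for_x = {"right": 2.0, "up": 0.0}   # the x-projection pays for "right"
costs_for_y = {"right": 0.0, "up": 3.0}   # the y-projection pays for "up"

print(h_x(costs_for_x) + h_y(costs_for_y))            # 15: admissible sum
print(max(h_x(original_costs), h_y(original_costs)))  # 9: max under original costs
```

In this toy example the partitioned sum reaches the true optimal cost of 15 (three "right" steps plus three "up" steps), whereas taking the maximum of the two estimates under the original costs yields only 9.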
In General Game Playing, a player receives the rules of an unknown game and attempts to maximize his expected reward. Since 2011, the GDL-II rule language extension allows the formulation of nondeterministic and partially observable games. In this paper, we present an algorithm for such games, with a focus on the single-player case. Conceptually, at each …
General game playing is the research field concerned with playing many different kinds of games with a single AI. We present Eager Beaver, a general game player based on Propositional Networks with dynamic code generation and an enhanced Upper Confidence Bounds applied to Trees (UCT) algorithm as an approach to this problem. We ran an evaluation study …
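
As a point of reference for the UCT component mentioned above, the following is a minimal sketch of the standard UCB1 child-selection rule at the heart of UCT; it is the textbook formula rather than Eager Beaver's enhanced variant, and the Node class and the statistics are made up for illustration:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    visits: int = 0
    total_reward: float = 0.0
    children: dict = field(default_factory=dict)  # move -> Node

def uct_select(node, exploration=math.sqrt(2)):
    """Pick the child move maximizing average reward plus an exploration bonus."""
    def ucb1(child):
        if child.visits == 0:
            return float("inf")  # always try unvisited moves first
        exploit = child.total_reward / child.visits
        explore = exploration * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children.items(), key=lambda mc: ucb1(mc[1]))[0]

# Example: after some simulations, the statistics decide which move to explore next.
root = Node(visits=30, children={
    "a": Node(visits=20, total_reward=12.0),
    "b": Node(visits=10, total_reward=7.0),
})
print(uct_select(root))  # "b": lower visit count gives it a larger exploration bonus
```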