Florian Geißer

We introduce the MDP-Evaluation Stopping Problem, the optimization problem faced by participants of the International Probabilistic Planning Competition 2014 who focus on their own performance. It can be cast as a meta-MDP where actions correspond to the application of a policy on a base MDP, which is intractable in practice. Our theoretical …
Supporting state-dependent action costs in planning admits a more compact representation of many tasks. We generalize the additive heuristic h^add and compute it by embedding decision-diagram representations of action cost functions into the relaxed planning graph (RPG). We give a theoretical evaluation and present an implementation of the generalized h^add heuristic. This allows …
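To make the starting point of that generalization concrete, here is a minimal sketch of the classical additive heuristic h^add on a STRIPS-like task with constant action costs. This is an illustration only, not the paper's implementation: the toy facts and actions are invented, and the decision-diagram embedding for state-dependent costs is not shown.

```python
import math

def h_add(state, goal, actions):
    """Classical additive heuristic.

    actions: list of (preconditions, effects, cost), facts as frozensets.
    Fixed-point iteration: a fact's value is the cheapest achiever's cost
    plus the sum of the values of that achiever's preconditions.
    """
    h = {f: 0 for f in state}            # facts already true cost nothing
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for pre, eff, cost in actions:
            if all(p in h for p in pre): # action is reachable
                val = cost + sum(h[p] for p in pre)
                for f in eff:
                    if h.get(f, math.inf) > val:
                        h[f] = val
                        changed = True
    return sum(h.get(g, math.inf) for g in goal)

# Hypothetical toy task: reach {at_B, have_key} from {at_A}.
actions = [
    (frozenset({"at_A"}), frozenset({"at_B"}), 2),      # move A -> B
    (frozenset({"at_B"}), frozenset({"have_key"}), 1),  # pick up key
]
print(h_add({"at_A"}, {"at_B", "have_key"}, actions))   # prints 5
```

With state-dependent costs, the constant `cost` above becomes a function of the state, which is what the decision-diagram embedding in the paper handles.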
In General Game Playing, a player receives the rules of an unknown game and attempts to maximize its expected reward. Since 2011, the GDL-II rule language extension allows the formulation of nondeterministic and partially observable games. In this paper, we present an algorithm for such games, with a focus on the single-player case. Conceptually, at each …
Localization in dynamic environments is still a challenging problem in robotics, especially if rapid and large changes occur irregularly. Inspired by SLAM algorithms, our Bayesian approach to this so-called dynamic localization problem divides it into a localization problem and a mapping problem. To tackle the localization problem we use a …
Abstraction heuristics are a popular method to guide optimal search algorithms in classical planning. Cost partitionings allow heuristic estimates to be summed admissibly by distributing action costs among the heuristics. We introduce state-dependent cost partitionings, which take context information of actions into account, and show that an optimal state-dependent cost …
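The core idea of cost partitioning can be sketched on a toy task. The example below is invented for illustration (a two-variable task with actions incX and incY and two projection heuristics); it shows only the state-independent case that the paper generalizes. Each heuristic is evaluated under its own share of every action's cost, and because the shares sum to at most the real cost, the heuristic values may be added admissibly.

```python
# Hypothetical toy task: state (x, y) with x, y in {0, 1}, actions
# incX (real cost 3) and incY (real cost 4), goal (1, 1) from (0, 0),
# so the true optimal plan cost is 7.
#
# Projecting onto x makes incY a self-loop, and vice versa, so each
# projection heuristic is simply the share of "its" action's cost.
def h_x(cost_incX, cost_incY):
    return cost_incX   # cheapest goal distance in the x-projection

def h_y(cost_incX, cost_incY):
    return cost_incY   # cheapest goal distance in the y-projection

# Uniform cost partitioning: each heuristic gets half of every cost.
uniform = h_x(1.5, 2.0) + h_y(1.5, 2.0)   # 3.5, admissible but weak

# Better partitioning: give each projection the full cost of the one
# action it can actually use. The sum now reaches the true optimum.
tuned = h_x(3.0, 0.0) + h_y(0.0, 4.0)     # 7.0
print(uniform, tuned)
```

A state-dependent partitioning, as introduced in the paper, lets these shares additionally vary with the state in which an action is applied, which can only improve on the best state-independent split.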