Publications
Incremental plan aggregation for generating policies in MDPs
TLDR
Describes a way to generate policies in MDPs by determinizing the given MDP model into a classical planning problem, and by running sequential Monte-Carlo simulations of the partial policies before execution to assess the probability that a policy will require replanning during execution.
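As a rough illustration of the idea (not the paper's actual algorithm or model), the sketch below determinizes a toy MDP by keeping each action's most likely outcome, plans on the determinization, and then runs Monte-Carlo simulations of the resulting partial policy in the original stochastic model to estimate how often execution would leave the policy and trigger replanning. All state names, the toy model, and the planner stand-in are illustrative.

```python
import random

# Toy MDP: for each (state, action), a distribution over successor states.
# Purely illustrative; the paper works on factored probabilistic planning models.
TRANSITIONS = {
    ("s0", "go"): [("s1", 0.8), ("s_off", 0.2)],
    ("s1", "go"): [("goal", 0.9), ("s_off", 0.1)],
}
GOAL = "goal"

def determinize(transitions):
    """Most-likely-outcome determinization: keep the single most probable successor."""
    return {sa: max(outcomes, key=lambda o: o[1])[0] for sa, outcomes in transitions.items()}

def plan_on_determinization(det, start, goal):
    """Tiny forward walk on the determinized model; returns a partial policy state -> action."""
    policy, state, visited = {}, start, set()
    while state != goal and state not in visited:
        visited.add(state)
        actions = [a for (s, a) in det if s == state]
        if not actions:
            break                     # dead end in the determinization
        policy[state] = actions[0]    # stand-in for a classical (FF-style) planner call
        state = det[(state, policy[state])]
    return policy

def estimate_replan_probability(policy, transitions, start, goal, n_sims=10_000):
    """Monte-Carlo simulation of the partial policy in the *original* stochastic model."""
    replans = 0
    for _ in range(n_sims):
        state = start
        while state != goal:
            if state not in policy:   # execution left the policy: would trigger replanning
                replans += 1
                break
            outcomes = transitions[(state, policy[state])]
            state = random.choices([s for s, _ in outcomes], [p for _, p in outcomes])[0]
    return replans / n_sims

det = determinize(TRANSITIONS)
policy = plan_on_determinization(det, "s0", GOAL)
print("estimated replan probability:", estimate_replan_probability(policy, TRANSITIONS, "s0", GOAL))
```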
Path-Constrained Markov Decision Processes: bridging the gap between probabilistic model-checking and decision-theoretic planning
TLDR
Proposes a new theoretical model, named Path-Constrained Markov Decision Processes, which allows system designers to directly optimize, in a single design pass, safe policies whose possible executions are guaranteed to satisfy probabilistic constraints on their paths.
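Read as an optimization problem, the criterion has roughly the following shape (notation is illustrative, not the paper's):

```latex
% Sketch of a path-constrained criterion: maximize expected reward over
% policies whose executions satisfy each path property \varphi_i with
% at least probability \theta_i. Notation is illustrative.
\[
  \max_{\pi} \; \mathbb{E}^{\pi}\!\Big[\sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t)\Big]
  \quad \text{s.t.} \quad
  \Pr^{\pi}\big(\text{paths} \models \varphi_i\big) \;\ge\; \theta_i, \qquad i = 1,\dots,k .
\]
```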
Stochastic Safest and Shortest Path Problems
TLDR
This work introduces a more general and richer dual optimization criterion, which minimizes the average (undiscounted) cost of only the paths leading to the goal, among all policies that maximize the probability of reaching the goal.
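In symbols, the dual criterion reads roughly as follows (notation illustrative; the diamond denotes eventually reaching the goal set G):

```latex
% Dual criterion sketch: first keep only the policies maximizing the probability
% of reaching the goal G, then, among those, minimize the expected undiscounted
% cost of the goal-reaching paths. Notation is illustrative.
\[
  \Pi^{\star} = \arg\max_{\pi} \Pr{}^{\pi}(\Diamond G), \qquad
  \pi^{\star} \in \arg\min_{\pi \in \Pi^{\star}}
    \mathbb{E}^{\pi}\!\Big[\textstyle\sum_{t=0}^{T_G - 1} c(s_t, a_t) \;\Big|\; \Diamond G \Big],
\]
% where $T_G$ is the (random) time at which the goal is first reached.
```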
Efficient solutions for Stochastic Shortest Path Problems with Dead Ends
TLDR
This work studies a new, arguably more natural optimization criterion for these problems, Min-Cost given MaxProb (MCMP), which yields the minimum expected-cost policy among those with maximum success probability and accurately accounts for the cost and risk of reaching dead ends.
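A two-policy toy example makes the ordering concrete; the brute-force check below only shows what "minimum cost among maximum-probability policies" selects, not how MCMP is actually computed. All names and numbers are illustrative.

```python
# Toy one-step problem with a dead end, used only to illustrate the MCMP ordering.
# Each "policy" is a single action from the start state.
policies = {
    "safe":  {"p_goal": 1.0, "cost_if_goal": 5.0},   # always reaches the goal, expensive
    "risky": {"p_goal": 0.9, "cost_if_goal": 1.0},   # cheap, but may hit a dead end
}

# Step 1: keep only the policies with maximum probability of reaching the goal.
max_p = max(p["p_goal"] for p in policies.values())
candidates = {name: p for name, p in policies.items() if p["p_goal"] == max_p}

# Step 2: among those, pick the one with minimum expected cost of the successful runs.
best = min(candidates, key=lambda name: candidates[name]["cost_if_goal"])
print(best)  # -> "safe": MCMP prefers certain success over a cheaper but riskier policy
```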
Qualitative Possibilistic Mixed-Observable MDPs
TLDR
Experimental results show that this possibilistic version of Mixed-Observable MDPs outperforms the probabilistic POMDPs commonly used in robotics on a target recognition problem where the agent's observations are imprecise.
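For flavour, the sketch below shows one qualitative possibilistic belief revision step, where the sum-product of a Bayes filter is replaced by max-min and the belief is renormalized so that the best-supported state is fully possible. This is a generic illustration of the possibilistic setting, not the paper's exact model or update; all names and values are assumptions.

```python
def possibilistic_update(belief, transition, observation_model, action, obs):
    """Qualitative possibilistic belief revision sketch: max-min propagation through
    the transition possibilities, conjunction (min) with the observation possibility,
    then renormalization so the most possible state gets degree 1. Illustrative only."""
    states = list(belief)
    # Prediction step: sum-product is replaced by max-min.
    predicted = {
        s2: max(min(belief[s1], transition[(s1, action)].get(s2, 0.0)) for s1 in states)
        for s2 in states
    }
    # Conditioning on the (possibly imprecise) observation.
    conditioned = {s2: min(predicted[s2], observation_model[s2].get(obs, 0.0)) for s2 in states}
    top = max(conditioned.values())
    if top == 0:
        return conditioned
    # Qualitative normalization: the best-supported state becomes fully possible.
    return {s2: 1.0 if conditioned[s2] == top else conditioned[s2] for s2 in states}

# Tiny demo with two states and one action/observation (values are illustrative).
belief = {"target": 0.3, "no_target": 1.0}
transition = {("target", "look"): {"target": 1.0}, ("no_target", "look"): {"no_target": 1.0}}
obs_model = {"target": {"blob": 1.0}, "no_target": {"blob": 0.4}}
print(possibilistic_update(belief, transition, obs_model, "look", "blob"))
```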
RFF : A Robust , FF-Based MDP Planning Algorithm for Generating Policies with Low Probability of Failure
Over the years, researchers have developed many efficient techniques, such as the planners FF (Hoffmann and Nebel 2001), LPG (Gerevini, Saetti, and Serina 2003), SATPLAN (Kautz, Selman, and Hoffmann …
POMDP-based online target detection and recognition for autonomous UAVs
TLDR
Experimental results are presented which demonstrate that Artificial Intelligence techniques such as POMDP planning can be successfully applied to automatically control perception and mission actions hand in hand for complex time-constrained UAV missions.
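The perception side of such missions rests on the standard POMDP belief update; the generic sketch below is not tied to the paper's UAV models, and all state, action, and observation names are illustrative.

```python
def belief_update(belief, T, O, action, obs):
    """Standard POMDP Bayes filter: b'(s') is proportional to
    O(o | s', a) * sum_s T(s' | s, a) * b(s).
    T[(s, a)] and O[(s', a)] are dictionaries of probabilities."""
    new_belief = {}
    for s2 in belief:
        pred = sum(belief[s] * T[(s, action)].get(s2, 0.0) for s in belief)
        new_belief[s2] = O[(s2, action)].get(obs, 0.0) * pred
    norm = sum(new_belief.values())
    return {s: v / norm for s, v in new_belief.items()} if norm > 0 else belief

# Toy two-state example (car present / absent) with a noisy detector.
b = {"car": 0.5, "no_car": 0.5}
T = {("car", "observe"): {"car": 1.0}, ("no_car", "observe"): {"no_car": 1.0}}
O = {("car", "observe"): {"detect": 0.9, "miss": 0.1},
     ("no_car", "observe"): {"detect": 0.2, "miss": 0.8}}
print(belief_update(b, T, O, "observe", "detect"))  # belief shifts toward "car"
```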
An Online Replanning Approach for Crop Fields Mapping with Autonomous UAVs
TLDR
This paper uses a Markov Random Field framework to represent knowledge about the uncertain map and its quality in order to compute an optimised pest-sampling policy, and compares favourably, on the problem of weed map construction, against an existing greedy approach, the only one working online.
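As a very loose illustration of the ingredients (uncertain cell beliefs, spatial smoothing, and choosing where to sample next), the toy sketch below uses a crude neighbour-averaging pass as a stand-in for MRF inference and picks the most uncertain cell. The paper's actual MRF model and optimised sampling policy are more involved; everything here is an assumption made for illustration.

```python
import math

def smooth(beliefs):
    """One crude neighbour-averaging pass over a grid of weed-presence probabilities,
    a stand-in for proper MRF inference (the paper uses a real Markov Random Field)."""
    rows, cols = len(beliefs), len(beliefs[0])
    out = [row[:] for row in beliefs]
    for i in range(rows):
        for j in range(cols):
            neigh = [beliefs[x][y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                     if 0 <= x < rows and 0 <= y < cols]
            out[i][j] = 0.5 * beliefs[i][j] + 0.5 * sum(neigh) / len(neigh)
    return out

def next_sample_cell(beliefs):
    """Pick the most uncertain cell (maximum Bernoulli entropy) as the next place to sample."""
    def entropy(p):
        return 0.0 if p in (0.0, 1.0) else -(p * math.log(p) + (1 - p) * math.log(1 - p))
    rows, cols = len(beliefs), len(beliefs[0])
    return max(((i, j) for i in range(rows) for j in range(cols)),
               key=lambda ij: entropy(beliefs[ij[0]][ij[1]]))

beliefs = [[0.9, 0.5, 0.1],
           [0.8, 0.5, 0.2],
           [0.9, 0.6, 0.1]]
print(next_sample_cell(smooth(beliefs)))
```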
Multi-Target Detection and Recognition by UAVs Using Online POMDPs
This paper tackles high-level decision-making techniques for robotic missions, which involve both active sensing and symbolic goal reaching, under uncertain probabilistic environments and strong time …
A generic framework for anytime execution-driven planning in robotics
TLDR
This work presents a new generic, anytime planning concept for modular robotic architectures, which manages multiple planning requests at a time, solved in the background, while simultaneously allowing reactive execution of planned actions.
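The concept can be illustrated with a toy background-planning loop: planning requests are queued and solved in a separate thread while the executor acts reactively, falling back to a default applicable action when a plan is not ready in time. The sketch is purely illustrative and does not reflect the framework's actual API; all names and timings are assumptions.

```python
import queue
import threading
import time

requests, plans = queue.Queue(), {}

def planner():
    """Background thread: solve queued planning requests one after the other."""
    while True:
        state = requests.get()
        if state is None:
            break
        time.sleep(0.1)                       # stand-in for an expensive planner call
        plans[state] = f"action_for_{state}"  # publish the result for the executor

def executor(states, deadline=0.05):
    """Reactive execution: queue several requests at once, act as soon as possible."""
    for state in states:
        requests.put(state)                   # several planning requests queued at once
    for state in states:
        waited = 0.0
        while state not in plans and waited < deadline:
            time.sleep(0.01)
            waited += 0.01                    # bounded reactive wait
        # Anytime behaviour: fall back to a default applicable action if planning is late.
        print("executing", plans.get(state, f"default_action_in_{state}"))

t = threading.Thread(target=planner, daemon=True)
t.start()
executor(["s0", "s1", "s2"])
requests.put(None)
t.join()
```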