Corpus ID: 198179734

Multilevel Monte-Carlo for Solving POMDPs Online

Marcus Hörger, H. Kurniawati, and A. Elfes
Planning under partial observability is essential for autonomous robots. A principled way to address such planning problems is the Partially Observable Markov Decision Process (POMDP). Although solving POMDPs is computationally intractable, substantial advancements have been achieved in developing approximate POMDP solvers in the past two decades. However, computing robust solutions for systems with complex dynamics remains challenging. Most on-line solvers rely on a large number of forward…
An On-Line POMDP Solver for Continuous Observation Spaces
A new on-line POMDP solver, called Lazy Belief Extraction for Continuous POMDPs (LABECOP), that combines methods from Monte-Carlo tree search and particle filtering to construct a policy representation which does not require discretised observation spaces and avoids limiting the number of observations considered during planning.
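To illustrate the particle-filtering side of such solvers, here is a minimal bootstrap particle-filter belief update in Python. All names and the toy models are hypothetical illustrations of the general technique, not LABECOP's actual API:

```python
import random

def particle_filter_update(particles, action, observation,
                           transition, obs_likelihood, n=None):
    """One bootstrap particle-filter step: propagate each state particle
    through a (user-supplied) transition model, weight each result by the
    likelihood of the received observation, then resample n particles."""
    n = n or len(particles)
    propagated = [transition(s, action) for s in particles]
    weights = [obs_likelihood(s, action, observation) for s in propagated]
    total = sum(weights)
    if total == 0:
        # Observation is inconsistent with every particle: fall back to prior.
        return propagated
    probs = [w / total for w in weights]
    return random.choices(propagated, weights=probs, k=n)

# Toy two-state example: the state never changes, and the observation
# matches the true state 85% of the time.
random.seed(0)
prior = ["left"] * 50 + ["right"] * 50
posterior = particle_filter_update(
    prior, "listen", "hear-left",
    transition=lambda s, a: s,
    obs_likelihood=lambda s, a, o: 0.85 if o == "hear-" + s else 0.15)
```

After observing "hear-left", the resampled belief concentrates on the "left" state, as expected from the 0.85/0.15 likelihood ratio.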
Partially Observable Markov Decision Processes (POMDPs) and Robotics
A review of POMDPs is presented, emphasizing computational issues that have hindered their practicality in robotics and ideas in sampling-based solvers that have alleviated such difficulties, together with lessons learned from applying POMDPs to physical robots.
Online POMDP Planning via Simplification
A novel algorithmic approach, Simplified Information Theoretic Belief Space Planning (SITHBSP), which aims to speed up POMDP planning with belief-dependent rewards, without compromising the solution's accuracy, by mathematically relating the simplified elements of the problem to the corresponding counterparts of the original problem.
Simplified Belief-Dependent Reward MCTS Planning with Guaranteed Tree Consistency
This paper presents Simplified Information-Theoretic Particle Filter Tree (SITH-PFT), a novel variant of the MCTS algorithm that considers information-theoretic rewards but avoids the need to calculate them completely.
A Review of Current Approaches for UAV Autonomous Mission Planning for Mars Biosignatures Detection
Recognising the importance of astrobiology in Mars exploration, this review highlights progress in autonomous biosignature detection capabilities trialed on Earth, and discusses the objectives and challenges of future missions to Mars.


A Software Framework for Planning Under Partial Observability
The proposed OPPT framework provides an easy-to-use plug-in architecture with interfaces to the high-fidelity simulator Gazebo that, in conjunction with user-friendly configuration files, allows users to specify POMDP models for a standard class of robot motion planning problems under partial observability with no additional coding effort.
An Online POMDP Solver for Uncertainty Planning in Dynamic Environment
A new online POMDP solver, called Adaptive Belief Tree (ABT), that can reuse and improve an existing solution, updates the solution as needed whenever the POMDP model changes, and converges in probability to the optimal solution of the current POMDP model.
TAPIR: A software toolkit for approximating and adapting POMDP solutions online
The need for a constant, fully known POMDP model is averted by implementing the recent Adaptive Belief Tree (ABT) algorithm, while user-friendliness is ensured by a well-documented modular design, which also includes interfaces for the commonly used Robot Operating System (ROS) framework and the high-fidelity simulator V-REP.
Monte-Carlo Planning in Large POMDPs
POMCP is the first general-purpose planner to achieve high performance in such large and unfactored POMDPs as 10×10 battleship and partially observable PacMan, with approximately 10^18 and 10^56 states respectively.
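POMCP's tree search is built on a UCB1-style bandit rule for choosing which action to expand at a belief node. The following sketch shows that selection rule under illustrative assumptions (the data layout and names are hypothetical, not POMCP's actual implementation):

```python
import math

def ucb1_select(stats, c=1.0):
    """Pick an action at a tree node by the UCB1 rule: the action's mean
    estimated value plus an exploration bonus that shrinks as the action
    is visited more. `stats` maps action -> (visit_count, mean_value)."""
    total_visits = sum(n for n, _ in stats.values())

    def score(item):
        n, value = item[1]
        if n == 0:
            return float("inf")  # always try unvisited actions first
        return value + c * math.sqrt(math.log(total_visits) / n)

    return max(stats.items(), key=score)[0]
```

With one unvisited action the rule explores it immediately; otherwise it trades off the running value estimate against the visit count, which is what drives the selective deepening of the search tree.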
An online and approximate solver for POMDPs with continuous action space
This paper proposes General Pattern Search in Adaptive Belief Tree (GPS-ABT), an approximate, online POMDP solver for problems with continuous action spaces; results on a box-pushing and an extended Tag benchmark problem are promising.
SARSOP: Efficient Point-Based POMDP Planning by Approximating Optimally Reachable Belief Spaces
This work has developed a new point-based POMDP algorithm that exploits the notion of optimally reachable belief spaces to improve computational efficiency and substantially outperformed one of the fastest existing point-based algorithms.
Importance sampling for online planning under uncertainty
IS-DESPOT is presented, which introduces importance sampling to DESPOT, a state-of-the-art sampling-based POMDP algorithm for planning under uncertainty; it is demonstrated empirically that importance sampling significantly improves the performance of online POMDP planning for suitable tasks.
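The general technique IS-DESPOT applies can be sketched in isolation: estimate an expectation under a target distribution by sampling from an easier proposal distribution and reweighting. This is a minimal self-normalised importance-sampling estimator, not the paper's actual estimator; all names are illustrative:

```python
import random

def importance_estimate(f, target_pdf, proposal_pdf, proposal_sample, n=10000):
    """Estimate E_p[f(x)] by drawing x ~ q and weighting each sample by
    p(x)/q(x), then normalising by the total weight."""
    xs = [proposal_sample() for _ in range(n)]
    ws = [target_pdf(x) / proposal_pdf(x) for x in xs]
    total = sum(ws)
    return sum(w * f(x) for w, x in zip(ws, xs)) / total

# Toy check: mean of Uniform(0, 1), estimated with a Uniform(0, 2) proposal.
random.seed(1)
est = importance_estimate(
    f=lambda x: x,
    target_pdf=lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0,
    proposal_pdf=lambda x: 0.5,                 # density of Uniform(0, 2)
    proposal_sample=lambda: random.uniform(0.0, 2.0))
```

The estimate converges to 0.5; choosing a proposal that concentrates samples where they matter (here, rare or important scenarios during planning) is what reduces the estimator's variance.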
PUMA: Planning Under Uncertainty with Macro-Actions
This paper presents a POMDP algorithm for planning under uncertainty with macro-actions (PUMA) that automatically constructs and evaluates open-loop macro-actions within forward-search planning, and shows how to incrementally refine the plan over time, resulting in an anytime algorithm that provably converges to an ε-optimal policy.
Online Algorithms for POMDPs with Continuous State, Action, and Observation Spaces
Two new algorithms, POMCPOW and PFT-DPW, are proposed and evaluated that overcome the limitations of unweighted particle representations in continuous spaces by using weighted particle filtering; simulation results show that these modifications allow the algorithms to succeed where previous approaches fail.
An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach
QBASE uses quantile statistics to adaptively evaluate a small subset of the action space without sacrificing the quality of the generated decision strategies; experiments indicate that it can generate substantially better strategies than a state-of-the-art method.