Probabilistic inference as a model of planned behavior
@article{Toussaint2009ProbabilisticIA,
  title   = {Probabilistic inference as a model of planned behavior},
  author  = {Marc Toussaint},
  journal = {K{\"u}nstliche Intell.},
  year    = {2009},
  volume  = {23},
  pages   = {23-29}
}
The problem of planning and goal-directed behavior has been addressed in computer science for many years, typically based on classical concepts like Bellman’s optimality principle, dynamic programming, or Reinforcement Learning methods – but is this the only way to address the problem? Recently there has been growing interest in using probabilistic inference methods for decision making and planning. What is promising about such approaches is that they naturally extend to distributed state representations and…
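The core construction behind this line of work (made explicit in the references below, e.g. "Probabilistic inference for solving (PO)MDPs") can be stated compactly. The following is a reconstruction under standard assumptions (rewards scaled to [0, 1] so they can be read as probabilities, and a geometric prior over a horizon variable T), not a quote from the paper:

```latex
% Planning as inference: a binary variable R ("reward received") and a
% geometric prior over the mixture horizon T (assumption: r_t \in [0,1]).
P(T) = (1-\gamma)\,\gamma^{T}, \qquad
P(R{=}1 \mid T;\pi) = \mathbb{E}[\, r_T \mid \pi \,]
\;\;\Longrightarrow\;\;
P(R{=}1;\pi) = (1-\gamma)\,\mathbb{E}\Big[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \,\Big|\, \pi \Big].
```

Maximizing the likelihood P(R=1; π) over policies is then equivalent to maximizing the discounted expected return, which is what allows standard inference machinery (message passing, EM) to be reused for planning.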
51 Citations
A Unified View of Algorithms for Path Planning Using Probabilistic Inference on Factor Graphs
- Computer Science, ArXiv
- 2021
This work starts by posing the path planning problem on a probabilistic factor graph, shows how the various algorithms translate into specific message composition rules, and provides a very general framework that includes the sum-product, max-product, dynamic programming, and mixed reward/entropy criteria-based algorithms; a minimal message-passing sketch follows this entry.
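As an illustration of the message-composition view in the entry above, here is a minimal, hedged sketch of max-product (Viterbi-style) message passing on a chain-structured factor graph for path planning. The function and variable names, and the toy instance, are our own rather than the paper's:

```python
# Illustrative sketch: max-product message passing for path planning.
# trans[s, s1] holds log-probabilities (or negative costs) of moving s -> s1.
import numpy as np

def max_product_plan(trans, start, goal, horizon):
    n = trans.shape[0]
    # Backward messages m[t, s]: best log-score of reaching the goal from
    # state s with (horizon - t) steps remaining.
    m = np.full((horizon + 1, n), -np.inf)
    m[horizon, goal] = 0.0
    back = np.zeros((horizon, n), dtype=int)  # argmax pointers
    for t in range(horizon - 1, -1, -1):
        scores = trans + m[t + 1][None, :]    # score of s -> s1 -> ... -> goal
        back[t] = scores.argmax(axis=1)
        m[t] = scores.max(axis=1)
    # Decode the MAP path by following the argmax pointers from the start.
    path, s = [start], start
    for t in range(horizon):
        s = int(back[t, s])
        path.append(s)
    return path

# Toy 3-state chain 0 -> 1 -> 2 with goal state 2.
logp = np.log(np.array([[0.1, 0.9, 0.0],
                        [0.0, 0.1, 0.9],
                        [0.0, 0.0, 1.0]]) + 1e-12)
print(max_product_plan(logp, start=0, goal=2, horizon=2))  # [0, 1, 2]
```

Swapping max/argmax for sum/normalize in the same loop yields the sum-product variant, which is the kind of message-composition exchange the entry above refers to.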
Successor Representation Active Inference
- Computer Science, ArXiv
- 2022
It is demonstrated that active inference successor representations have significant advantages over current active inference agents in terms of planning horizon and computational cost, and that the successor representation agent can generalize to changing reward functions such as variants of the expected free energy.
Planning and exploration in stochastic relational worlds
- Computer Science
- 2011
This thesis addresses planning and exploration in so-called stochastic relational worlds, which are characterized by two key attributes: they contain large numbers of objects whose properties and relationships can be manipulated, and the effects of actions are uncertain.
VIREL: A Variational Inference Framework for Reinforcement Learning
- Computer Science, NeurIPS
- 2019
VIREL is proposed, a novel, theoretically grounded probabilistic inference framework for RL that utilises a parametrised action-value function to summarise the future dynamics of the underlying MDP; it is shown that the actor-critic algorithm can be reduced to expectation-maximisation, with policy improvement equivalent to an E-step and policy evaluation to an M-step.
Solving Relational and First-Order Logical Markov Decision Processes: A Survey
- Computer Science, Reinforcement Learning
- 2012
This chapter surveys representations and techniques for Markov decision processes, reinforcement learning, and dynamic programming in worlds explicitly modeled in terms of objects and relations and discusses model-free – both value-based and policy-based – and model-based dynamic programming techniques.
A Multitask Representation Using Reusable Local Policy Templates
- Computer Science, AAAI Spring Symposium: Designing Intelligent Robots
- 2012
An approach to the multitask problem is presented that decomposes the domain into a set of capabilities based on local contexts; these resemble the options of hierarchical reinforcement learning, but provide robust behaviours capable of achieving some subgoal, with the associated guarantee of reaching at least a particular aspiration level of performance.
Problem Solving as Probabilistic Inference with Subgoaling: Explaining Human Successes and Pitfalls in the Tower of Hanoi
- Computer Science, PLoS Comput. Biol.
- 2016
This study suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits.
Path Planning Using Probability Tensor Flows
- Computer Science, IEEE Aerospace and Electronic Systems Magazine
- 2021
Tensor messages in the state-action space, propagated bi-directionally on a Markov chain, provide crucial information to guide the agent's decisions and to model the agent's motion in potentially complex grids that include goals and obstacles.
Dynamic Movement Primitives (DMPs) encode a desired movement trajectory in terms of the attractor
- Computer Science
- 2016
This work showcases how DMPs can be reformulated as a probabilistic linear dynamical system with control inputs, and shows how inference can be used to measure the likelihood that a given motion primitive is being executed successfully.
A Sufficient Statistic for Influence in Structured Multiagent Environments
- Computer Science, J. Artif. Intell. Res.
- 2021
This paper formalizes influence-based abstraction (IBA), which facilitates the elimination of latent state factors without any loss in value, for a very general class of problems described as factored partially observable stochastic games (fPOSGs).
References
Showing 1-10 of 44 references
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
- Computer Science, ICML
- 2006
An Expectation-Maximization algorithm for computing optimal policies is presented that directly optimizes the discounted expected future return for arbitrary reward functions, without assuming an ad hoc finite total time.
Probabilistic inference for solving (PO)MDPs
- Computer Science
- 2006
The approach is based on an equivalence between maximization of the expected future return in the time-unlimited MDP and likelihood maximization in a related mixture of finite-time MDPs, which allows the use of expectation maximization (EM) for computing optimal policies, with arbitrary inference techniques in the E-step; a small sketch of this likelihood view follows this entry.
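To make the mixture view in the entry above concrete, here is a hedged sketch that evaluates the "reward likelihood" of a fixed policy; the array shapes and names are our own assumptions, and rewards are taken to lie in [0, 1] so they can be read as probabilities:

```python
# Hedged sketch: P(R=1; pi) for a fixed policy under the
# mixture-of-finite-time-MDPs view, truncated at T_max steps.
import numpy as np

def reward_likelihood(P, R, pi, p0, gamma, T_max=500):
    """P[a, s, s1]: transitions; R[s, a] in [0, 1]; pi[s, a]: policy;
    p0[s]: start-state distribution."""
    mu, L = p0.copy(), 0.0
    for t in range(T_max):
        # Horizon prior P(T = t) = (1 - gamma) * gamma**t, times E[r_t].
        L += (1 - gamma) * gamma**t * np.einsum('s,sa,sa->', mu, pi, R)
        # Forward message: state marginal at the next time step under pi.
        mu = np.einsum('s,sa,ast->t', mu, pi, P)
    return L  # = (1 - gamma) * (truncated) discounted expected return
```

A full EM planner would additionally run backward messages conditioned on R=1 in the E-step and reweight the policy in the M-step; the sketch above only evaluates the likelihood that EM ascends.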
Planning by Probabilistic Inference
- Computer Science, AISTATS
- 2003
A new approach to the problem of planning under uncertainty is presented, based on a probabilistic generative model involving actions and states, and the toolbox of inference techniques is brought to bear on the planning problem.
Goal-Based Imitation as Probabilistic Inference over Graphical Models
- Computer Science, NIPS
- 2005
This paper shows that the problem of goal-based imitation can be formulated as one of inferring goals and selecting actions using a learned probabilistic graphical model of the environment, and describes algorithms for planning actions to achieve a goal state using probabilistic inference.
Decision-Theoretic Planning: Structural Assumptions and Computational Leverage
- Computer Science, J. Artif. Intell. Res.
- 1999
This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI, and describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans.
Probabilistic inference for structured planning in robotics
- Computer Science, 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems
- 2007
A new approach to planning in robotics based on probabilistic inference is proposed that uses structured Dynamic Bayesian Networks to represent the scenario and efficient inference techniques (loopy belief propagation) to solve planning problems.
Anytime Point-Based Approximations for Large POMDPs
- Computer Science, J. Artif. Intell. Res.
- 2006
The point selection procedure is combined with point-based value backups to form an effective anytime POMDP algorithm called Point-Based Value Iteration (PBVI), and a theoretical analysis justifying the choice of belief selection technique is presented; a hedged sketch of the point-based backup follows this entry.
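For concreteness, here is a hedged sketch of the single point-based backup at one belief point that PBVI-style algorithms repeat over a belief set; the array shapes and names are our own assumptions, not PBVI's published code:

```python
# Hedged sketch: one point-based value backup at belief b.
import numpy as np

def point_based_backup(b, Gamma, T, O, R, gamma):
    """b: (S,) belief; Gamma: list of (S,) alpha-vectors;
    T[a, s, s1]: transitions; O[a, s1, o]: observation model; R[s, a]: rewards."""
    n_actions, n_obs = T.shape[0], O.shape[2]
    best_val, best_alpha = -np.inf, None
    for a in range(n_actions):
        # Immediate reward, plus the best back-projected alpha per observation.
        alpha_a = R[:, a].astype(float).copy()
        for o in range(n_obs):
            # g_ao(s) = sum_{s1} T[a, s, s1] * O[a, s1, o] * g(s1)
            proj = [T[a] @ (O[a, :, o] * g) for g in Gamma]
            alpha_a += gamma * max(proj, key=lambda g: float(b @ g))
        if b @ alpha_a > best_val:
            best_val, best_alpha = float(b @ alpha_a), alpha_a
    return best_alpha  # the new alpha-vector that is maximal at b
```

Running this backup over a finite set of belief points, then expanding that set, is the anytime loop the entry describes.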
Policy Recognition in the Abstract Hidden Markov Model
- Computer Science, J. Artif. Intell. Res.
- 2002
This paper introduces the Abstract Hidden Markov Model (AHMM), a novel type of stochastic process, provides its dynamic Bayesian network (DBN) structure, analyses the properties of this network, and proposes a novel plan recognition framework based on the AHMM as the plan execution model.
Approximate inference for planning in stochastic relational worlds
- Computer Science, ICML '09
- 2009
This work proposes to convert learned noisy probabilistic relational rules into a structured dynamic Bayesian network representation and evaluates the effectiveness of this approach for online planning in a 3D simulated blocksworld with an articulated manipulator and realistic physics.
Hierarchical POMDP Controller Optimization by Likelihood Maximization
- Computer Science, UAI
- 2008
This paper first transforms the problem into a dynamic Bayesian network through which a hierarchical structure can naturally be discovered while optimizing the policy, and demonstrates that this approach scales better than previous techniques based on non-convex optimization.