pymdp: A Python library for active inference in discrete state spaces

@article{Heins2022pymdpAP,
  title={pymdp: A Python library for active inference in discrete state spaces},
  author={Conor Heins and Beren Millidge and Daphne Demekas and Brennan Klein and Karl John Friston and Iain D. Couzin and Alexander Tschantz},
  journal={J. Open Source Softw.},
  year={2022},
  volume={7},
  pages={4098}
}
1. Department of Collective Behaviour, Max Planck Institute of Animal Behavior, 78457 Konstanz, Germany
2. Centre for the Advanced Study of Collective Behaviour, 78457 Konstanz, Germany
3. Department of Biology, University of Konstanz, 78457 Konstanz, Germany
4. VERSES Research Lab, Los Angeles, California, USA
5. MRC Brain Networks Dynamics Unit, University of Oxford, Oxford, UK
6. Department of Computing, Imperial College London, London, UK
7. Network Science Institute, Northeastern University…
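As a brief illustration of what the library described by this record does, below is a minimal usage sketch following pymdp's documented Agent API. It assumes pymdp is installed; the model dimensions and random likelihood/transition arrays are illustrative placeholders, not taken from the paper.

```python
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

# Toy generative model: one observation modality with 3 outcomes,
# one hidden-state factor with 3 states, and 3 control states.
num_obs = [3]
num_states = [3]
num_controls = [3]

# Randomly initialised (column-normalised) arrays via pymdp's helpers.
A = utils.random_A_matrix(num_obs, num_states)        # observation likelihood P(o | s)
B = utils.random_B_matrix(num_states, num_controls)   # transition model P(s' | s, u)

agent = Agent(A=A, B=B)

# One perception-action cycle: observe outcome 0, infer hidden states,
# evaluate policies by expected free energy, then sample an action.
obs = [0]
qs = agent.infer_states(obs)
q_pi, G = agent.infer_policies()   # posterior over policies and expected free energies
action = agent.sample_action()
print(action)
```

In practice the observation fed to `infer_states` would come from an environment, and preference (C) and prior (D) vectors would typically be supplied to the Agent constructor as well; they are omitted here to keep the sketch minimal.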
