Active Inference: A Process Theory

Karl J. Friston, Thomas H. B. FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, and Giovanni Pezzulo. Neural Computation.
This article describes a process theory based on active inference and belief propagation. Starting from the premise that all neuronal processing (and action selection) can be explained by maximizing Bayesian model evidence—or minimizing variational free energy—we ask whether neuronal responses can be described as a gradient descent on variational free energy. Using a standard (Markov decision process) generative model, we derive the neuronal dynamics implicit in this description and reproduce a… 
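The gradient descent on variational free energy described above can be sketched for a single categorical hidden state. The generative model and all numbers below are illustrative assumptions, not the paper's simulations; belief updating is parameterised by unconstrained "neuronal" log-beliefs that descend the free energy gradient.

```python
import numpy as np

# Minimal sketch (toy model): belief updating as gradient descent on
# variational free energy F = E_q[ln q(s) - ln p(o, s)] for one
# categorical hidden state under a discrete generative model.
prior = np.array([0.5, 0.5])              # p(s)
A = np.array([[0.9, 0.2],                 # p(o|s): rows outcomes, cols states
              [0.1, 0.8]])
o = 0                                     # observed outcome

log_joint = np.log(A[o]) + np.log(prior)  # ln p(o, s)
v = np.zeros(2)                           # "neuronal" log-beliefs
for _ in range(200):
    q = np.exp(v) / np.exp(v).sum()       # posterior beliefs q(s) = softmax(v)
    F = q @ (np.log(q) - log_joint)       # variational free energy
    # dF/dv_j = q_j * [(ln q_j - ln p(o, s_j)) - F]; descend the gradient
    v -= 0.5 * q * (np.log(q) - log_joint - F)

q = np.exp(v) / np.exp(v).sum()
exact = A[o] * prior / (A[o] * prior).sum()   # exact posterior p(s|o)
print(np.round(q, 3), np.round(exact, 3))
```

At the fixed point the gradient vanishes exactly when q matches the true posterior, so the descent recovers Bayes-optimal beliefs.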

Neural Dynamics under Active Inference: Plausibility and Efficiency of Information Processing

It is shown that neural dynamics under active inference are metabolically efficient, and it is suggested that neural representations in biological agents may evolve by approximating steepest descent in information space towards the point of optimal inference.

Natural selection finds natural gradient

The results show that active inference is consistent with state-of-the-art models of neuronal dynamics and coincides with the natural gradient, which suggests that natural selection has implicitly approximated the steepest direction in information space; namely, natural gradient descent.
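The natural-gradient idea can be illustrated with a one-parameter toy model (numbers below are assumptions for illustration only): preconditioning the Euclidean gradient by the inverse Fisher information gives steepest descent in information space rather than parameter space.

```python
import numpy as np

# Illustrative sketch: natural gradient descent on KL(target || Bernoulli(p)).
# The Fisher information of Bernoulli(p) is 1/(p(1-p)); dividing the ordinary
# gradient by it simplifies the update to p <- p - lr * (p - target), i.e.
# straight-line motion towards the optimum in information space.
target = 0.9
p = 0.2
for _ in range(200):
    grad = -target / p + (1 - target) / (1 - p)   # dKL/dp (Euclidean gradient)
    fisher = 1.0 / (p * (1 - p))                  # Fisher information of Bernoulli(p)
    p -= 0.1 * grad / fisher                      # natural gradient step
print(round(p, 4))
```

Because the natural gradient is invariant to reparameterisation, the same trajectory would be traced whether the model is parameterised by p or by its logit.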

Impulsivity and Active Inference

This paper characterizes impulsive behavior using a patch-leaving paradigm and active inference—a framework for describing Bayes optimal behavior—and shows how manipulations change beliefs and subsequent choices through variational message passing.

Demystifying active inference

This review disambiguates the properties of active inference by providing a condensed overview of its theoretical underpinnings, noting that the formalism can be applied to engineering problems such as robotic arm movement and playing Atari games, provided appropriate underlying probability distributions can be formulated.

The graphical brain: Belief propagation and active inference

This paper formulates neuronal processing as belief propagation under deep generative models that can entertain both discrete and continuous states, leading to distinct schemes for belief updating that play out on the same (neuronal) architecture.

Generalised free energy and active inference

Two free energy functionals for active inference in the framework of Markov decision processes are compared and it is shown that policies are inferred or selected that realise prior preferences by minimising the free energy of future expectations.
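Policy selection by minimising expected free energy can be sketched as follows. Here the expected free energy is reduced to its risk term (the divergence between predicted outcomes and prior preferences), and all distributions are toy assumptions rather than the paper's model.

```python
import numpy as np

# Hedged sketch: policies are scored by expected free energy G and selected
# via a softmax over -G. Only the risk term KL[q(o|pi) || C] is included.
C = np.array([0.8, 0.2])                       # prior preferences over outcomes

predicted = {                                  # predicted p(o | pi) per policy
    "stay":  np.array([0.5, 0.5]),
    "leave": np.array([0.75, 0.25]),
}

def risk(q_o):
    return np.sum(q_o * np.log(q_o / C))       # KL divergence to preferences

G = np.array([risk(q) for q in predicted.values()])
policy_posterior = np.exp(-G) / np.exp(-G).sum()   # softmax over -G
best = list(predicted)[int(np.argmin(G))]
print(best, np.round(policy_posterior, 3))
```

The policy whose predicted outcomes best realise the prior preferences receives the highest posterior probability.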

Sophisticated Inference

A sophisticated kind of active inference is introduced, using a recursive form of expected free energy that effectively implements a deep tree search over future actions and outcomes, defined over sequences of belief states rather than states per se.
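A minimal sketch of such a recursive tree search over belief states follows; the generative model, preferences, and horizon are toy assumptions (risk term only), not the paper's implementation.

```python
import numpy as np

# Hedged sketch of sophisticated inference: expected free energy is evaluated
# recursively over *belief* states, branching on each imagined outcome and
# updating beliefs along the branch, rather than scoring fixed state sequences.
B = {0: np.array([[0.9, 0.1], [0.1, 0.9]]),   # B[a][s_next, s]: transitions
     1: np.array([[0.1, 0.9], [0.9, 0.1]])}
A = np.array([[0.9, 0.1], [0.1, 0.9]])        # A[o, s]: likelihood p(o|s)
log_C = np.log(np.array([0.95, 0.05]))        # log prior preferences over outcomes

def G(qs, a, depth):
    """Expected free energy of action a from belief qs (risk term only)."""
    qs_next = B[a] @ qs                       # predicted beliefs about states
    qo = A @ qs_next                          # predicted beliefs about outcomes
    g = -(qo @ log_C)                         # risk w.r.t. prior preferences
    if depth > 1:
        for o, po in enumerate(qo):           # branch on each imagined outcome
            post = A[o] * qs_next
            post /= post.sum()                # belief update given outcome o
            g += po * min(G(post, b, depth - 1) for b in B)
    return g

qs0 = np.array([0.8, 0.2])
scores = {a: G(qs0, a, depth=3) for a in B}
best = min(scores, key=scores.get)
print(best, {a: round(g, 3) for a, g in scores.items()})
```

The recursion over posterior beliefs (rather than states) is what makes the search "sophisticated": each branch conditions future evaluations on what the agent would believe after seeing that outcome.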

Canonical neural networks perform active inference

This work considers a class of canonical neural networks comprising rate coding models, wherein neural activity and plasticity minimise a common cost function, and plasticity is modulated with a…

Deep active inference agents using Monte-Carlo methods

A neural architecture is presented for building deep active inference agents that operate in complex, continuous state spaces using multiple forms of Monte-Carlo (MC) sampling, enabling agents to learn environmental dynamics efficiently while maintaining task performance relative to reward-based counterparts.

Branching Time Active Inference with Bayesian Filtering

This letter harnesses the efficiency of an alternative inference method, Bayesian filtering, which does not require iterating the update equations until the variational free energy converges, and provides a forty-fold speedup over the state of the art.
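The contrast with iterative variational schemes can be illustrated by a minimal forward filtering pass; the likelihood and transition matrices below are toy assumptions, not the letter's model.

```python
import numpy as np

# Minimal sketch: Bayesian filtering computes state beliefs in a single
# forward sweep - predict with the transition model, correct with the
# likelihood of each observation - with no iteration-to-convergence of
# variational update equations.
A = np.array([[0.9, 0.2], [0.1, 0.8]])        # p(o|s): rows outcomes, cols states
B = np.array([[0.7, 0.3], [0.3, 0.7]])        # p(s_t | s_{t-1})
belief = np.array([0.5, 0.5])                 # initial state belief

for o in [0, 0, 1, 0]:                        # sequence of observed outcomes
    belief = B @ belief                       # predict
    belief = A[o] * belief                    # correct
    belief /= belief.sum()                    # normalise
print(np.round(belief, 3))
```

Each observation costs one matrix-vector product and one elementwise correction, which is the source of the speedup over schemes that iterate until convergence at every time step.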

The anatomy of choice: dopamine and decision-making

Variational Bayes is considered as a scheme that the brain might use for approximate Bayesian inference, optimising a free energy bound on model evidence; changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses.

The anatomy of choice: active inference and agency

Variational Bayes is considered as an alternative scheme that provides formal constraints on the computational anatomy of inference and action—constraints that are remarkably consistent with neuroanatomy.

Goal-directed decision making as probabilistic inference: a computational framework and potential neural correlates.

The basic proposal is that the brain, within an identifiable network of cortical and subcortical structures, implements a probabilistic generative model of reward, and that goal-directed decision making is effected through Bayesian inversion of this model.

Model averaging, optimal inference, and habit formation

It is hypothesized that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture, and a number of apparently suboptimal phenomena are interpreted within the framework of approximate Bayesian inference.
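Bayesian model averaging itself is simple to sketch; the model evidences and per-model predictions below are toy assumptions for illustration.

```python
import numpy as np

# Illustrative sketch: Bayesian model averaging weights each model's
# prediction by its posterior probability, i.e. normalised model evidence.
log_evidence = np.array([-2.0, -3.5, -5.0])    # ln p(data | m) for 3 models
predictions = np.array([0.9, 0.4, 0.1])        # each model's prediction

w = np.exp(log_evidence - log_evidence.max())  # subtract max for stability
w /= w.sum()                                   # posterior model probabilities
bma = w @ predictions                          # evidence-weighted prediction
print(np.round(w, 3), round(bma, 3))
```

The averaged prediction is dominated by the model with the highest evidence but hedges towards the alternatives in proportion to their posterior probability.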

Active Inference, Evidence Accumulation, and the Urn Task

It is shown that manipulating expected precision strongly affects how much information an agent characteristically samples, and thus provides a possible link between impulsivity and dopaminergic dysfunction.

Goal-directed decision making in prefrontal cortex: a computational framework

The theory provides a unifying framework for several different forms of goal-directed action selection, placing emphasis on a novel form, within which orbitofrontal reward representations directly drive policy selection.

A free energy principle for the brain

Towards a Mathematical Theory of Cortical Micro-circuits

This paper describes how Bayesian belief propagation in a spatio-temporal hierarchical model, called Hierarchical Temporal Memory (HTM), can lead to a mathematical model for cortical circuits and describes testable predictions that can be derived from the model.
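Generic sum-product belief propagation, the scheme such hierarchical models build on, can be sketched on a three-node chain. The potentials below are toy values, and the computed marginal is checked against brute-force enumeration.

```python
import numpy as np

# Hedged sketch: sum-product belief propagation on a chain x0 - x1 - x2 of
# binary variables, with pairwise potential psi and unary evidence phi.
psi = np.array([[0.8, 0.2], [0.2, 0.8]])      # psi(x_i, x_{i+1})
phi = [np.array([0.9, 0.1]),                  # unary evidence at each node
       np.array([0.5, 0.5]),
       np.array([0.3, 0.7])]

# forward and backward messages along the chain
fwd = [np.ones(2) for _ in range(3)]
bwd = [np.ones(2) for _ in range(3)]
for i in range(1, 3):
    fwd[i] = psi.T @ (phi[i - 1] * fwd[i - 1])
for i in range(1, -1, -1):
    bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])

marginal1 = phi[1] * fwd[1] * bwd[1]          # belief at the middle node
marginal1 /= marginal1.sum()

# brute-force check: sum the joint over all 2**3 configurations
joint = np.zeros(2)
for a in range(2):
    for b in range(2):
        for c in range(2):
            joint[b] += phi[0][a] * phi[1][b] * phi[2][c] * psi[a, b] * psi[b, c]
joint /= joint.sum()
print(np.round(marginal1, 3), np.round(joint, 3))
```

On tree-structured graphs like this chain, the message-passing marginals are exact, which is why the brute-force check agrees.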

Dopamine, reward learning, and active inference

An active inference scheme for solving Markov decision processes is extended to include learning, and it is shown that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning.

The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes

It is demonstrated that human subjects infer both optimal policies and the precision of those inferences, thus supporting the notion that humans perform hierarchical probabilistic Bayesian inference.