Geometric Methods for Sampling, Optimisation, Inference and Adaptive Agents

@article{Barp2022GeometricMF,
  title={Geometric Methods for Sampling, Optimisation, Inference and Adaptive Agents},
  author={Alessandro Barp and Lancelot Da Costa and Guilherme Fran\c{c}a and Karl John Friston and Mark A. Girolami and Michael I. Jordan and Grigorios A. Pavliotis},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.10592}
}


Reward Maximisation through Discrete Active Inference

This paper shows the conditions under which active inference produces the optimal solution to the Bellman equation—a formulation that underlies several approaches to model-based reinforcement learning and control.
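
For orientation, the Bellman optimality equation at issue is the standard dynamic-programming fixed point, stated here in standard notation (not notation taken from the paper), with V* the optimal value function, r the reward, and gamma the discount factor:

```latex
V^*(s) \;=\; \max_{a}\Big[\, r(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V^*(s') \,\Big]
```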

Targeted Separation and Convergence with Kernel Discrepancies

Maximum mean discrepancies (MMDs) like the kernel Stein discrepancy (KSD) have grown central to a wide range of applications, including hypothesis testing, sampler selection, distribution approximation, and variational inference.

Nesterov smoothing for sampling without smoothness

A novel sampling algorithm is proposed for a class of non-smooth potentials: the potentials are approximated by smooth surrogates using a technique akin to Nesterov smoothing, and the accuracy of the resulting algorithm is guaranteed.
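
As a rough illustration of the smoothing idea (a minimal sketch under assumed parameters, not the paper's algorithm), one can replace a non-smooth potential such as U(x) = |x| by its Nesterov/Huber-type smooth approximation and run unadjusted Langevin steps on the surrogate; the step size h and smoothing parameter mu below are illustrative choices:

```python
import numpy as np

def smoothed_abs_grad(x, mu=0.1):
    """Gradient of the Nesterov/Huber smoothing of |x|: quadratic near the
    origin (slope x/mu), linear with slope +/-1 in the tails."""
    return np.clip(x / mu, -1.0, 1.0)

def ula_smoothed(n_steps=10_000, h=0.01, mu=0.1, rng=np.random.default_rng(0)):
    """Unadjusted Langevin algorithm targeting (approximately) pi(x) ~ exp(-|x|)."""
    x = 0.0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        x = x - h * smoothed_abs_grad(x, mu) + np.sqrt(2 * h) * rng.standard_normal()
        samples[k] = x
    return samples

samples = ula_smoothed()
# The Laplace target has mean 0 and std sqrt(2) ~ 1.41; the smoothed chain
# should land roughly there, up to smoothing and discretisation bias.
print(samples.mean(), samples.std())
```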

Modelling non-reinforced preferences using selective attention

Nore is validated in a modified OpenAI Gym FrozenLake environment, with and without volatility, under a model of the environment, and is compared to Pepper, a Hebbian preference learning mechanism.

A Worked Example of the Bayesian Mechanics of Classical Objects

Bayesian mechanics is a new approach to studying the mathematics and physics of interacting stochastic processes. In this note, we provide a worked example of a physical mechanics for classical objects.

On Bayesian Mechanics: A Physics of and by Beliefs

A duality between the free energy principle and the constrained maximum entropy principle is examined, both of which lie at the heart of Bayesian mechanics.

Towards a Geometry and Analysis for Bayesian Mechanics

A simple case of Bayesian mechanics under the free energy principle is formulated in axiomatic terms, providing a related, but alternative, formalism to those driven purely by descriptions of random dynamical systems, and taking a further step towards a comprehensive statement of the physics of self-organisation in formal mathematical language.

Entropy-Maximising Diffusions Satisfy a Parallel Transport Law

We show that the principle of maximum entropy, a variational method appearing in statistical inference, statistical physics, and the analysis of stochastic dynamical systems, admits a geometric interpretation: entropy-maximising diffusions satisfy a parallel transport law.
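
The variational principle being referred to is the classical one: among densities satisfying moment constraints, differential entropy is maximised by a Gibbs/exponential-family density (standard background, in standard notation):

```latex
\max_{p}\; -\!\int p(x)\log p(x)\,dx
\quad\text{s.t.}\quad \int p(x) f_i(x)\,dx = c_i,\;\; \int p(x)\,dx = 1
\qquad\Longrightarrow\qquad
p^*(x) \,\propto\, \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big)
```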

References

SHOWING 1-10 OF 309 REFERENCES

Exponential convergence of Langevin distributions and their discrete approximations

In this paper we consider a continuous-time method of approximating a given distribution π using the Langevin diffusion dL_t = dW_t + ½ ∇log π(L_t) dt. We find conditions under which this diffusion converges exponentially quickly to π, or fails to converge at all.
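
The discrete approximations in question arise from Euler-Maruyama discretisation of this diffusion; a minimal sketch of the Metropolis-adjusted variant (MALA), which the paper analyses alongside the unadjusted chain, with an illustrative Gaussian target and step size:

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0=0.0, h=0.5, n_steps=5_000,
         rng=np.random.default_rng(1)):
    """Metropolis-adjusted Langevin algorithm: the Euler-Maruyama proposal
    y = x + (h/2) grad_log_pi(x) + sqrt(h) * noise is corrected by an
    accept/reject step so that the chain targets pi exactly."""
    def log_q(b, a):  # log density (up to constants) of proposing b from a
        return -(b - a - 0.5 * h * grad_log_pi(a)) ** 2 / (2 * h)
    x = x0
    out = np.empty(n_steps)
    for k in range(n_steps):
        y = x + 0.5 * h * grad_log_pi(x) + np.sqrt(h) * rng.standard_normal()
        log_alpha = log_pi(y) - log_pi(x) + log_q(x, y) - log_q(y, x)
        if np.log(rng.uniform()) < log_alpha:
            x = y
        out[k] = x
    return out

# Illustrative target: standard Gaussian, log pi(x) = -x^2 / 2 up to a constant.
samples = mala(log_pi=lambda x: -0.5 * x**2, grad_log_pi=lambda x: -x)
print(samples.mean(), samples.std())  # roughly 0 and 1 respectively
```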

Integral Probability Metrics and Their Generating Classes of Functions

  • A. Müller
  • Mathematics, Computer Science
    Advances in Applied Probability
  • 1997
A unified study of integral probability metrics of the following type is given, and it is shown how some interesting properties of these probability metrics arise directly from conditions on the generating class of functions.
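
For concreteness: an integral probability metric has the form d_F(P, Q) = sup_{f in F} |E_P f - E_Q f|, and when F is the unit ball of a reproducing-kernel Hilbert space it becomes the MMD, which admits a simple plug-in estimator. A minimal sketch with a Gaussian kernel (the bandwidth is an illustrative choice):

```python
import numpy as np

def mmd2_biased(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of the squared MMD between 1-D samples
    x and y under the Gaussian kernel k(a, b) = exp(-(a-b)^2 / (2*bandwidth^2))."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
same = mmd2_biased(rng.standard_normal(500), rng.standard_normal(500))
diff = mmd2_biased(rng.standard_normal(500), rng.standard_normal(500) + 1.0)
print(same, diff)  # the shifted pair of samples gives a clearly larger value
```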

Optimization on manifolds: A symplectic approach

There has been great interest in using tools from dynamical systems and numerical analysis of differential equations to understand and construct new optimization methods.

On dissipative symplectic integration with applications to gradient-based optimization

A generalization of symplectic integrators to non-conservative and, in particular, dissipative Hamiltonian systems preserves rates of convergence up to a controlled error, enabling the derivation of ‘rate-matching’ algorithms without the need for a discrete convergence analysis.
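
A minimal sketch of the idea for the dissipative system x' = p, p' = -grad f(x) - gamma*p: apply the momentum decay exactly through an exponential factor and the conservative part with a symplectic-Euler substep (a standard "conformal symplectic" splitting, assumed here for illustration rather than taken from the paper):

```python
import numpy as np

def conformal_symplectic_euler(grad_f, x, p, h=0.1, gamma=1.0, n_steps=200):
    """Integrate x' = p, p' = -grad_f(x) - gamma*p by splitting: exact
    exponential decay of the momentum, then a symplectic-Euler substep."""
    traj = [x]
    for _ in range(n_steps):
        p = np.exp(-gamma * h) * p   # exact flow of p' = -gamma * p
        p = p - h * grad_f(x)        # momentum kick from the potential
        x = x + h * p                # position drift with updated momentum
        traj.append(x)
    return np.array(traj)

# Illustrative quadratic objective f(x) = x^2 / 2; the trajectory decays
# toward the minimiser x = 0.
traj = conformal_symplectic_euler(grad_f=lambda x: x, x=2.0, p=0.0)
print(traj[-1])  # close to 0
```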

Deep active inference agents using Monte-Carlo methods

A neural architecture is presented for building deep active inference agents that operate in complex, continuous state-spaces using multiple forms of Monte-Carlo (MC) sampling, enabling agents to learn environmental dynamics efficiently while maintaining task performance relative to reward-based counterparts.

Active Inference: Demystified and Compared

This letter aims to demystify the behavior of active inference agents by presenting an accessible discrete state-space and time formulation, and by demonstrating these behaviors in an OpenAI Gym environment alongside reinforcement learning agents.

The Variational Formulation of the Fokker-Planck Equation

The Fokker-Planck equation, or forward Kolmogorov equation, describes the evolution of the probability density for a stochastic process associated with an Ito stochastic differential equation. It is shown that this equation can be interpreted as a gradient flow of the free energy with respect to the Wasserstein metric.
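
Concretely, the scheme introduced in this paper (the JKO scheme) constructs the density at each time step by minimising the free energy penalised by the squared Wasserstein distance to the previous iterate; as the step τ tends to 0, the iterates recover the Fokker-Planck solution:

```latex
\rho_{k+1} \;=\; \operatorname*{arg\,min}_{\rho}\;
\frac{1}{2\tau}\, W_2^2(\rho,\rho_k) \;+\; F(\rho),
\qquad
F(\rho) \;=\; \int V(x)\,\rho(x)\,dx \;+\; \beta^{-1}\!\int \rho(x)\log\rho(x)\,dx
```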

Stochastic Processes and Applications: Diffusion Processes, the Fokker-Planck and Langevin Equations

  • G. A. Pavliotis
  • Texts in Applied Mathematics, volume 60
  • 2014

Active inference on discrete state-spaces: A synthesis

Riemann manifold Langevin and Hamiltonian Monte Carlo methods

  • M. Girolami, B. Calderhead
  • Computer Science
    Journal of the Royal Statistical Society: Series B (Statistical Methodology)
  • 2011
The methodology proposed automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density, and substantial improvements in the time‐normalized effective sample size are reported when compared with alternative sampling approaches.
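
A heavily simplified sketch of the manifold-Langevin idea: precondition the proposal by a position-dependent metric G(x); the full method in the paper also includes correction terms involving derivatives of G, which are omitted here for brevity:

```python
import numpy as np

def mmala_proposal(x, grad_log_pi, metric, h=0.5, rng=np.random.default_rng(2)):
    """Simplified manifold-MALA proposal: a Langevin step preconditioned by a
    position-dependent metric G(x). The full method adds Christoffel-type
    terms involving derivatives of G, omitted in this sketch."""
    G = metric(x)
    mean = x + 0.5 * h * grad_log_pi(x) / G
    return mean + np.sqrt(h / G) * rng.standard_normal()

# Illustrative target: N(0, sigma^2) with log pi(x) = -x^2 / (2 sigma^2).
# Taking G = 1 / sigma^2 (the Fisher information of the location) rescales
# the step to the target's natural scale.
sigma = 10.0
y = mmala_proposal(x=0.0,
                   grad_log_pi=lambda x: -x / sigma**2,
                   metric=lambda x: 1.0 / sigma**2)
print(y)
```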
...