Corpus ID: 232233116

Sticky PDMP samplers for sparse and local inference problems

@inproceedings{Bierkens2021StickyPS,
  title={Sticky PDMP samplers for sparse and local inference problems},
  author={Joris Bierkens and Sebastiano Grazzi and Frank van der Meulen and Moritz Schauer},
  year={2021}
}
We construct a new class of efficient Monte Carlo methods based on continuous-time piecewise deterministic Markov processes (PDMPs) suitable for inference in high-dimensional sparse models, i.e., models for which there is prior knowledge that many coordinates are likely to be exactly 0. This is achieved with the fairly simple idea of endowing existing PDMP samplers with “sticky” coordinate axes, coordinate planes, etc. Upon hitting those subspaces, an event is triggered during which the process…
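To make the sticking mechanism concrete, the following is a minimal one-dimensional sketch of a sticky Zig-Zag sampler for a standard Gaussian target with an atom at zero. The function name sticky_zigzag_1d and the stickiness parameter kappa are illustrative assumptions rather than the paper's notation, and the full method operates coordinate-wise in d dimensions.

import numpy as np

rng = np.random.default_rng(1)

def sticky_zigzag_1d(kappa, T, x0=1.0, v0=1.0):
    # Illustrative 1-D sticky Zig-Zag for a standard Gaussian target U(x) = x^2 / 2.
    # kappa is a hypothetical stickiness parameter: larger kappa puts more mass at 0.
    t, x, v = 0.0, x0, v0
    skeleton = [(t, x)]
    while t < T:
        # Exact inversion of the integrated switching rate max(0, v * x(t)).
        e = rng.exponential()
        vx = v * x
        tau_flip = -vx + np.sqrt(max(vx, 0.0) ** 2 + 2.0 * e)
        # Time at which the trajectory would hit the origin, if it is moving towards it.
        tau_zero = -x / v if x * v < 0 else np.inf
        if tau_zero < tau_flip:
            # Hit zero first: stick there for an Exp(|v| / kappa) holding time,
            # then continue with the same velocity through the origin.
            t += tau_zero
            x = 0.0
            skeleton.append((t, x))
            t += rng.exponential(scale=kappa / abs(v))
            skeleton.append((t, x))
        else:
            # Ordinary Zig-Zag event: move to the event time and flip the velocity.
            t += tau_flip
            x += v * tau_flip
            v = -v
            skeleton.append((t, x))
    return skeleton

skeleton = sticky_zigzag_1d(kappa=1.0, T=100.0)

Under this stylized construction, the fraction of time the skeleton spends frozen at the origin estimates the probability mass that the target places on the spike at zero, which is what makes the approach attractive for sparse models.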
Continuously-Tempered PDMP Samplers
TLDR
It is shown how tempering ideas can improve the mixing of PDMPs in such cases, and an extended distribution is introduced over the state of the posterior and an inverse temperature, which interpolates between a tractable distribution when the inverse temperature is 0 and the posterior when the inverse temperature is 1.
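As a rough illustration of this construction, a standard geometric tempering path between a tractable reference density \pi_0 and the posterior \pi has the form below; whether the paper uses exactly this path is an assumption, but it matches the stated endpoints.

  \pi_\beta(x) \;\propto\; \pi_0(x)^{1-\beta}\,\pi(x)^{\beta}, \qquad \beta \in [0,1],

so that \beta = 0 recovers \pi_0 and \beta = 1 recovers the posterior; the PDMP then targets the extended distribution over (x, \beta).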
PDMP Monte Carlo methods for piecewise-smooth densities
TLDR
A simple condition for the transition of the process at a discontinuity is presented which can be used to extend any existing sampler for smooth densities, and specific choices for this transition are given which work with popular algorithms such as the Bouncy Particle Sampler, the Coordinate Sampler and the Zig-Zag Process.
Concave-Convex PDMP-based sampling
TLDR
This work proposes the concave-convex adaptive thinning approach for simulating a piecewise deterministic Markov process (CC-PDMP), which is well suited to local PDMP simulation where known conditional independence of the target can be exploited for potentially huge computational gains.
Applied Measure Theory for Probabilistic Modeling
TLDR
The goal of MeasureTheory.jl is to provide Julia with the right vocabulary and tools for probability and statistical computing tasks; the package introduces a well-chosen set of notions from the foundations of probability together with powerful combinators and transforms.
Posterior contraction for deep Gaussian process priors
TLDR
It is shown that the contraction rates can achieve the minimax convergence rate (up to log n factors), while being adaptive to the underlying structure and smoothness of the target function.

References

SHOWING 1-10 OF 46 REFERENCES
Reversible Jump PDMP Samplers for Variable Selection
TLDR
This work shows how to develop reversible jump PDMP samplers that can jointly explore the discrete space of models and the continuous space of parameters and shows that the new sampler can mix better than standard MCMC algorithms.
Concave-Convex PDMP-based sampling
TLDR
This work proposes the concave-convex adaptive thinning approach for simulating a piecewise deterministic Markov process (CC-PDMP), which is well suited to local PDMP simulation where known conditional independence of the target can be exploited for potentially huge computational gains.
The Boomerang Sampler
TLDR
This paper introduces the Boomerang Sampler as a novel class of continuous-time non-reversible Markov chain Monte Carlo algorithms and demonstrates theoretically and empirically that it can out-perform existing benchmark piecewise deterministic Markov processes such as the bouncy particle sampler and the Zig-Zag.
The Zig-Zag process and super-efficient sampling for Bayesian analysis of big data
TLDR
A new family of Monte Carlo methods is introduced, based upon a multi-dimensional version of the Zig-Zag process of (Bierkens, Roberts, 2017), a continuous-time piecewise deterministic Markov process.
The Bouncy Particle Sampler: A Nonreversible Rejection-Free Markov Chain Monte Carlo Method
TLDR
An alternative scheme recently introduced in the physics literature, in which the target distribution is explored using a continuous-time nonreversible piecewise-deterministic Markov process, is studied, and several computationally efficient implementations of this Markov chain Monte Carlo scheme are proposed.
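For context, a bounce event of the Bouncy Particle Sampler reflects the velocity in the hyperplane orthogonal to the gradient of the negative log-density U at the current position,

  v' \;=\; v \;-\; 2\,\frac{\langle v, \nabla U(x)\rangle}{\|\nabla U(x)\|^{2}}\,\nabla U(x),

supplemented by occasional velocity refreshments to guarantee ergodicity.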
MCMC Using Hamiltonian Dynamics
Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals.
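A minimal sketch of such a proposal, assuming a differentiable potential U = -log target with gradient grad_U (the helper names are illustrative):

import numpy as np

def hmc_step(U, grad_U, x, step, n_steps, rng):
    # One HMC update: leapfrog integration of Hamiltonian dynamics,
    # followed by a Metropolis accept/reject correction for discretisation error.
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * step * grad_U(x_new)        # initial half step for momentum
    for i in range(n_steps):
        x_new += step * p_new                  # full step for position
        if i < n_steps - 1:
            p_new -= step * grad_U(x_new)      # full step for momentum
    p_new -= 0.5 * step * grad_U(x_new)        # final half step for momentum
    dH = U(x_new) + 0.5 * p_new @ p_new - U(x) - 0.5 * p @ p
    return x_new if rng.random() < np.exp(-dH) else x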
MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster
TLDR
An approach to modifying a whole range of MCMC methods is described, applicable whenever the target measure has a density with respect to a Gaussian process or Gaussian random field reference measure, which ensures that their speed of convergence is robust under mesh refinement.
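One of the algorithms developed in this line of work is the preconditioned Crank-Nicolson (pCN) proposal: for a target with density proportional to exp(-\Phi(x)) with respect to a Gaussian reference measure N(0, C), one proposes

  x' \;=\; \sqrt{1-\beta^{2}}\,x \;+\; \beta\,\xi, \qquad \xi \sim N(0, C),\ \beta \in (0,1],

and accepts with probability min{1, exp(\Phi(x) - \Phi(x'))}. Because the Gaussian reference measure is preserved exactly by the proposal, the acceptance probability does not degenerate as the discretisation of the underlying function space is refined.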
Peskun-Tierney ordering for Markov chain and process Monte Carlo: beyond the reversible scenario
Historically, time-reversibility of the transitions or processes underpinning Markov chain Monte Carlo methods (MCMC) has played a key rôle in their development, while the self-adjointness of…
Reversible jump MCMC
Statistical problems where ‘the number of things you don’t know is one of the things you don’t know’ are ubiquitous in statistical modelling. They arise both in traditional modelling situations such as…
Reversible jump Markov chain Monte Carlo computation and Bayesian model determination
Markov chain Monte Carlo methods for Bayesian computation have until recently been restricted to problems where the joint distribution of all variables has a density with respect to some fixed standard underlying measure.