Split-and-Augmented Gibbs Sampler—Application to Large-Scale Inference Problems

@article{Vono2019SplitandAugmentedGS,
  title={Split-and-Augmented Gibbs Sampler—Application to Large-Scale Inference Problems},
  author={Maxime Vono and Nicolas Dobigeon and Pierre Chainais},
  journal={IEEE Transactions on Signal Processing},
  year={2019},
  volume={67},
  pages={1648--1661}
}
This paper derives two new optimization-driven Monte Carlo algorithms inspired by variable splitting and data augmentation. In particular, the formulation of one of the proposed approaches is closely related to the main steps of the alternating direction method of multipliers (ADMM). The proposed framework makes it possible to derive sampling schemes that are faster and more efficient than current state-of-the-art methods, and that can embed the latter. By efficiently sampling the parameter to infer as well as the…
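To make the split-and-augmented idea concrete, below is a minimal sketch of such a two-block Gibbs sampler on a toy ridge-regression posterior. The model, the Gaussian coupling of width rho, and all names (A, y, lam, rho) are illustrative assumptions for this sketch, not the paper's actual algorithm or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian problem: y = A @ x_true + noise, with a ridge prior on x.
# Target: pi(x) prop. to exp(-||y - A x||^2 / (2 sigma^2) - lam ||x||^2 / 2).
n, d = 50, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
sigma = 0.5
y = A @ x_true + sigma * rng.standard_normal(n)
lam = 1.0   # prior precision (the g(z) potential after splitting)
rho = 0.1   # coupling width; the augmented target approaches the exact
            # posterior as rho -> 0, at the price of stronger x-z coupling

# Precision of p(x | z) = N(mu, Q^{-1}) with Q = A^T A / sigma^2 + I / rho^2.
Q = A.T @ A / sigma**2 + np.eye(d) / rho**2
L = np.linalg.cholesky(Q)  # factor once: Q is constant across iterations

def sample_x_given_z(z):
    """Draw x | z, with mean Q^{-1} (A^T y / sigma^2 + z / rho^2)."""
    mu = np.linalg.solve(Q, A.T @ y / sigma**2 + z / rho**2)
    # x = mu + L^{-T} eps has covariance (L L^T)^{-1} = Q^{-1}.
    return mu + np.linalg.solve(L.T, rng.standard_normal(d))

def sample_z_given_x(x):
    """Draw z | x; it factorizes coordinate-wise into Gaussians."""
    var = rho**2 / (1.0 + lam * rho**2)
    return x / (1.0 + lam * rho**2) + np.sqrt(var) * rng.standard_normal(d)

# Split-and-augmented Gibbs sweep: alternate the two conditional draws.
z, samples = np.zeros(d), []
for it in range(2000):
    x = sample_x_given_z(z)
    z = sample_z_given_x(x)
    if it >= 500:  # discard burn-in
        samples.append(x)

print("posterior mean estimate (first 5 coords):", np.mean(samples, axis=0)[:5])
```

Note how the expensive structure (the matrix A) appears only in the x-step while the prior appears only in the z-step; this decoupling of likelihood and prior across the two conditionals is what the splitting buys.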
Efficient Sampling through Variable Splitting-inspired Bayesian Hierarchical Models
TLDR
A new Bayesian hierarchical model, inspired by variable splitting methods, is proposed to solve large-scale inference problems; it can lead to faster sampling schemes than state-of-the-art methods by embedding them.
Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting
TLDR
A detailed theoretical study is provided of a recent alternative class of MCMC schemes that exploit a splitting strategy akin to the one used by the celebrated ADMM optimization algorithm, known as the split Gibbs sampler; its augmented target is sketched below.
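For readers new to the analogy, a standard presentation of the split Gibbs sampler (following the splitting literature generally; the notation here is illustrative) augments a target pi(x) proportional to exp(-f(x) - g(x)) with a coupled copy z:

```latex
% Augmented target with Gaussian coupling of width \rho:
\pi_\rho(x, z) \propto \exp\!\Big( -f(x) - g(z) - \tfrac{1}{2\rho^2} \lVert x - z \rVert^2 \Big)
% One Gibbs sweep alternates the two conditionals, mirroring ADMM's
% alternating minimizations of the same two augmented terms:
x^{(t+1)} \sim \pi_\rho(x \mid z^{(t)}), \qquad z^{(t+1)} \sim \pi_\rho(z \mid x^{(t+1)})
```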
Sparse Bayesian Binary Logistic Regression Using the Split-and-Augmented Gibbs Sampler
TLDR
This paper tackles the sparse Bayesian binary logistic regression problem by relying on the recent split-and-augmented Gibbs sampler (SPA), which appears to be faster than efficient proximal MCMC algorithms and to have a reasonable computational cost compared to optimization-based methods, with the advantage of producing credibility intervals.
Asymptotically Exact Data Augmentation: Models, Properties, and Algorithms
TLDR
A unified framework, coined asymptotically exact data augmentation (AXDA), is studied; it encompasses both well-established and more recent approximate augmented models, and AXDA models are shown to benefit from interesting statistical properties and to yield efficient inference algorithms (see the schematic form below).
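Concretely, the AXDA construction can be summarized as follows (a schematic form with a Gaussian coupling as the canonical example; the notation is an assumption of this sketch):

```latex
% AXDA: replace pi(x) \propto exp(-f(x) - g(x)) by an augmented model
\pi_\rho(x, z) \propto \exp\big( -f(x) - g(z) \big)\, \kappa_\rho(x, z),
\qquad \text{e.g.}\ \kappa_\rho(x, z) = \exp\!\Big( -\tfrac{1}{2\rho^2} \lVert x - z \rVert^2 \Big)
% Asymptotic exactness: the x-marginal recovers the original target as rho -> 0
\lim_{\rho \to 0}\, \int \pi_\rho(x, z)\, \mathrm{d}z \;\propto\; \exp\big( -f(x) - g(x) \big)
```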
On variable splitting for Markov chain Monte Carlo
TLDR
This work takes inspiration from the variable splitting idea to build efficient Markov chain Monte Carlo (MCMC) algorithms, which are illustrated on classical image processing and statistical learning problems.
High-Dimensional Gaussian Sampling: A Review and a Unifying Approach Based on a Stochastic Proximal Point Algorithm
TLDR
This paper proposes a unifying Gaussian simulation framework by deriving a stochastic counterpart of the celebrated proximal point algorithm from optimization; it offers a novel, unifying revisit of most existing MCMC approaches while extending them (one reading of the analogy is sketched below).
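One standard way to read the proximal-point analogy (a sketch with a generic potential f and step size lambda, not the paper's exact construction): the deterministic proximal point iteration, and a stochastic counterpart obtained by replacing the minimization with a draw from the corresponding Gibbs density.

```latex
% Deterministic proximal point iteration for minimizing f:
x^{(t+1)} = \operatorname*{arg\,min}_x \; f(x) + \tfrac{1}{2\lambda} \lVert x - x^{(t)} \rVert^2
% Stochastic counterpart via exact data augmentation: since
% \int \exp(-\lVert x - z \rVert^2 / (2\lambda)) \,\mathrm{d}z is constant in x,
% the x-marginal of the joint exp(-f(x) - \lVert x - z \rVert^2 / (2\lambda))
% is exactly proportional to exp(-f(x)).
z^{(t)} \sim \mathcal{N}\big( x^{(t)}, \lambda I \big), \qquad
x^{(t+1)} \sim p(x \mid z^{(t)}) \propto \exp\!\Big( -f(x) - \tfrac{1}{2\lambda} \lVert x - z^{(t)} \rVert^2 \Big)
```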
Optimized Population Monte Carlo
TLDR
This paper proposes a novel algorithm that exploits the benefits of the population Monte Carlo (PMC) framework and includes more efficient adaptive mechanisms that exploit geometric information about the target distribution; its successful performance is shown in three numerical examples involving challenging distributions.
Global Consensus Monte Carlo
TLDR
An instrumental hierarchical model associates auxiliary statistical parameters with each likelihood term; these are conditionally independent given the top-level parameters, leading to a distributed MCMC algorithm on an extended state space that yields approximations of posterior expectations (a schematic form follows below).
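Schematically, such an instrumental model can be written as follows (a common presentation of this family; the Gaussian kernel choice and notation are assumptions of this sketch): for a posterior whose likelihood factorizes over b data blocks, one auxiliary variable z_j is attached to each factor.

```latex
% Instrumental hierarchical model with coupling width \omega:
\tilde{\pi}(\theta, z_{1:b}) \propto p(\theta) \prod_{j=1}^{b} f_j(z_j)\, \mathcal{N}\big( z_j;\, \theta,\, \omega I \big)
% Given \theta the z_j are conditionally independent, so the z-updates can run
% in parallel across machines; as \omega -> 0 the \theta-marginal approaches
% the original posterior.
```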
A Data Augmentation Approach for Sampling Gaussian Models in High Dimension
TLDR
Data augmentation (DA) sampling algorithms for Gaussian sampling in vibration analysis applications are reviewed, and a DA method is proposed that is especially useful when direct sampling of the auxiliary variable is not straightforward from a computational viewpoint.
Accelerating proximal Markov chain Monte Carlo by using explicit stabilised methods
TLDR
Comparisons with Euler-type proximal Monte Carlo methods confirm that the Markov chains generated with the proposed method exhibit significantly faster convergence speeds, achieve larger effective sample sizes, and produce lower mean square estimation errors at equal computational budget.

References

Showing 1–10 of 54 references
Sparse Bayesian Binary Logistic Regression Using the Split-and-Augmented Gibbs Sampler
TLDR
This paper tackles the sparse Bayesian binary logistic regression problem by relying on the recent split-and-augmented Gibbs sampler (SPA), which appears to be faster than efficient proximal MCMC algorithms and to have a reasonable computational cost compared to optimization-based methods, with the advantage of producing credibility intervals.
An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension
TLDR
Experimental results indicate that adding the proposed auxiliary variables to the model makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous correlations, and the computational cost of each iteration of the Gibbs sampler is significantly reduced.
Efficient Gaussian Sampling for Solving Large-Scale Inverse Problems Using MCMC
TLDR
The main feature of the algorithm is an approximate resolution of a linear system, with a truncation level adjusted by a self-tuning adaptive scheme so as to achieve the minimal computational cost per effective sample.
The Art of Data Augmentation
TLDR
An effective search strategy is introduced that combines the ideas of marginal augmentation and conditional augmentation, together with a deterministic approximation method for selecting good augmentation schemes to obtain efficient Markov chain Monte Carlo algorithms for posterior sampling.
Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications
TLDR
Two applications in Bayesian image analysis are considered: a binary classification problem in which partial decoupling outperforms Swendsen-Wang and single-site Metropolis methods, and a positron emission tomography reconstruction that uses the gray-level prior of Geman and McClure.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
TLDR
It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
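For reference, the scaled-form ADMM updates for minimizing f(x) + g(z) subject to x = z, as standardized in this monograph, are the alternating steps that the split-and-augmented Gibbs conditionals above mimic:

```latex
x^{k+1} = \operatorname*{arg\,min}_x \; f(x) + \tfrac{\rho}{2} \lVert x - z^{k} + u^{k} \rVert^2 \\
z^{k+1} = \operatorname*{arg\,min}_z \; g(z) + \tfrac{\rho}{2} \lVert x^{k+1} - z + u^{k} \rVert^2 \\
u^{k+1} = u^{k} + x^{k+1} - z^{k+1}
```

The Gibbs conditionals replace the two minimizations by draws from the corresponding densities; the dual/consensus variable u has no direct analogue in the basic split sampling scheme.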
A Survey of Stochastic Simulation and Optimization Methods in Signal Processing
TLDR
The paper addresses a variety of high-dimensional Markov chain Monte Carlo methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation and approximate message passing algorithms.
Global Consensus Monte Carlo
TLDR
An instrumental hierarchical model associates auxiliary statistical parameters with each likelihood term; these are conditionally independent given the top-level parameters, leading to a distributed MCMC algorithm on an extended state space that yields approximations of posterior expectations.
Gradient Scan Gibbs Sampler: An Efficient Algorithm for High-Dimensional Gaussian Distributions
TLDR
An efficient algorithm is proposed that avoids the high dimensional Gaussian sampling and relies on a random excursion along a small set of directions and is proved to converge, i.e., the drawn samples are asymptotically distributed according to the target distribution.
Proximal Markov chain Monte Carlo algorithms
This paper presents a new Metropolis-adjusted Langevin algorithm (MALA) that uses convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions widely used in modern high-dimensional statistics and data analysis.