Split-and-Augmented Gibbs Sampler—Application to Large-Scale Inference Problems
@article{Vono2019SplitandAugmentedGS,
  title   = {Split-and-Augmented Gibbs Sampler---Application to Large-Scale Inference Problems},
  author  = {Maxime Vono and Nicolas Dobigeon and Pierre Chainais},
  journal = {IEEE Transactions on Signal Processing},
  year    = {2019},
  volume  = {67},
  pages   = {1648--1661}
}
This paper derives two new optimization-driven Monte Carlo algorithms inspired by variable splitting and data augmentation. In particular, the formulation of one of the proposed approaches is closely related to the main steps of the alternating direction method of multipliers (ADMM). The proposed framework makes it possible to derive sampling schemes that are faster and more efficient than current state-of-the-art methods, and can embed the latter as special cases. By efficiently sampling the parameter to infer as well as the…
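To illustrate the splitting idea, the two-block "split Gibbs" sweep alternately samples the parameter x given a splitting variable z, and z given x, from an augmented density of the form π_ρ(x, z) ∝ exp{−f(x) − g(z) − ‖x − z‖²/(2ρ²)}. The following is a minimal sketch, not the paper's own code; the ridge-regression target, variable names, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy target: ridge-regression posterior
#   pi(x) ∝ exp(-||y - A x||^2 / (2 sigma^2) - lam ||x||^2 / 2)
m, n = 5, 3
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -0.5, 0.3])
sigma, lam, rho = 1.0, 1.0, 0.1
y = A @ x_true + sigma * rng.standard_normal(m)

# Split-and-augmented density:
#   pi_rho(x, z) ∝ exp(-||y - A x||^2/(2 sigma^2) - lam ||z||^2/2 - ||x - z||^2/(2 rho^2))
Qx = A.T @ A / sigma**2 + np.eye(n) / rho**2   # precision of x | z (constant)
Lx = np.linalg.cholesky(Qx)
prec_z = lam + 1.0 / rho**2                    # per-coordinate precision of z | x

def sample_x_given_z(z):
    b = A.T @ y / sigma**2 + z / rho**2
    mean = np.linalg.solve(Qx, b)
    return mean + np.linalg.solve(Lx.T, rng.standard_normal(n))

def sample_z_given_x(x):
    mean = (x / rho**2) / prec_z
    return mean + rng.standard_normal(n) / np.sqrt(prec_z)

# Two-block Gibbs sweep over (x, z)
z = np.zeros(n)
samples = []
for it in range(20000):
    x = sample_x_given_z(z)
    z = sample_z_given_x(x)
    if it >= 2000:
        samples.append(x)
post_mean = np.mean(samples, axis=0)

# Exact ridge posterior mean, for comparison with the chain average
exact = np.linalg.solve(A.T @ A / sigma**2 + lam * np.eye(n), A.T @ y / sigma**2)
```

As ρ shrinks, the marginal of x under the augmented density approaches the original posterior, so the chain average of x should agree with the exact ridge posterior mean up to Monte Carlo error.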
20 Citations
Efficient Sampling through Variable Splitting-inspired Bayesian Hierarchical Models
- Computer Science · ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2019
A new Bayesian hierarchical model, inspired by variable splitting methods, is proposed to solve large-scale inference problems; it can lead to a faster sampling scheme than state-of-the-art methods by embedding them.
Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting
- Computer Science · J. Mach. Learn. Res.
- 2022
A detailed theoretical study is presented of a recent alternative class of MCMC schemes, known as the split Gibbs sampler, which exploit a splitting strategy akin to the one used by the celebrated ADMM optimization algorithm.
Sparse Bayesian Binary Logistic Regression Using the Split-and-Augmented Gibbs Sampler
- Computer Science · 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)
- 2018
This paper tackles the sparse Bayesian binary logistic regression problem by relying on the recent split-and-augmented Gibbs sampler (SPA), which appears to be faster than efficient proximal MCMC algorithms, presents a reasonable computational cost compared to optimization-based methods, and has the advantage of producing credibility intervals.
Asymptotically Exact Data Augmentation: Models, Properties, and Algorithms
- Computer Science, Mathematics · J. Comput. Graph. Stat.
- 2021
A unified framework, coined asymptotically exact data augmentation (AXDA), encompassing both well-established and more recent approximate augmented models, is studied; it is shown that AXDA models can benefit from interesting statistical properties and yield efficient inference algorithms.
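The AXDA construction can be stated compactly (a sketch from memory of that line of work, not a quotation; symbols f, g, and ρ are the usual notation assumed here). Given a target π(x) ∝ exp{−f(x) − g(x)}, a splitting variable z is coupled to x via a quadratic penalty:

```latex
% Augmented density with coupling parameter \rho > 0:
\pi_\rho(x, z) \propto \exp\Big\{-f(x) - g(z) - \frac{1}{2\rho^2}\,\|x - z\|_2^2\Big\},
\qquad
\int \pi_\rho(x, z)\,\mathrm{d}z \xrightarrow[\rho \to 0]{} \pi(x).
```

The augmentation is "asymptotically exact" in the sense that the marginal of x recovers the original target as ρ → 0, while for ρ > 0 the conditionals of x and z are typically much simpler to sample.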
On variable splitting for Markov chain Monte Carlo
- Computer Science
- 2019
This work takes inspiration from the variable splitting idea to build efficient Markov chain Monte Carlo (MCMC) algorithms, which are illustrated on classical image processing and statistical learning problems.
High-Dimensional Gaussian Sampling: A Review and a Unifying Approach Based on a Stochastic Proximal Point Algorithm
- Computer Science · SIAM Review
- 2022
This paper proposes a unifying Gaussian simulation framework by deriving a stochastic counterpart of the celebrated proximal point algorithm in optimization, and offers a novel revisit of most existing MCMC approaches while extending them.
Optimized Population Monte Carlo
- Computer Science · IEEE Transactions on Signal Processing
- 2022
This paper proposes a novel algorithm that exploits the benefits of the PMC framework and includes more efficient adaptive mechanisms, exploiting geometric information of the target distribution, and shows the successful performance of the proposed method in three numerical examples, involving challenging distributions.
Global Consensus Monte Carlo
- Computer Science · J. Comput. Graph. Stat.
- 2021
An instrumental hierarchical model associating auxiliary statistical parameters with each term, which are conditionally independent given the top-level parameters, leads to a distributed MCMC algorithm on an extended state space yielding approximations of posterior expectations.
A Data Augmentation Approach for Sampling Gaussian Models in High Dimension
- Computer Science · 2019 27th European Signal Processing Conference (EUSIPCO)
- 2019
Data augmentation (DA) sampling algorithms for Gaussian sampling in vibration analysis applications are reviewed, and a DA method is proposed that is especially useful when direct sampling of the auxiliary variable is not computationally straightforward.
Accelerating proximal Markov chain Monte Carlo by using explicit stabilised methods
- Mathematics · SIAM J. Imaging Sci.
- 2020
Comparisons with Euler-type proximal Monte Carlo methods confirm that the Markov chains generated with the proposed method exhibit significantly faster convergence speeds, achieve larger effective sample sizes, and produce lower mean square estimation errors at equal computational budget.
References
Showing 1-10 of 54 references
Sparse Bayesian Binary Logistic Regression Using the Split-and-Augmented Gibbs Sampler
- Computer Science · 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)
- 2018
This paper tackles the sparse Bayesian binary logistic regression problem by relying on the recent split-and-augmented Gibbs sampler (SPA), which appears to be faster than efficient proximal MCMC algorithms, presents a reasonable computational cost compared to optimization-based methods, and has the advantage of producing credibility intervals.
An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension
- Computer Science · Entropy
- 2018
Experimental results indicate that adding the proposed auxiliary variables to the model makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous correlations, and the computational cost of each iteration of the Gibbs sampler is significantly reduced.
Efficient Gaussian Sampling for Solving Large-Scale Inverse Problems Using MCMC
- Computer Science, Mathematics · IEEE Transactions on Signal Processing
- 2015
The main feature of the algorithm is to perform an approximate resolution of a linear system with a truncation level adjusted using a self-tuning adaptive scheme, which achieves the minimal computation cost per effective sample.
The Art of Data Augmentation
- Computer Science
- 2001
An effective search strategy is introduced that combines the ideas of marginal augmentation and conditional augmentation, together with a deterministic approximation method for selecting good augmentation schemes to obtain efficient Markov chain Monte Carlo algorithms for posterior sampling.
Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications
- Computer Science
- 1998
Two applications in Bayesian image analysis are considered: a binary classification problem in which partial decoupling out performs Swendsen-Wang and single-site Metropolis methods, and a positron emission tomography reconstruction that uses the gray level prior of Geman and McClure.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Computer Science · Found. Trends Mach. Learn.
- 2011
It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
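The ADMM iteration referenced throughout this page can be sketched in a few lines for a lasso problem. This is a hypothetical illustration under assumed names and parameter values, not code from the cited monograph.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lasso problem: minimize 0.5 ||A x - b||^2 + lam ||x||_1,
# split as f(x) + g(z) subject to x = z.
m, n = 30, 10
A = rng.standard_normal((m, n))
b = A @ np.concatenate([np.ones(2), np.zeros(n - 2)]) + 0.01 * rng.standard_normal(m)
lam, rho = 0.1, 1.0

# The x-update solves a quadratic subproblem with a constant system matrix
M = A.T @ A + rho * np.eye(n)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)  # scaled dual variable (running constraint residual)
for _ in range(200):
    x = np.linalg.solve(M, A.T @ b + rho * (z - u))   # smooth subproblem
    z = soft_threshold(x + u, lam / rho)              # proximal step on the l1 term
    u = u + x - z                                     # dual ascent on x = z
```

At convergence the split variables agree (x ≈ z); it is exactly this alternation between a smooth subproblem, a proximal step, and a coupling update that the sampling schemes on this page mimic stochastically.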
A Survey of Stochastic Simulation and Optimization Methods in Signal Processing
- Computer Science · IEEE Journal of Selected Topics in Signal Processing
- 2016
The paper addresses a variety of high-dimensional Markov chain Monte Carlo methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation and approximate message passing algorithms.
Global Consensus Monte Carlo
- Computer Science · J. Comput. Graph. Stat.
- 2021
An instrumental hierarchical model associating auxiliary statistical parameters with each term, which are conditionally independent given the top-level parameters, leads to a distributed MCMC algorithm on an extended state space yielding approximations of posterior expectations.
Gradient Scan Gibbs Sampler: An Efficient Algorithm for High-Dimensional Gaussian Distributions
- Computer Science, Mathematics · IEEE Journal of Selected Topics in Signal Processing
- 2016
An efficient algorithm is proposed that avoids the high dimensional Gaussian sampling and relies on a random excursion along a small set of directions and is proved to converge, i.e., the drawn samples are asymptotically distributed according to the target distribution.
Proximal Markov chain Monte Carlo algorithms
- Computer Science · Stat. Comput.
- 2016
This paper presents a new Metropolis-adjusted Langevin algorithm (MALA) that uses convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability…
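The Moreau-Yosida idea behind proximal MCMC can be sketched on a toy non-smooth target. The chain below is an illustrative MYULA-style unadjusted Langevin sampler, not the cited paper's algorithm; the 1-D Laplace target, smoothing parameter, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy non-smooth target: pi(x) ∝ exp(-|x|)  (1-D Laplace).
# Replace g(x) = |x| by its Moreau envelope g_lam, whose gradient is
# (x - prox_{lam g}(x)) / lam, where prox_{lam g} is soft thresholding.
lam, delta = 0.1, 0.01   # envelope smoothing parameter and Langevin step size

def prox_abs(x, t):
    return np.sign(x) * max(abs(x) - t, 0.0)

def grad_envelope(x):
    return (x - prox_abs(x, lam)) / lam

x = 0.0
samples = []
for it in range(60000):
    # Unadjusted Langevin step on the smoothed (differentiable) target
    x = x - delta * grad_envelope(x) + np.sqrt(2 * delta) * rng.standard_normal()
    if it >= 5000:
        samples.append(x)
samples = np.array(samples)
```

For small lam and delta the chain approximately targets the Laplace density (mean 0, standard deviation about 1.41); the smoothing and discretization each introduce a small, controllable bias.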