# Trace-class Monte Carlo Markov chains for Bayesian multivariate linear regression with non-Gaussian errors

@article{Qin2018TraceclassMC,
title={Trace-class Monte Carlo Markov chains for Bayesian multivariate linear regression with non-Gaussian errors},
author={Qian Qin and James P. Hobert},
journal={J. Multivar. Anal.},
year={2018},
volume={166},
pages={335--345}
}
• Published 30 January 2016
• Mathematics, Computer Science
• J. Multivar. Anal.
Let $\pi$ denote the intractable posterior density that results when the likelihood from a multivariate linear regression model with errors from a scale mixture of normals is combined with the standard non-informative prior. There is a simple data augmentation algorithm (based on latent data from the mixing density) that can be used to explore $\pi$. Let $h(\cdot)$ and $d$ denote the mixing density and the dimension of the regression model, respectively. Hobert et al. (2016) [arXiv:1506.03113v2…
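The DA algorithm referenced in the abstract alternates between drawing the latent mixing variables and drawing the regression parameters. As a minimal illustration (not the paper's multivariate implementation), the sketch below uses a single response ($d = 1$) with Student-$t_\nu$ errors, i.e. a Gamma$(\nu/2, \nu/2)$ mixing density $h$, under the standard non-informative prior $\pi(\beta, \sigma^2) \propto 1/\sigma^2$; all variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated data: linear regression with Student-t(nu) errors ---
n, p, nu = 200, 3, 5.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_t(nu, size=n)

def da_sampler(y, X, nu, n_iter=2000):
    """DA (Gibbs) sampler for y = X beta + eps, eps_i ~ t_nu written as a
    scale mixture of normals: eps_i | w_i ~ N(0, sigma2 / w_i) with
    w_i ~ Gamma(nu/2, nu/2), under the flat prior pi(beta, sigma2) ∝ 1/sigma2."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting value
    sigma2 = np.var(y - X @ beta)
    draws = np.empty((n_iter, p + 1))
    for t in range(n_iter):
        # Step 1: latent weights given (beta, sigma2):
        #   w_i | ... ~ Gamma((nu+1)/2, rate = (nu + r_i^2 / sigma2)/2)
        r = y - X @ beta
        w = rng.gamma((nu + 1) / 2, 2.0 / (nu + r**2 / sigma2))
        # Step 2: (beta, sigma2) given w, via weighted least squares:
        #   sigma2 | w ~ Inv-Gamma((n-p)/2, SSR_w/2),
        #   beta | sigma2, w ~ N(beta_hat_w, sigma2 (X' W X)^{-1})
        XtW = X.T * w
        XtWX_inv = np.linalg.inv(XtW @ X)
        beta_hat = XtWX_inv @ (XtW @ y)
        resid = y - X @ beta_hat
        ssr = np.sum(w * resid**2)
        sigma2 = ssr / (2 * rng.gamma((n - p) / 2, 1.0))
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtWX_inv)
        draws[t] = np.append(beta, sigma2)
    return draws

draws = da_sampler(y, X, nu)
print(draws[1000:, :3].mean(axis=0))  # posterior means of beta (after burn-in)
```

The trace-class results in the paper concern the Markov operator of exactly this kind of two-block sampler in the multivariate setting, with conditions stated in terms of the mixing density $h$ and the dimension $d$.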

## Citations (5)

Convergence Analysis of MCMC Algorithms for Bayesian Multivariate Linear Regression with Non‐Gaussian Errors
• Mathematics
• 2018
When Gaussian errors are inappropriate in a multivariate linear regression setting, it is often assumed that the errors are iid from a distribution that is a scale mixture of multivariate normals.
Uncertainty Quantification for Modern High-Dimensional Regression via Scalable Bayesian Methods
• Computer Science
Journal of Computational and Graphical Statistics
• 2018
It is demonstrated that the proposed class of two-step blocked samplers exhibits vastly superior convergence behavior compared to the original three-step sampler in high-dimensional regimes on simulated data as well as data from a variety of applications including gene expression data, infrared spectroscopy data, and socio-economic/law enforcement data.
Consistent estimation of the spectrum of trace class Data Augmentation algorithms
• Mathematics, Computer Science
Bernoulli
• 2019
This paper proposes a novel method to consistently estimate the entire spectrum of a general class of Markov chains arising from a popular and widely used statistical approach known as Data Augmentation.
Estimating the spectral gap of a trace-class Markov operator
• Mathematics
Electronic Journal of Statistics
• 2019
The utility of a Markov chain Monte Carlo algorithm is, in large part, determined by the size of the spectral gap of the corresponding Markov operator. However, calculating (and even approximating)
Estimating accuracy of the MCMC variance estimator: Asymptotic normality for batch means estimators
• Saptarshi Chakraborty, Suman K. Bhattacharya
• Statistics & Probability Letters
• 2021

## References

Showing 1–10 of 43 references
Convergence Analysis of MCMC Algorithms for Bayesian Multivariate Linear Regression with Non‐Gaussian Errors
• Mathematics
• 2018
When Gaussian errors are inappropriate in a multivariate linear regression setting, it is often assumed that the errors are iid from a distribution that is a scale mixture of multivariate normals.
On Monte Carlo methods for Bayesian multivariate regression models with heavy-tailed errors
• Computer Science, Mathematics
J. Multivar. Anal.
• 2010
The new algorithm that is introduced is theoretically superior to the DA algorithm, yet equivalent to DA in terms of computational complexity, and it is proved that, under conditions on n, d, k, and the degrees of freedom of the t distribution, both algorithms converge at a geometric rate.
A spectral analytic comparison of trace-class data augmentation algorithms and their sandwich variants
• Mathematics
• 2011
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo algorithm that is easy to implement but often suffers from slow convergence. The sandwich algorithm is an alternative
Improving the Convergence Properties of the Data Augmentation Algorithm with an Application to Bayesian Mixture Modeling
• Mathematics
• 2009
The reversible Markov chains that drive the data augmentation (DA) and sandwich algorithms define self-adjoint operators whose spectra encode the convergence properties of the algorithms. When the
Geometric ergodicity and the spectral gap of non-reversible Markov chains
• Mathematics
• 2009
We argue that the spectral theory of non-reversible Markov chains may often be more effectively cast within the framework of the naturally associated weighted-$L_\infty$ space $L_\infty^V$, instead of
Covariance structure of the Gibbs sampler with applications to the comparisons of estimators and augmentation schemes
• Mathematics
• 1994
SUMMARY We study the covariance structure of a Markov chain generated by the Gibbs sampler, with emphasis on data augmentation. When applied to a Bayesian missing data problem, the Gibbs sampler
Honest Exploration of Intractable Probability Distributions via Markov Chain Monte Carlo
• Mathematics
• 2001
Two important questions that must be answered whenever a Markov chain Monte Carlo (MCMC) algorithm is used are (Q1) What is an appropriate burn-in? and (Q2) How long should the sampling continue
Bayesian robust multivariate linear regression with incomplete data
The multivariate t distribution and other normal/independent multivariate distributions, such as the multivariate slash distribution and the multivariate contaminated distribution, are used
Multivariate Student-t Regression Models: Pitfalls and Inference
• Mathematics
• 1999
We consider likelihood-based inference from multivariate regression models with independent Student-t errors. Some very intriguing pitfalls of both Bayesian and classical methods on the basis of
Markov Chains for Exploring Posterior Distributions
Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are