The goal of this paper is to describe conditions that guarantee a central limit theorem for functionals of general state space Markov chains. This is done with a view toward Markov chain Monte Carlo settings, and hence the focus is on the connections between drift and mixing conditions and their implications. In particular, we consider three commonly cited…
The expectation–maximization (EM) algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high-dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the…
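The MCEM idea can be illustrated on a toy problem (an assumption for illustration, not the paper's own example): estimating the rate of an exponential sample that is right-censored at a cutoff `c`. The intractable E-step expectation is replaced by an average over simulated completions of the censored observations.

```python
import numpy as np

def mcem_exponential(obs, n_cens, c, n_iter=50, m=2000, seed=0):
    """Toy MCEM sketch: exponential rate with right-censoring at c.

    obs    : fully observed lifetimes (array-like)
    n_cens : number of observations censored at c
    m      : Monte Carlo sample size for the E-step
    """
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs, dtype=float)
    lam = 1.0                      # starting value for the rate
    n = len(obs) + n_cens
    for _ in range(n_iter):
        # Monte Carlo E-step: by memorylessness, z | z > c is c + Exp(lam),
        # so simulate m completions of the censored lifetimes and average.
        z = c + rng.exponential(1.0 / lam, size=(m, n_cens))
        e_total = obs.sum() + z.sum(axis=1).mean()
        # M-step: complete-data MLE for the exponential rate.
        lam = n / e_total
    return lam
```

The fixed point of this iteration is the censored-data MLE, so the Monte Carlo average is doing exactly the work of the analytic E-step, at the cost of simulation noise that a full MCEM implementation must control.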
Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a target distribution via ergodic averages. A fundamental question is: when should sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the simulation when the width of a confidence…
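A fixed-width stopping rule of this general kind can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: sampling continues until the half-width of an asymptotic confidence interval for the ergodic average falls below a tolerance `epsilon`.

```python
import numpy as np

def fixed_width_stop(sampler, epsilon=0.05, min_n=1000, batch=1000, max_n=100_000):
    """Run sampler(batch) repeatedly; stop when the interval half-width
    for the running mean drops below epsilon (or max_n is reached)."""
    chain = []
    half_width = np.inf
    while len(chain) < max_n:
        chain.extend(sampler(batch))
        n = len(chain)
        if n < min_n:               # guard against stopping too early
            continue
        # iid-style standard error for this sketch; a correlated MCMC chain
        # would need a batch means or spectral variance estimate instead.
        mcse = np.std(chain, ddof=1) / np.sqrt(n)
        half_width = 1.96 * mcse
        if half_width < epsilon:
            break
    return np.mean(chain), half_width, len(chain)
```

The `min_n` guard matters in practice: early in the run the variance estimate is unreliable, so stopping decisions should only begin once a minimum simulation effort has been spent.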
We consider evaluation of proper posterior distributions obtained from improper prior distributions. Our context is estimating a bounded function φ of a parameter when the loss is quadratic. If the posterior mean of φ is admissible for all bounded φ, the posterior is strongly admissible. We give sufficient conditions for strong admissibility. These…
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular,…
A common objective of fMRI (functional magnetic resonance imaging) studies is to determine subject-specific areas of increased blood oxygenation level dependent (BOLD) signal contrast in response to a stimulus or task, and hence to infer regional neuronal activity. We posit and investigate a Bayesian approach that incorporates spatial and temporal…
We consider two-component block Gibbs sampling for a Bayesian hierarchical version of the normal theory general linear model. This model is practically relevant in the sense that it is general enough to have many applications, yet it is not straightforward to sample directly from the corresponding posterior distribution. There are two possible orders…
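The two-block structure can be illustrated on a much simpler stand-in model (an assumption for illustration, not the paper's hierarchical linear model): a normal sample with a flat prior on the mean and a 1/σ² prior on the variance, alternating draws from the two full conditionals in a fixed order (μ, then σ²).

```python
import numpy as np

def two_block_gibbs(y, n_iter=4000, seed=0):
    """Two-block Gibbs for (mu, sigma^2) in a normal model.

    Block 1: mu | sigma^2, y  ~  N(ybar, sigma^2 / n)
    Block 2: sigma^2 | mu, y  ~  Inv-Gamma(n/2, ss/2),
             drawn as ss / chi^2_n.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n, ybar = len(y), y.mean()
    mu, sig2 = ybar, y.var()                 # start from data-based values
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        mu = rng.normal(ybar, np.sqrt(sig2 / n))
        ss = np.sum((y - mu) ** 2)
        sig2 = ss / rng.chisquare(n)
        draws[t] = mu, sig2
    return draws
```

Reversing the two lines inside the loop gives the other update order; the abstract's point is that which order is used can matter for the properties of the resulting chain.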
1.1 Introduction

Our goal is to introduce some of the tools useful for analyzing the output of a Markov chain Monte Carlo (MCMC) simulation. In particular, we focus on methods which allow the practitioner (and others!) to have confidence in the claims put forward. The following are the main issues we will address: (1) initial graphical assessment of MCMC…