Publications
Block Gibbs Sampling for Bayesian Random Effects Models With Improper Priors: Convergence and Regeneration
Bayesian versions of the classical one-way random effects model are widely used to analyze data. If the standard diffuse prior is adopted, there is a simple block Gibbs sampler that can be employed…
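As a rough illustration of the kind of sampler the abstract refers to, here is a minimal block Gibbs sketch for the balanced one-way model y_ij = μ + α_i + ε_ij, assuming a flat prior on μ and p(σ²) ∝ 1/σ² on each variance component; the paper's exact diffuse prior, blocking, and regeneration analysis may differ.

    import numpy as np

    def block_gibbs(y, n_iter=5000, rng=None):
        # y: (K, m) array for a balanced one-way layout, y[i, j] = mu + alpha[i] + eps_ij.
        # Sketch only: flat prior on mu, p(sigma^2) proportional to 1/sigma^2 on each
        # variance component (assumptions; not necessarily the paper's prior).
        rng = np.random.default_rng() if rng is None else rng
        K, m = y.shape
        ybar_i = y.mean(axis=1)
        sig_e2, sig_a2 = 1.0, 1.0
        out = np.empty((n_iter, 3))  # columns: mu, sigma_alpha^2, sigma_e^2
        for t in range(n_iter):
            # Block 1: joint draw of (mu, alpha) given the variances, done by
            # drawing mu from p(mu | y, variances) with alpha integrated out,
            # then alpha | mu. Composition gives one draw of the whole block.
            v = sig_a2 + sig_e2 / m                      # Var(ybar_i | mu)
            mu = rng.normal(ybar_i.mean(), np.sqrt(v / K))
            prec = m / sig_e2 + 1.0 / sig_a2
            mean_a = (m / sig_e2) * (ybar_i - mu) / prec
            alpha = rng.normal(mean_a, np.sqrt(1.0 / prec))
            # Block 2: variance components given (mu, alpha); inverse-gamma
            # full conditionals under the assumed 1/sigma^2 priors.
            sse = ((y - mu - alpha[:, None]) ** 2).sum()
            sig_e2 = 1.0 / rng.gamma(K * m / 2.0, 2.0 / sse)
            sig_a2 = 1.0 / rng.gamma(K / 2.0, 2.0 / (alpha @ alpha))
            out[t] = mu, sig_a2, sig_e2
        return out

Drawing μ marginally and then α given μ amounts to one joint draw of the block (μ, α), which is what distinguishes a block Gibbs sampler from a one-at-a-time update.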
Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration
  • Hani Doss, Aixin Tan
  • Mathematics, Medicine
  • Journal of the Royal Statistical Society. Series…
  • 1 September 2014
In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown normalizing constant…
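In this notation, a simple single-sample estimator of a ratio of normalizing constants is the importance-sampling average below. This is only a sketch of the underlying identity E_{π1}[ν2(X)/ν1(X)] = m2/m1, with hypothetical unnormalized densities for the check; it assumes iid draws and omits the paper's regeneration-based standard errors for Markov chain samples.

    import numpy as np

    def ratio_of_normalizers(nu1, nu2, draws):
        # Estimate m2/m1 from draws X_1, ..., X_n ~ pi_1 = nu_1/m_1 via
        # the identity E_{pi_1}[nu_2(X)/nu_1(X)] = m_2/m_1.
        w = nu2(draws) / nu1(draws)
        est = w.mean()
        se = w.std(ddof=1) / np.sqrt(len(w))  # valid for iid draws only
        return est, se

    # Hypothetical check with unnormalized normal densities:
    rng = np.random.default_rng(0)
    nu1 = lambda x: np.exp(-x**2 / 8)           # N(0, 4) unnormalized, m1 = sqrt(8*pi)
    nu2 = lambda x: np.exp(-0.5 * (x - 1)**2)   # N(1, 1) unnormalized, m2 = sqrt(2*pi)
    x = rng.normal(0.0, 2.0, 100_000)           # iid draws from pi_1
    print(ratio_of_normalizers(nu1, nu2, x))    # true ratio m2/m1 = 0.5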
When is Eaton’s Markov chain irreducible?
Consider a parametric statistical model P(dx|θ) and an improper prior distribution ν(dθ) that together yield a (proper) formal posterior distribution Q(dθ|x). The prior is called strongly admissible…
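For concreteness, writing f(x|θ) for a density of P(dx|θ), the formal posterior referred to here is the usual Bayes expression, which is proper exactly when the denominator below is finite (the notation is standard, not necessarily the paper's):

    Q(d\theta \mid x) \;=\; \frac{f(x \mid \theta)\, \nu(d\theta)}{\int_\Theta f(x \mid t)\, \nu(dt)}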
Bayesian inference for high‐dimensional linear regression under mnet priors
For regression problems that involve many potential predictors, the Bayesian variable selection (BVS) method is a powerful tool, which associates each model with its posterior…
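Schematically, in common BVS notation (not necessarily the paper's), each candidate model γ is scored by its posterior probability, obtained by integrating out the model-specific coefficients:

    p(\gamma \mid y) \;\propto\; p(\gamma) \int p(y \mid \beta_\gamma, \gamma)\, p(\beta_\gamma \mid \gamma)\, d\beta_\gamma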
Estimating standard errors for importance sampling estimators with multiple Markov chains
The naive importance sampling estimator, based on samples from a single importance density, can be numerically unstable. Instead, we consider generalized importance sampling estimators where samples…
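A generic multiple-proposal estimator of this flavor pools all draws and weights each one by the target over a mixture of the proposals (the "balance heuristic"). The sketch below assumes fully normalized, evaluable densities and iid draws, whereas the paper works with Markov chain samples and with densities known only up to normalizing constants:

    import numpy as np

    def pooled_is(f, target_pdf, proposal_pdfs, samples):
        # Multiple-proposal importance sampling: pool draws from several
        # proposal densities and weight each draw by target / mixture of
        # all proposals. Generic sketch, not the paper's estimator.
        x = np.concatenate(samples)
        n = np.array([len(s) for s in samples], dtype=float)
        mix = sum((n_l / n.sum()) * p(x) for n_l, p in zip(n, proposal_pdfs))
        w = target_pdf(x) / mix
        return (w * f(x)).sum() / w.sum()  # self-normalized version

Weighting by the mixture rather than by any single proposal keeps the weights bounded whenever at least one proposal has heavy enough tails, which is the usual motivation for pooling.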
Honest Importance Sampling With Multiple Markov Chains
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance…
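The identity behind the classical estimator, together with its self-normalized form when π = ν/m and π1 = ν1/m1 are known only up to their normalizing constants (matching the biased-sampling notation above), is:

    E_\pi[f] \;=\; E_{\pi_1}\!\left[ f(X)\, \frac{\pi(X)}{\pi_1(X)} \right],
    \qquad
    \hat{f}_n \;=\; \frac{\sum_{i=1}^n f(X_i)\, w(X_i)}{\sum_{i=1}^n w(X_i)},
    \quad w = \frac{\nu}{\nu_1}, \quad X_i \sim \pi_1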
On the Geometric Ergodicity of Two-Variable Gibbs Samplers
A Markov chain is geometrically ergodic if it converges to its invariant distribution at a geometric rate in total variation norm. We study geometric ergodicity of deterministic and random scan…
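The defining property, stated for a generic transition kernel K with invariant distribution Π (standard notation, not necessarily the paper's), is that for some finite M(x) and some ρ < 1:

    \| K^n(x, \cdot) - \Pi \|_{\mathrm{TV}} \;\le\; M(x)\, \rho^n
    \quad \text{for all } n \ge 1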
Supplement to “Bayesian inference for high-dimensional linear regression under the mnet priors”
Here, a q-dimensional binary vector γ = (γ1, …, γq) ∈ {0, 1}^q =: Γ indicates a selected set of predictors, and βγ denotes the subvector of coefficients for the predictors selected by γ. Prior of the BVS…
Sandwich algorithms for Bayesian variable selection
We propose a class of novel sandwich algorithms for Bayesian variable selection that improve upon the algorithm of Ghosh and Clyde by using Markov chains with faster convergence rates.
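Structurally, a sandwich algorithm inserts an inexpensive extra move between the two conditional draws of a data-augmentation (DA) step. A minimal sketch, with all three functions as hypothetical placeholders rather than the paper's API:

    def sandwich_step(x, draw_y_given_x, extra_move, draw_x_given_y):
        # One iteration of a generic sandwich algorithm: the DA step
        # x -> y -> x' with an extra move y -> y' sandwiched between the
        # two conditional draws. All three callables are hypothetical.
        y = draw_y_given_x(x)   # augmentation draw y ~ p(y | x)
        y = extra_move(y)       # must leave the marginal p(y) invariant
        return draw_x_given_y(y)  # x' ~ p(x | y')

When the extra move preserves the marginal distribution of y, the sandwich chain has the same stationary distribution as plain DA but can converge strictly faster, which is the sense in which these algorithms "improve upon" the base sampler.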