Corpus ID: 235313527

Bayesian Inference for Gamma Models

@inproceedings{He2021BayesianIF,
  title={Bayesian Inference for Gamma Models},
  author={Jingyu He and Nicholas G. Polson and Jianeng Xu},
  year={2021}
}
We use the theory of normal variance-mean mixtures to derive a data augmentation scheme for models that include gamma functions. Our methodology applies to many situations in statistics and machine learning, including Multinomial–Dirichlet distributions, negative binomial regression, Poisson–Gamma hierarchical models, and extreme value models, to name but a few. All of these models include a gamma function that does not admit a natural conjugate prior distribution, which poses a significant challenge…
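For context (illustrative notation, not quoted from the paper), the generic normal variance–mean mixture representation underlying this style of data augmentation writes a non-Gaussian likelihood as a Gaussian averaged over a latent scale; conditional on the latent ω the model is Gaussian, and ω has its own tractable full conditional, which yields a two-block Gibbs sampler:

```latex
% Generic normal variance-mean mixture (sketch; mu, kappa, g are placeholder symbols)
p(y \mid \theta) \;=\; \int_{0}^{\infty} \mathcal{N}\!\bigl(y \mid \mu(\theta) + \kappa\,\omega,\; \omega\bigr)\, g(\omega)\, d\omega
```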


On Data Augmentation for Models Involving Reciprocal Gamma Functions

This paper introduces a new and efficient data augmentation approach for posterior inference on shape parameters whose full conditional densities involve the reciprocal gamma function, based on Gauss's multiplication formula and Stirling's formula for the gamma function.
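For reference, the two classical identities named in that summary are standard results and can be stated as follows (context only, not quoted from the paper):

```latex
% Gauss's multiplication formula (n a positive integer)
\Gamma(nz) \;=\; (2\pi)^{\frac{1-n}{2}}\, n^{\,nz - \frac{1}{2}} \prod_{k=0}^{n-1} \Gamma\!\Bigl(z + \tfrac{k}{n}\Bigr)

% Stirling's approximation for the gamma function (large z)
\Gamma(z) \;\sim\; \sqrt{\tfrac{2\pi}{z}}\, \Bigl(\tfrac{z}{e}\Bigr)^{z}
```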

References

Showing 1–10 of 36 references

Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables

We propose a new data-augmentation strategy for fully Bayesian inference in models with binomial likelihoods. The approach appeals to a new class of Pólya–Gamma distributions, which are constructed…
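The central integral identity behind Pólya–Gamma augmentation (restated here for context) expresses binomial-type likelihood terms as Gaussian mixtures over a Pólya–Gamma latent variable:

```latex
\frac{\bigl(e^{\psi}\bigr)^{a}}{\bigl(1 + e^{\psi}\bigr)^{b}}
  \;=\; 2^{-b}\, e^{\kappa \psi} \int_{0}^{\infty} e^{-\omega \psi^{2}/2}\, p(\omega)\, d\omega,
\qquad \kappa = a - \tfrac{b}{2}, \quad \omega \sim \mathrm{PG}(b, 0)
```

Conditional on ω, the likelihood is Gaussian in ψ, so conjugate Gaussian updates apply.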

Inference with normal-gamma prior distributions in regression problems

This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken…

Fully Bayesian inference for neural models with negative-binomial spiking

This work develops a powerful data-augmentation framework for fully Bayesian inference in neural models with negative-binomial spiking; it substantially outperforms Poisson regression on held-out data and reveals latent structure underlying spike count correlations in simultaneously recorded spike trains.

Sequential Bayesian Analysis of Multivariate Count Data

We develop a new class of dynamic multivariate Poisson count models that allow for fast online updating, and we refer to these models as multivariate Poisson-scaled beta (MPSB). The MPSB model allows…

Dependent Multinomial Models Made Easy: Stick-Breaking with the Polya-gamma Augmentation

This work uses a logistic stick-breaking representation and recent innovations in Polya-gamma augmentation to reformulate the multinomial distribution in terms of latent variables with jointly Gaussian likelihoods, enabling it to take advantage of a host of Bayesian inference techniques for Gaussian models with minimal overhead.
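Sketch of the logistic stick-breaking map used there (illustrative notation): with σ the logistic function, a K-dimensional probability vector is built from K−1 real-valued ψ_k, and each factor is a binomial-type term amenable to Pólya–Gamma augmentation as above:

```latex
\pi_k \;=\; \sigma(\psi_k) \prod_{j<k} \bigl(1 - \sigma(\psi_j)\bigr), \quad k = 1, \dots, K-1,
\qquad
\pi_K \;=\; \prod_{j=1}^{K-1} \bigl(1 - \sigma(\psi_j)\bigr)
```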

EP-GIG Priors and Applications in Bayesian Sparse Learning

This paper defines EP-GIG priors as mixtures of exponential power distributions with a generalized inverse Gaussian mixing density (a variant of the generalized hyperbolic distributions) and shows that the corresponding estimation algorithms bear an interesting resemblance to iteratively reweighted l2 or l1 methods.
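Schematically (a sketch based on the summary above; the paper's exact parameterization may differ), an EP-GIG prior mixes an exponential power kernel over a generalized inverse Gaussian scale:

```latex
p(x) \;=\; \int_{0}^{\infty} \mathrm{EP}\bigl(x \mid 0,\, \eta,\, q\bigr)\; \mathrm{GIG}\bigl(\eta \mid \lambda, a, b\bigr)\, d\eta,
\qquad
\mathrm{GIG}(\eta \mid \lambda, a, b) \;\propto\; \eta^{\lambda - 1} \exp\!\Bigl(-\tfrac{1}{2}\bigl(a\eta + b/\eta\bigr)\Bigr)
```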

Approximate Random Variate Generation from Infinitely Divisible Distributions with Applications to Bayesian Inference

Stochastic processes with independent increments play a central role in Bayesian nonparametric inference. The distributions of the increments of these processes, aside from fixed points of…

Fast and Accurate Approximation of the Full Conditional for Gamma Shape Parameters

The gamma distribution arises frequently in Bayesian models, but there is not an easy-to-use conjugate prior for the shape parameter of a gamma. This inconvenience is usually dealt with by…
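To see why the shape parameter is awkward (standard derivation, not quoted from the paper): with observations x_1, …, x_n ~ Gamma(α, β) and a prior p(α), the full conditional for the shape is

```latex
p(\alpha \mid x, \beta) \;\propto\; p(\alpha)\; \frac{\beta^{n\alpha}}{\Gamma(\alpha)^{n}} \Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\alpha}
```

and the Γ(α)^{-n} factor prevents any standard exponential-family prior from being conjugate.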

Bayesian Analysis of Dynamic Linear Topic Models

It is demonstrated that sharing information across documents is critical for accurately estimating document-specific topic proportions and that explicitly modeling polynomial and periodic behavior improves the ability to predict topic prevalence at future time points.

Data augmentation for non-Gaussian regression models using variance-mean mixtures

We use the theory of normal variance-mean mixtures to derive a data-augmentation scheme for a class of common regularization problems. This generalizes existing theory on normal variance mixtures for…
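As a concrete, self-contained illustration of the variance-mixture idea (a well-known special case chosen for illustration, not code from either paper): the Laplace distribution with scale b is a normal variance mixture with Exponential(rate = 1/(2b²)) mixing, which the short Monte Carlo check below confirms.

```python
# Minimal sketch (not from the paper): the Laplace(0, b) density arises as a
# normal variance mixture,  y | w ~ N(0, w),  w ~ Exponential(rate = 1/(2 b^2)).
# Conditioning on the latent variance w is what makes Gibbs-style data
# augmentation possible for such non-Gaussian likelihoods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
b = 1.5                      # Laplace scale parameter
n = 200_000                  # Monte Carlo sample size

w = rng.exponential(scale=2 * b**2, size=n)   # latent variances (rate 1/(2 b^2))
y = rng.normal(loc=0.0, scale=np.sqrt(w))     # conditionally Gaussian draws

# Compare the mixture draws with direct Laplace draws via a few summary checks.
direct = rng.laplace(loc=0.0, scale=b, size=n)
print("mixture std:", y.std(), " direct std:", direct.std())           # both ~ b*sqrt(2)
print("KS statistic vs Laplace cdf:",
      stats.kstest(y, stats.laplace(loc=0.0, scale=b).cdf).statistic)  # close to 0
```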