Sylvia Frühwirth-Schnatter

Several new estimators of the marginal likelihood for complex non-Gaussian models are developed. These estimators make use of the output of auxiliary mixture sampling for count data and for binary and multinomial data. One of these estimators is based on combining Chib's estimator with data augmentation as in auxiliary mixture sampling, while the other …
The article proposes an improved method of auxiliary mixture sampling for count data, binomial data and multinomial data. In contrast to previously proposed samplers, the method uses a limited number of latent variables per observation, independent of the intensity of the underlying Poisson process in the case of count data, or of the number of experiments …
We use neural networks (NN) as a tool for a nonlinear autoregression to predict the second moment of the conditional density of return series. The NN models are compared to the popular econometric GARCH(1,1) model. We estimate the models in a Bayesian framework using Markov chain Monte Carlo posterior simulations. The interlinked aspects of the proposed …
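The GARCH(1,1) benchmark mentioned above has a simple closed-form variance recursion. A minimal sketch (not the authors' Bayesian estimation code; parameter names `omega`, `alpha`, `beta` are illustrative):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

The recursion is the quantity the NN autoregression competes against when predicting the conditional second moment.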
The goal of this article is an exact Bayesian analysis of the Heston (1993) stochastic volatility model. We carefully study the effect that different parameterizations of the latent volatility process and of the parameters of the volatility process have on the convergence and the mixing behavior of the sampler. We apply the sampler to simulated data and to some …
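For context, the Heston (1993) model the sampler targets can be simulated with a simple Euler scheme. A hedged sketch (this is a standard discretization with full truncation of negative variance proposals, not the article's sampler):

```python
import numpy as np

def simulate_heston(s0, v0, mu, kappa, theta, sigma, rho, dt, n, rng):
    """Euler discretization of the Heston model:
        dS = mu * S dt + sqrt(v) * S dW1,
        dv = kappa * (theta - v) dt + sigma * sqrt(v) dW2,
    with corr(dW1, dW2) = rho. Negative variance values are
    truncated at zero inside the square roots (full truncation)."""
    s = np.empty(n + 1)
    v = np.empty(n + 1)
    s[0], v[0] = s0, v0
    for t in range(n):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
        vp = max(v[t], 0.0)  # truncate before taking square roots
        s[t + 1] = s[t] * np.exp((mu - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v[t + 1] = v[t] + kappa * (theta - v[t]) * dt + sigma * np.sqrt(vp * dt) * z2
    return s, v
```

Simulated paths like these are the kind of data on which such a sampler's convergence and mixing can be checked.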
This article deals with binomial logit models where the parameters are estimated within a Bayesian framework. Such models arise, for instance, when repeated measurements are available for identical covariate patterns. To perform MCMC sampling, we rewrite the binomial logit model as an augmented model which involves some latent variables called random …
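The binomial logit model itself is easy to state. A minimal sketch of its random-utility reading (a trial succeeds when x'beta plus a standard logistic error is positive, equivalently with probability 1/(1+exp(-x'beta))); the function and argument names are illustrative, not the article's augmented sampler:

```python
import numpy as np

def simulate_binomial_logit(X, beta, n_trials, rng):
    """Draw binomial counts under a logit model: for each covariate
    row x, success probability is p = 1 / (1 + exp(-x @ beta)),
    and n_trials repeated measurements share that covariate pattern."""
    eta = X @ beta                     # linear predictor x'beta
    p = 1.0 / (1.0 + np.exp(-eta))     # logistic link
    return rng.binomial(n_trials, p)   # one count per covariate pattern
```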
In the framework of Bayesian model-based clustering based on a finite mixture of Gaussian distributions, we present a joint approach that simultaneously estimates the number of mixture components, identifies cluster-relevant variables, and yields an identified model. Our approach consists of specifying sparse hierarchical priors on the mixture …