This paper proposes a new approach to sparse-signal detection called the horseshoe estimator. We show that the horseshoe is a close cousin of the lasso in that it arises from the same class of multivariate scale mixtures of normals, but that it is almost universally superior to the double-exponential prior at handling sparsity. A theoretical framework is …
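The scale-mixture representation mentioned above can be illustrated with a short sketch: under the horseshoe prior, each coefficient is conditionally Gaussian with a half-Cauchy local scale. The global scale `tau` and sample size below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_horseshoe(n, tau=1.0, rng=rng):
    """Draw n coefficients from the horseshoe prior via its mixture form:
    beta_i | lambda_i ~ N(0, lambda_i^2 * tau^2),  lambda_i ~ half-Cauchy(0, 1)."""
    lam = np.abs(rng.standard_cauchy(n))   # half-Cauchy local shrinkage scales
    return rng.normal(0.0, lam * tau)

draws = sample_horseshoe(10_000, tau=0.1)
```

The heavy half-Cauchy tails let a few draws escape shrinkage while the infinite spike at zero pulls the rest toward exact sparsity, which is the behavior contrasted with the double-exponential prior.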
This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the …
We propose the Bayesian bridge estimator for regularized regression and classification. Two key mixture representations for the Bayesian bridge model are developed: a scale mixture of normal distributions with respect to an α-stable random variable; a mixture of Bartlett–Fejér kernels (or triangle densities) with respect to a two-component mixture of gamma …
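The prior underlying the bridge model is the exponential-power density p(β) ∝ exp(−|β/τ|^α), which the abstract's α-stable normal mixture represents. A minimal sketch of the penalty itself (the α and τ values are illustrative, and this evaluates only the unnormalized log prior, not the paper's mixture samplers):

```python
import numpy as np

def bridge_log_prior(beta, alpha=0.5, tau=1.0):
    """Unnormalized log density of the bridge (exponential-power) prior:
    log p(beta) = -sum_j |beta_j / tau|^alpha + const."""
    return -np.sum(np.abs(np.asarray(beta, dtype=float) / tau) ** alpha)
```

Setting alpha=1 recovers the Laplace (lasso) penalty, and alpha=2 the Gaussian (ridge) penalty, so the bridge family interpolates between the two.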
Particle learning (PL) provides state filtering, sequential parameter learning and smoothing in a general class of state space models. Our approach extends existing particle methods by incorporating the estimation of static parameters via a fully adapted filter that utilizes conditional sufficient statistics for parameters and/or states as particles. State …
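For orientation, here is the plain bootstrap particle filter that PL extends, run on a toy linear-Gaussian AR(1) state-space model. This sketch does neither parameter learning nor the fully adapted propagation with sufficient statistics described in the abstract; all model values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (illustrative): x_t = phi*x_{t-1} + w_t,  y_t = x_t + v_t
phi, sig_w, sig_v, T, N = 0.9, 0.5, 1.0, 50, 2000

# Simulate a latent path and noisy observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0.0, sig_w)
y = x + rng.normal(0.0, sig_v, T)

# Bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.normal(0.0, 1.0, N)
means = np.zeros(T)
for t in range(T):
    particles = phi * particles + rng.normal(0.0, sig_w, N)   # propagate
    logw = -0.5 * ((y[t] - particles) / sig_v) ** 2           # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means[t] = particles @ w                                   # filtered mean
    particles = rng.choice(particles, size=N, p=w)             # multinomial resample
```

PL's contribution is to resample first using the predictive distribution and to carry conditional sufficient statistics alongside the states, which this basic filter does not do.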
This paper develops a simulation-based approach to sequential parameter learning and filtering in general state-space models. Our methodology is based on a rolling-window Markov chain Monte Carlo (MCMC) approach and can be easily implemented by modifying state-space smoothing algorithms. Furthermore, the filter avoids the degeneracies that hinder particle …
This paper develops particle learning (PL) methods for the estimation of general mixture models. The approach is distinguished from alternative particle filtering methods in two major ways. First, each iteration begins by resampling particles according to posterior predictive probability, leading to a more efficient set for propagation. Second, each …
This paper introduces an approach to estimation in possibly sparse data sets using shrinkage priors based upon the class of hypergeometric-beta distributions. These widely applicable priors turn out to be a four-parameter generalization of the beta family, and are pseudo-conjugate: they cannot themselves be expressed in closed form, but they do yield …
In this paper we develop proximal methods for statistical learning. Proximal point algorithms are useful in statistics and machine learning for obtaining optimization solutions for composite functions. Our approach exploits closed-form solutions of proximal operators and envelope representations based on the Moreau, Forward-Backward, Douglas-Rachford and …
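The best-known closed-form proximal operator of the kind the abstract refers to is soft-thresholding for the l1 norm; a minimal sketch, together with the corresponding Moreau envelope (function names are mine, not from the paper):

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1, in closed form: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def moreau_envelope_l1(v, t):
    """Moreau envelope of the l1 norm at v:
    min_x ||x||_1 + ||x - v||^2 / (2t), attained at x = prox_l1(v, t)."""
    x = prox_l1(v, t)
    return np.sum(np.abs(x)) + np.sum((x - v) ** 2) / (2.0 * t)
```

Iterating such a prox step against a smooth loss gradient gives the forward-backward (ISTA-style) schemes in the family of algorithms the abstract names.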
In this paper, we develop a simulation-based approach for two-stage stochastic programs with recourse. We construct an augmented probability model with stochastic shocks and decision variables. Simulating from the augmented probability model solves for the expected recourse function and the optimal first-stage decision. Markov chain Monte Carlo methods, …
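The two-stage structure can be made concrete with a newsvendor-style toy problem: a first-stage order quantity, then a recourse penalty for unmet demand. This sketch estimates the expected recourse by plain Monte Carlo and a grid search; it is not the paper's augmented-probability-model MCMC, and all costs and the demand distribution are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# First stage: order q at unit cost c. Second stage (recourse): pay p per
# unit of unmet demand. Demand shocks are simulated up front.
c, p = 1.0, 3.0
demand = rng.gamma(shape=4.0, scale=5.0, size=100_000)   # illustrative shocks

def expected_cost(q):
    """First-stage cost plus Monte Carlo estimate of the expected recourse."""
    recourse = p * np.maximum(demand - q, 0.0)
    return c * q + recourse.mean()

grid = np.linspace(0.0, 60.0, 241)
q_star = grid[np.argmin([expected_cost(q) for q in grid])]
```

In this simple case the optimum matches the classical critical-fractile solution P(D ≤ q*) = (p − c)/p; the paper's contribution is to fold the decision variable into a single probability model so that one simulation handles both stages.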