
This paper proposes a new approach to sparse-signal detection called the horseshoe estimator. We show that the horseshoe is a close cousin of the lasso in that it arises from the same class of multivariate scale mixtures of normals, but that it is almost universally superior to the double-exponential prior at handling sparsity. A theoretical framework is… (More)
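The scale-mixture representation mentioned in the abstract can be sketched directly: under the horseshoe prior, each coefficient is conditionally normal with its own half-Cauchy local scale. A minimal simulation (not code from the paper) draws from this prior:

```python
import numpy as np

def sample_horseshoe_prior(n, tau=1.0, seed=None):
    """Draw n samples from the horseshoe prior via its
    scale-mixture-of-normals representation:
        beta_i | lambda_i ~ N(0, (lambda_i * tau)^2),
        lambda_i ~ Half-Cauchy(0, 1).
    """
    rng = np.random.default_rng(seed)
    lam = np.abs(rng.standard_cauchy(n))  # half-Cauchy local scales
    return rng.normal(0.0, lam * tau)

draws = sample_horseshoe_prior(100_000, seed=0)
# Most mass concentrates near zero (a spike), while the Cauchy-tailed
# local scales occasionally produce very large draws (heavy tails) --
# the combination that makes the prior suited to sparse signals.
print(np.mean(np.abs(draws) < 0.1), np.max(np.abs(draws)))
```

The spike-near-zero plus heavy-tail shape is what distinguishes the horseshoe from the double-exponential (lasso) prior, whose exponential tails shrink large signals more aggressively.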

This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the… (More)

We describe a serial algorithm called feature-inclusion stochastic search, or FINCS, that uses online estimates of edge-inclusion probabilities to inform the process of Bayesian model determination in Gaussian graphical models. FINCS is compared to Metropolis-based search methods and found to be superior along a variety of dimensions, leading to more… (More)
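The online edge-inclusion bookkeeping that FINCS relies on can be illustrated with a minimal sketch. This is not the FINCS algorithm itself, just the running-average estimate of edge-inclusion probabilities over a stream of visited graphs (the `EdgeInclusionTracker` class and the random graph stream are illustrative stand-ins):

```python
import numpy as np

class EdgeInclusionTracker:
    """Running estimate of edge-inclusion probabilities from a stream
    of adjacency matrices (e.g. graphs visited by a stochastic search)."""

    def __init__(self, p):
        self.counts = np.zeros((p, p))
        self.n = 0

    def update(self, adjacency):
        self.counts += adjacency
        self.n += 1

    def inclusion_probs(self):
        return self.counts / max(self.n, 1)

rng = np.random.default_rng(0)
tracker = EdgeInclusionTracker(4)
for _ in range(1000):
    # Stand-in for graphs proposed/accepted by a search procedure:
    # each upper-triangular edge present independently with prob 0.3.
    A = (rng.random((4, 4)) < 0.3).astype(float)
    tracker.update(np.triu(A, 1))

probs = tracker.inclusion_probs()  # estimates near 0.3 off-diagonal
```

In the actual algorithm these running estimates feed back into the proposal mechanism, steering the search toward high-probability edges.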

This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. The first goal of the paper is to clarify when, and how, multiplicity correction is automatic in Bayesian analysis, and contrast this multiplicity correction with the Bayesian Ockham's-razor effect. Secondly, we contrast empirical-Bayes… (More)
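One concrete instance of the automatic correction this abstract refers to: placing a uniform prior on the common inclusion probability induces a beta-binomial prior over model size, under which the prior odds of a specific model with k+1 variables versus one with k variables equal (k+1)/(p−k), vanishing as the number of candidate variables p grows. A short sketch (illustrative, not code from the paper):

```python
from math import comb

def model_prior(k, p):
    """Prior probability of one specific model containing k of p
    candidate variables, under a uniform prior on the shared
    inclusion probability q:
        integral of q^k (1-q)^(p-k) dq = 1 / ((p+1) * C(p, k)).
    """
    return 1.0 / ((p + 1) * comb(p, k))

# Prior odds of a specific (k+1)-variable model vs a specific
# k-variable model reduce to (k+1)/(p-k): the penalty for adding a
# variable grows automatically with p -- the multiplicity correction.
for p in (10, 100, 1000):
    odds = model_prior(3, p) / model_prior(2, p)  # equals 3/(p-2)
    print(p, odds)
```

By contrast, assigning each variable a fixed prior inclusion probability yields odds that do not depend on p, so no multiplicity correction occurs; this is the kind of contrast the paper formalizes.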

- Nicholas G Polson, James G Scott, Jesse Windle
- 2014

We propose the Bayesian bridge estimator for regularized regression and classification. Two key mixture representations for the Bayesian bridge model are developed: a scale mixture of normal distributions with respect to an α-stable random variable; a mixture of Bartlett–Fejér kernels (or triangle densities) with respect to a two-component mixture of gamma… (More)

- James G Scott
- 2009

This paper describes a framework for flexible multiple hypothesis testing of autoregressive time series. The modeling approach is Bayesian, though a blend of frequentist and Bayesian reasoning is used to evaluate procedures. Nonparametric characterizations of both the null and alternative hypotheses will be shown to be the key robustification step… (More)

- Doyle E Patton, Kevin Duff, Mike R Schoenberg, James Mold, James G Scott, Russell L Adams
- The Clinical Neuropsychologist
- 2003

Recent research suggests that cognitively normal African Americans are more likely to be misdiagnosed as impaired compared to Caucasians due to lower neuropsychological test scores (e.g., Manly et al., 1998). Given this, the present study sought to determine whether such racial discrepancies exist on the Repeatable Battery for the Assessment of… (More)

This paper presents a default model-selection procedure for Gaussian graphical models that involves two new developments. First, we develop an objective version of the hyper-inverse Wishart prior for restricted covariance matrices, called the HIW g-prior, and show how it corresponds to the implied fractional prior for covariance selection using fractional… (More)

- P Richard Hahn, Carlos M Carvalho, James G Scott
- 2010

This paper adapts sparse factor models for exploring covariation in multivariate binary data, with an application to measuring latent factors in U.S. Congressional roll-call voting patterns. We focus on the advantages of using formal probability models for inference in this context, drawing parallels with the seminal findings of Poole and Rosenthal (1991).… (More)

Characterizing the information carried by neural populations in the brain requires accurate statistical models of neural spike responses. The negative-binomial distribution provides a convenient model for over-dispersed spike counts, that is, responses with greater-than-Poisson variability. Here we describe a powerful data-augmentation framework for fully… (More)
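The over-dispersion property the abstract describes is easy to see numerically: for a negative binomial with r successes and success probability q, the mean is r(1−q)/q while the variance is r(1−q)/q², strictly larger whenever q < 1. A quick check (illustrative parameter values, not from the paper):

```python
import numpy as np

# The negative binomial permits variance > mean, unlike the Poisson,
# making it suitable for over-dispersed spike counts.
# Parameterization: count = failures before r successes, success prob q.
rng = np.random.default_rng(1)
r, q = 5, 0.25
counts = rng.negative_binomial(r, q, size=200_000)

mean_theory = r * (1 - q) / q      # 15.0
var_theory = r * (1 - q) / q**2    # 60.0
print(counts.mean(), counts.var())  # empirical variance well above mean
```

A Poisson model with the same mean would force the variance to equal 15; here the extra dispersion (variance/mean = 1/q = 4) is exactly what the negative-binomial spike-count model accommodates.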