
- Debdeep Pati, David B. Dunson, Surya T. Tokdar
- J. Multivariate Analysis
- 2013

A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a…

- Anirban Bhattacharya, Debdeep Pati, Natesh S. Pillai, David B. Dunson
- Journal of the American Statistical Association
- 2015

Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting…
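The two approaches this abstract contrasts can be illustrated with a minimal sketch (not the paper's proposed prior): the soft-thresholding operator that underlies L1-penalized coordinate updates, and draws from a two-component spike-and-slab mixture prior. Function names and parameter values here are hypothetical, chosen only for illustration.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty: argmin_b 0.5*(b - z)^2 + lam*|b|.
    This is the coordinate-wise update at the heart of lasso-type solvers."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def spike_and_slab_sample(p, w=0.1, slab_sd=2.0, rng=None):
    """Draw p coefficients from a two-component mixture prior:
    with probability 1 - w a coefficient is exactly zero (the spike),
    otherwise it comes from a N(0, slab_sd^2) slab."""
    rng = np.random.default_rng(rng)
    active = rng.random(p) < w
    return np.where(active, rng.normal(0.0, slab_sd, size=p), 0.0)

# L1 shrinkage kills small effects and shrinks large ones toward zero.
print(soft_threshold(np.array([-3.0, 0.5, 2.0]), lam=1.0))  # [-2.  0.  1.]
```

The spike-and-slab draw produces exact zeros, whereas L1 shrinkage only sets estimates (not the prior itself) to zero; continuous shrinkage priors of the kind studied in this line of work aim to mimic the spike-and-slab behavior without its combinatorial cost.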

Sparse Bayesian factor models are routinely implemented for parsimonious dependence modeling and dimensionality reduction in high-dimensional applications. We provide theoretical understanding of such Bayesian procedures in terms of posterior convergence rates in inferring high-dimensional covariance matrices where the dimension can be potentially larger…
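The parsimony of a factor model comes from its low-rank-plus-diagonal covariance structure. A minimal sketch of that structure (illustrative only, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 50, 3                        # dimension p, number of factors k << p
Lambda = rng.normal(size=(p, k))    # p x k loadings matrix (sparse in this literature)
sigma2 = 0.5                        # idiosyncratic noise variance

# Covariance implied by the factor model y = Lambda @ eta + noise,
# with eta ~ N(0, I_k) and noise ~ N(0, sigma2 * I_p):
Sigma = Lambda @ Lambda.T + sigma2 * np.eye(p)

# Sigma is p x p, but the model has only p*k + 1 free parameters
# instead of the p*(p+1)/2 of an unrestricted covariance matrix.
eigvals = np.linalg.eigvalsh(Sigma)
print(Sigma.shape)  # (50, 50)
```

Because `Lambda @ Lambda.T` is positive semidefinite, every eigenvalue of `Sigma` is at least `sigma2`, which keeps the implied covariance well conditioned even when p exceeds the sample size.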

- Anirban Bhattacharya, Debdeep Pati, David Dunson
- Annals of Statistics
- 2014

In nonparametric regression problems involving multiple predictors, there is typically interest in estimating an anisotropic multivariate regression surface in the important predictors while discarding the unimportant ones. Our focus is on defining a Bayesian procedure that leads to the minimax optimal rate of posterior contraction (up to a log factor)…

- Debdeep Pati, David Dunson
- 2011

Non-linear latent variable models have become increasingly popular in a variety of applications. However, there has been little study on theoretical properties of these models. In this article, we study rates of posterior contraction in univariate density estimation for a class of non-linear latent variable models where unobserved U(0, 1) latent variables…
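The model class described here can be sketched generatively: a uniform latent variable is pushed through a smooth transfer function and perturbed by Gaussian noise, which induces a flexible marginal density. This is a hypothetical simulation, not the paper's estimator.

```python
import numpy as np

def sample_latent_model(n, mu, sigma=0.1, rng=None):
    """Simulate from a nonlinear latent variable model:
    u_i ~ U(0, 1) latent, y_i = mu(u_i) + N(0, sigma^2) noise.
    Marginally, y has density f(y) = int_0^1 phi_sigma(y - mu(u)) du,
    so the smoothness of mu controls the shape of the density."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    return mu(u) + rng.normal(0.0, sigma, size=n)

# A smooth transfer function mu turns the uniform latent into a
# flexible continuous density for y.
y = sample_latent_model(10_000, mu=lambda u: np.sin(2 * np.pi * u), rng=1)
print(y.shape)  # (10000,)
```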

- Zhengwu Zhang, Debdeep Pati, Anuj Srivastava
- arXiv
- 2014

Unsupervised clustering of curves according to their shapes is an important problem with broad scientific applications. The existing model-based clustering techniques either rely on simple probability models (e.g., Gaussian) that are not generally valid for shape analysis or assume the number of clusters is known in advance. We develop an efficient Bayesian method to cluster…
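A standard Bayesian device for avoiding a fixed number of clusters is a Dirichlet-process mixture, whose weights can be generated by stick-breaking. The sketch below shows that construction in truncated form; it illustrates the general idea, not this paper's specific shape-clustering model.

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng=None):
    """Truncated stick-breaking construction of Dirichlet-process
    mixture weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j).
    The number of clusters with non-negligible weight is then inferred
    from the data rather than fixed in advance."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w

w = stick_breaking(alpha=1.0, n_atoms=50, rng=0)
print(w.sum() <= 1.0, (w >= 0).all())  # True True
```

Smaller values of `alpha` concentrate mass on a few large weights (few clusters); larger values spread mass over many atoms.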

We consider geostatistical models that allow the locations at which data are collected to be informative about the outcomes. Diggle et al. [2009] refer to this problem as preferential sampling, though we use the term informative sampling to highlight the relationship with the longitudinal data literature on informative observation times. In the longitudinal…

In nonparametric regression problems involving multiple predictors, there is typically interest in estimating the multivariate regression surface in the important predictors while discarding the unimportant ones. Our focus is on defining a Bayesian procedure that leads to the minimax optimal rate of posterior contraction (up to a log factor) adapting to the…

- Debdeep Pati, David B. Dunson
- Annals of the Institute of Statistical…
- 2014

We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially…
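The first ingredient of this model class, a Gaussian process prior on the mean regression function, can be sketched in a few lines. This is a generic squared-exponential GP posterior mean, with hypothetical hyperparameter values, not the paper's full model (which additionally mixes Gaussians for the residual densities).

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, length=0.2, noise=0.1):
    """Posterior mean of a zero-mean Gaussian process regression with a
    squared-exponential kernel: m(x*) = K(x*, X) (K(X, X) + noise^2 I)^{-1} y."""
    def k(a, b):
        # Squared-exponential (RBF) kernel on 1-D inputs.
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x)
m = gp_posterior_mean(x, y, x)
print(m.shape)  # (20,)
```

The noise term plays a double role: it models residual variance and regularizes the kernel matrix inversion; the full model in the abstract replaces the single Gaussian residual with predictor-dependent Gaussian mixtures.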
