
We present a new class of methods for high-dimensional non-parametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than…

We give conditions that guarantee that the posterior probability of every Hellinger neighborhood of the true density tends to 1 almost surely. The conditions are (i) a smoothness condition on the prior and (ii) a requirement that the prior put positive mass in appropriate neighborhoods of the true density. The results are based on the idea of approximating…

- Han Liu, Fang Han, Ming Yuan, John D. Lafferty, Larry A. Wasserman
- ICML
- 2012

In this paper, we propose a semiparametric approach, named nonparanormal skeptic, for efficiently and robustly estimating high dimensional undirected graphical models. To achieve modeling flexibility, we consider Gaussian Copula graphical models (or the nonparanormal) as proposed by Liu et al. (2009). To achieve estimation robustness, we exploit…

- Han Liu, John D. Lafferty, Larry A. Wasserman
- Journal of Machine Learning Research
- 2009

Recent methods for estimating sparse undirected graphs for real-valued data in high dimensional problems rely heavily on the assumption of normality. We show how to use a semiparametric Gaussian copula, or "nonparanormal", for high dimensional inference. Just as additive models extend linear models by replacing linear functions with a set of…
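The transformation underlying the nonparanormal is straightforward to sketch: map each covariate through its empirical CDF and then the standard normal quantile function, so the margins become approximately Gaussian, and hand the transformed data to any sparse Gaussian graph estimator. The sketch below is a minimal numpy illustration under simplifying assumptions (a Winsorized truncation level of the form used in the nonparanormal literature, and a plain correlation matrix as the output rather than the paper's full estimator):

```python
import numpy as np
from statistics import NormalDist

def nonparanormal_transform(X, truncation=None):
    """Map each column of X to Gaussian scores via its empirical CDF.

    Illustrative sketch: ranks -> truncated (Winsorized) empirical CDF
    -> standard normal quantiles. The default truncation level is one
    common choice, not necessarily the paper's exact tuning.
    """
    n, d = X.shape
    if truncation is None:
        truncation = 1.0 / (4 * n**0.25 * np.sqrt(np.pi * np.log(n)))
    inv_cdf = NormalDist().inv_cdf
    Z = np.empty_like(X, dtype=float)
    for j in range(d):
        u = (np.argsort(np.argsort(X[:, j])) + 1) / (n + 1)  # empirical CDF values
        u = np.clip(u, truncation, 1 - truncation)           # Winsorize the tails
        Z[:, j] = [inv_cdf(p) for p in u]
    return Z

rng = np.random.default_rng(0)
X = np.exp(rng.standard_normal((200, 3)))  # heavily non-Gaussian (log-normal) margins
Z = nonparanormal_transform(X)
# Z now has approximately Gaussian margins; its correlation matrix can be
# fed to an l1-penalized graph estimator such as the graphical lasso.
S = np.corrcoef(Z, rowvar=False)
```

Because the transform only uses ranks, it is invariant to monotone distortions of each covariate, which is what buys the extra flexibility over a plain Gaussian model.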

- Shuheng Zhou, John D. Lafferty, Larry A. Wasserman
- Machine Learning
- 2008

Undirected graphs are often used to describe high dimensional distributions. Under sparsity conditions, the graph can be estimated using ℓ1 penalization methods. However, current methods assume that the data are independent and identically distributed. If the distribution, and hence the graph, evolves over time then the data are no longer identically…
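When the distribution drifts smoothly in time, a natural first step is to localize the sample covariance with kernel weights centered at the time of interest, and only then apply an ℓ1-penalized graph estimator. The function below is a hedged sketch of that weighting step only (the kernel choice, bandwidth, and downstream estimator are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def kernel_weighted_cov(X, times, t0, bandwidth):
    """Kernel-weighted sample covariance around time t0.

    Observations near t0 get large Gaussian-kernel weights, so the
    resulting matrix reflects the local (time-varying) distribution.
    The output would typically be passed to an l1-penalized estimator
    such as the graphical lasso to recover the local graph.
    """
    w = np.exp(-0.5 * ((times - t0) / bandwidth) ** 2)  # Gaussian kernel weights
    w /= w.sum()                                        # normalize to sum to 1
    mu = w @ X                                          # weighted mean
    Xc = X - mu
    return (Xc * w[:, None]).T @ Xc                     # weighted covariance

rng = np.random.default_rng(1)
n, d = 300, 4
times = np.linspace(0.0, 1.0, n)
X = rng.standard_normal((n, d))
S_t = kernel_weighted_cov(X, times, t0=0.5, bandwidth=0.1)
```

Since the weights are nonnegative, the weighted covariance stays positive semidefinite, so any estimator that expects a valid covariance input can consume it directly.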

We present a new class of models for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive a method for fitting the models that is effective even when the number of covariates is larger than the sample size. A…
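The fitting idea combines backfitting (cycle through covariates, smoothing partial residuals) with a soft-threshold step that can zero out an entire component function, which is what produces sparsity. The sketch below is a deliberately simplified version under stated assumptions: one fixed-bandwidth Nadaraya-Watson smoother, a fixed penalty, and no tuning, so it illustrates the structure of the loop rather than the paper's algorithm:

```python
import numpy as np

def nw_smooth(x, r, bandwidth=0.2):
    """Nadaraya-Watson kernel smoother of residuals r against covariate x."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    W /= W.sum(axis=1, keepdims=True)
    return W @ r

def spam_backfit(X, y, lam, n_iter=20):
    """Sparse backfitting sketch: smooth each partial residual, then
    soft-threshold the whole fitted component by its empirical L2 norm."""
    n, d = X.shape
    f = np.zeros((n, d))                          # fitted component functions
    for _ in range(n_iter):
        for j in range(d):
            resid = y - f.sum(axis=1) + f[:, j]   # partial residual for j
            p = nw_smooth(X[:, j], resid)
            norm = np.sqrt(np.mean(p ** 2))
            scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
            f[:, j] = scale * p                   # may be exactly zero
            f[:, j] -= f[:, j].mean()             # center each component
    return f

rng = np.random.default_rng(2)
n, d = 200, 5
X = rng.uniform(-1, 1, (n, d))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)
y = y - y.mean()
f = spam_backfit(X, y, lam=0.1)
# Components for irrelevant covariates tend to be shrunk toward zero.
```

The group-level soft threshold is the additive-model analogue of the lasso's coordinate-wise shrinkage: it penalizes the norm of each whole function, not individual coefficients.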

We present a method for multiple hypothesis testing that maintains control of the False Discovery Rate while incorporating prior information about the hypotheses. The prior information takes the form of p-value weights. If the assignment of weights is positively associated with the null hypotheses being false, the procedure improves power, except in cases…
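The weighting idea can be sketched concretely: divide each p-value by its weight (with weights normalized to average 1) and run the ordinary Benjamini-Hochberg step-up procedure on the ratios. A hypothesis believed a priori to be non-null gets a large weight, which effectively lowers its p-value. This is a minimal numpy sketch of that weighted BH recipe, not a reproduction of the paper's full analysis:

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg step-up procedure (sketch).

    Weights are normalized to mean 1, each p-value is divided by its
    weight, and plain BH runs on the ratios. Returns a rejection mask.
    """
    pvals = np.asarray(pvals, float)
    w = np.asarray(weights, float)
    w = w / w.mean()                       # normalize weights to mean 1
    q = pvals / w                          # weighted p-values
    m = len(q)
    order = np.argsort(q)
    thresh = alpha * np.arange(1, m + 1) / m
    below = q[order] <= thresh
    reject = np.zeros(m, bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))
        reject[order[: k + 1]] = True      # step-up: reject the k+1 smallest
    return reject

p = np.array([0.001, 0.009, 0.04, 0.2, 0.6])
w = np.array([2.0, 2.0, 0.5, 0.5, 1.0])    # prior belief about which nulls are false
mask = weighted_bh(p, w, alpha=0.05)
# With these (hypothetical) weights, the first two hypotheses are rejected.
```

Because the normalized weights average to 1, the procedure reduces exactly to standard BH when all weights are equal, which is why FDR control is preserved.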

One goal of statistical privacy research is to construct a data release mechanism that protects individual privacy while preserving information content. An example is a random mechanism that takes an input database X and outputs a random database Z according to a distribution Q_n(·|X). Differential privacy is a particular privacy requirement developed by…
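A concrete instance of such a random release mechanism is the standard Laplace mechanism: add Laplace noise scaled to the query's sensitivity divided by the privacy budget ε. This is a generic textbook sketch of ε-differential privacy, not the specific mechanism Q_n studied in the abstract above:

```python
import numpy as np

def laplace_mechanism(query_value, sensitivity, epsilon, rng):
    """Release query(X) plus Laplace noise with scale sensitivity/epsilon,
    the standard epsilon-differentially private additive mechanism."""
    scale = sensitivity / epsilon
    return query_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, 1000)      # toy database: 1000 values in [0, 1]
true_mean = X.mean()
# Changing one record moves the mean of n values in [0, 1] by at most 1/n,
# so the sensitivity of the mean query is 1/len(X).
private_mean = laplace_mechanism(true_mean, sensitivity=1 / len(X),
                                 epsilon=0.5, rng=rng)
```

The tension the abstract describes is visible here: a smaller ε gives stronger privacy but a larger noise scale, degrading the information content of the release.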

This paper reviews the Bayesian approach to model selection and model averaging. In this review, I emphasize objective Bayesian methods based on noninformative priors. I will also discuss implementation details, approximations, and relationships to other methods. © 2000 Academic Press

- Larry Wasserman, Kathryn Roeder
- Annals of Statistics
- 2009

This paper explores the following question: what kind of statistical guarantees can be given when doing variable selection in high dimensional models? In particular, we look at the error rates and power of some multi-stage regression methods. In the first stage we fit a set of candidate models. In the second stage we select one model by cross-validation. In…
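The multi-stage idea, splitting the data so that selection and inference use different observations, can be sketched in a few lines. This is a simplified illustration in the spirit of the abstract, with two deliberate substitutions: screening uses marginal correlations instead of fitted penalized regressions, and cleaning uses Bonferroni-corrected OLS z-tests on the held-out half; both are labeled assumptions, not the paper's procedure:

```python
import numpy as np
from statistics import NormalDist

def screen_and_clean(X, y, k=5, alpha=0.05, rng=None):
    """Two-stage variable selection sketch: screen candidates on one half
    of the data, then test the survivors on the other half, so the
    second-stage p-values are not contaminated by selection."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(y)
    idx = rng.permutation(n)
    a, b = idx[: n // 2], idx[n // 2:]
    # Stage 1 (screen): keep the k covariates most correlated with y.
    corr = np.abs([np.corrcoef(X[a, j], y[a])[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(corr)[-k:]
    # Stage 2 (clean): OLS on the held-out half, Bonferroni-corrected z-tests.
    Xb = np.column_stack([np.ones(len(b)), X[b][:, keep]])
    beta, *_ = np.linalg.lstsq(Xb, y[b], rcond=None)
    resid = y[b] - Xb @ beta
    df = len(b) - Xb.shape[1]
    sigma2 = resid @ resid / df
    cov = sigma2 * np.linalg.inv(Xb.T @ Xb)
    z = beta[1:] / np.sqrt(np.diag(cov)[1:])
    pvals = 2 * np.array([1 - NormalDist().cdf(abs(t)) for t in z])
    return keep[pvals < alpha / k]           # Bonferroni over the k survivors

rng = np.random.default_rng(4)
n, d = 400, 50
X = rng.standard_normal((n, d))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(n)
selected = screen_and_clean(X, y, rng=rng)
```

The sample split is the key design choice: error rates for the second-stage tests can be controlled precisely because those tests never saw the data used to pick the candidates.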