High-dimensional graphs and variable selection with the Lasso
It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs, since estimating each node's neighborhood is equivalent to variable selection in a Gaussian linear model.
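The neighborhood-selection idea can be sketched in a few lines: regress each variable on all the others with the Lasso and connect it to the variables that receive nonzero coefficients. The sketch below uses a hand-rolled coordinate-descent Lasso and an "OR" rule for combining the two directions of an edge; the solver, penalty level, and edge rule are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent, minimizing
    (1/2n)*||y - X b||^2 + lam * ||b||_1 (no intercept; center the data first)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed from the fit
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            # soft-thresholding update
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def neighborhood_graph(X, lam):
    """Edge (j, k) if either node's Lasso neighborhood regression
    selects the other variable ('OR' rule)."""
    p = X.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        beta = lasso_cd(X[:, others], X[:, j], lam)
        for idx, k in enumerate(others):
            if abs(beta[idx]) > 1e-8:
                adj[j, k] = adj[k, j] = True
    return adj
```

On a Gaussian chain X1 → X2 → X3, the recovered adjacency should link 1–2 and 2–3 but not 1–3, matching the conditional-independence structure.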
On asymptotically optimal confidence regions and tests for high-dimensional models
A general method is proposed for constructing confidence intervals and statistical tests for single or low-dimensional components of a large parameter vector in a high-dimensional model, and the corresponding theory is developed, including a careful analysis for Gaussian, sub-Gaussian and bounded correlated designs.
Boosting algorithms: regularization, prediction and model fitting
We present a statistical perspective on boosting. Special emphasis is given to estimating potentially complex parametric or nonparametric models, including generalized linear and additive models as…
Causal inference by using invariant prediction: identification and confidence intervals
This work proposes to exploit invariance of a prediction under a causal model for causal inference: given different experimental settings (e.g. various interventions), one collects all models whose predictive accuracy is invariant across settings and interventions; the method yields valid confidence intervals for the causal relationships in quite general scenarios.
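A crude version of the invariance search can be sketched as follows: for each candidate set S of covariates, fit a pooled regression of Y on X_S and accept S if the residuals look the same in every environment; the plausible causal parents are then the intersection of all accepted sets. The sketch below judges invariance only by a z-statistic on per-environment residual means with an arbitrary threshold; the paper itself uses proper two-sample tests with multiplicity control, so this is an assumption-laden toy, not the published method.

```python
import numpy as np
from itertools import combinations

def icp_accepted_sets(X, y, env, z_thresh=3.0):
    """Toy invariant-prediction search: accept a covariate set S if the
    pooled OLS residuals of y ~ X_S have roughly zero mean in every
    environment (mean-only z-test; z_thresh is an illustrative cutoff)."""
    n, p = X.shape
    accepted = []
    for r in range(p + 1):
        for S in combinations(range(p), r):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in S])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            resid = y - Xs @ beta
            ok = True
            for e in np.unique(env):
                re = resid[env == e]
                z = abs(re.mean()) / (re.std(ddof=1) / np.sqrt(len(re)))
                if z > z_thresh:
                    ok = False
                    break
            if ok:
                accepted.append(set(S))
    return accepted
```

With an intervention that shifts X1 in one environment, Y = 2·X1 + noise, and X2 a child of Y, the empty set and {X2} fail the invariance check while every set containing the true parent X1 passes, so the intersection recovers {X1}.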
On the conditions used to prove oracle results for the Lasso
Oracle inequalities and variable selection properties for the Lasso in linear models have been established under a variety of different assumptions on the design matrix. We show in this paper how the…
Boosting for high-dimensional linear models
- Peter Bühlmann
- Computer Science
- 30 June 2006
We prove that boosting with the squared error loss, L2Boosting, is consistent for very high-dimensional linear models, where the number of predictor variables is allowed to grow essentially as fast…
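Componentwise L2Boosting is simple enough to sketch directly: at each step, fit every predictor to the current residuals by univariate least squares, keep the one that reduces the residual sum of squares most, and move its coefficient a small step (shrinkage factor ν) toward that fit. The step count and ν below are illustrative defaults, not values from the paper.

```python
import numpy as np

def l2_boost(X, y, n_steps=500, nu=0.1):
    """Componentwise L2Boosting sketch: greedy stagewise least squares
    with shrinkage nu applied to each selected coefficient update."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.copy()
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_steps):
        # univariate least-squares coefficient of each predictor on the residuals
        coefs = X.T @ resid / col_sq
        # pick the predictor giving the largest RSS reduction (= coef^2 * ||x_j||^2)
        j = int(np.argmax(coefs ** 2 * col_sq))
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return beta
```

On sparse data with p = 50 predictors and 3 true signals, the boosted coefficients concentrate on the true support while the remaining coordinates stay near zero, which is the behavior the consistency result formalizes.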
Estimating high-dimensional intervention effects from observational data
This paper proposes to use summary measures of the set of possible causal effects to determine variable importance and uses the minimum absolute value of this set, since that is a lower bound on the size of the causal effect.
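The summary measure itself is a one-liner: since the data identify only a set of possible causal effects (one per DAG consistent with the observational distribution), the smallest absolute value in that set is a guaranteed lower bound on the effect size and serves as a conservative importance score. A minimal helper, with made-up example numbers:

```python
def variable_importance(possible_effects):
    """Minimum absolute value over the set of possible causal effects:
    a lower bound on the true effect size, hence a conservative
    variable-importance score."""
    return min(abs(e) for e in possible_effects)

# e.g. if three Markov-equivalent DAGs give effects 0.9, 1.4 and -1.1
# (hypothetical values), the guaranteed effect size is 0.9
```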
High-dimensional additive modeling
A computationally efficient algorithm with provable numerical convergence properties is presented for optimizing a likelihood penalized by a new sparsity-smoothness penalty for high-dimensional generalized additive models.
Statistical significance in high-dimensional linear models
- Peter Bühlmann
- Computer Science, Mathematics
- 7 February 2012
This work proposes a method for constructing p-values for general hypotheses in a high-dimensional linear model, based on Ridge estimation with an additional correction term that accounts for a substantial projection bias in high dimensions.
Identifiability of Gaussian structural equation models with equal error variances
This work proves full identifiability in the case where all noise variables have the same variance: the directed acyclic graph can be recovered from the joint Gaussian distribution.