The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a computationally attractive alternative to standard…
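As an illustrative sketch (not the paper's implementation), the neighborhood-selection idea can be demonstrated by lasso-regressing each variable on all the others and declaring an edge wherever a coefficient is selected. The coordinate-descent solver, the toy chain-graph data, and the tuning value `lam=0.15` below are all assumptions for the demo:

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Lasso via cyclic coordinate descent.
    Objective: (1/2n) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def neighborhood_selection(X, lam):
    """Lasso-regress each variable on all the others; declare an edge
    when either endpoint's regression selects the pair ('OR' rule)."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        beta = lasso_cd(X[:, others], X[:, j], lam)
        for b, k in zip(beta, others):
            if abs(b) > 1e-8:
                adj[j, k] = True
    return adj | adj.T

# toy data from a chain graph 0 - 1 - 2
# (variables 0 and 2 are conditionally independent given 1)
rng = np.random.default_rng(0)
z = rng.normal(size=(600, 3))
x0 = z[:, 0]
x1 = 0.8 * x0 + z[:, 1]
x2 = 0.8 * x1 + z[:, 2]
X = np.column_stack([x0, x1, x2])
X = (X - X.mean(axis=0)) / X.std(axis=0)
A = neighborhood_selection(X, lam=0.15)
```

On the toy data the estimated adjacency matrix recovers the two chain edges and leaves out the 0-2 pair, reflecting the structural zero in the inverse covariance.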

- Amela Prelic, Stefan Bleuler, +6 authors Eckart Zitzler
- Bioinformatics
- 2006

MOTIVATION
In recent years, there have been various efforts to overcome the limitations of standard clustering approaches for the analysis of gene expression data by grouping genes and samples simultaneously. The underlying concept, often referred to as biclustering, makes it possible to identify sets of genes sharing compatible expression patterns across…

- Jelle J. Goeman, Peter Bühlmann
- Bioinformatics
- 2007

MOTIVATION
Many statistical tests have been proposed in recent years for analyzing gene expression data in terms of gene sets, usually from Gene Ontology. These methods are based on widely different methodological assumptions. Some approaches test differential expression of each gene set against differential expression of the rest of the genes, whereas…

The group lasso is an extension of the lasso that performs variable selection on (predefined) groups of variables in linear regression models. The estimates have the attractive property of being invariant under groupwise orthogonal reparameterizations. We extend the group lasso to logistic regression models and present an efficient algorithm that is especially…
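A minimal sketch of the group-lasso idea, shown for the squared-error (linear regression) case rather than the logistic case the paper extends it to. The proximal-gradient solver and the toy data are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    """Group lasso for linear regression via proximal gradient descent.
    Objective: (1/2n) * ||y - X b||^2 + lam * sum_g ||b_g||_2,
    where `groups` is a list of index arrays partitioning 0..p-1."""
    n, p = X.shape
    beta = np.zeros(p)
    # step size 1/L, with L the Lipschitz constant of the gradient
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n)[-1]
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y) / n)
        for g in groups:
            norm = np.linalg.norm(z[g])
            # block soft-thresholding: the whole group hits zero together
            shrink = 0.0 if norm < step * lam else 1.0 - step * lam / norm
            z[g] = shrink * z[g]
        beta = z
    return beta

# toy data: group [0, 1] is relevant, group [2, 3] is pure noise
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)
beta = group_lasso(X, y, [np.array([0, 1]), np.array([2, 3])], lam=0.3)
```

The block soft-thresholding step is what makes selection groupwise: either an entire group survives (shrunk) or its coefficients are set exactly to zero together.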

- Markus Kalisch, Peter Bühlmann
- Journal of Machine Learning Research
- 2007

We consider the PC-algorithm ([13]) for estimating the skeleton of a very high-dimensional directed acyclic graph (DAG) with a corresponding Gaussian distribution. The PC-algorithm is computationally feasible for sparse problems with many nodes, i.e. variables, and it has the attractive property of automatically achieving high computational efficiency as a…
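The skeleton-pruning idea can be sketched as follows. This toy version tests only conditioning sets of size 0 and 1 (the full PC-algorithm iterates over ever-larger sets); the Fisher z-test for Gaussian data and the significance level `alpha` are standard choices assumed for the demo:

```python
import numpy as np
from math import atanh, erf, sqrt

def gauss_ci_test(corr, n, i, j, k=None):
    """p-value for X_i independent of X_j (given X_k) via Fisher's
    z-transform of the (first-order partial) correlation."""
    if k is None:
        r = corr[i, j]
    else:
        r = (corr[i, j] - corr[i, k] * corr[j, k]) / sqrt(
            (1.0 - corr[i, k] ** 2) * (1.0 - corr[j, k] ** 2))
    z = sqrt(n - (0 if k is None else 1) - 3) * abs(atanh(r))
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))  # two-sided

def pc_skeleton(X, alpha=0.01):
    """Skeleton estimation with conditioning sets of size 0 and 1 only."""
    n, p = X.shape
    corr = np.corrcoef(X, rowvar=False)
    adj = ~np.eye(p, dtype=bool)           # start from the complete graph
    for i in range(p):                     # size-0: marginal independence
        for j in range(i + 1, p):
            if gauss_ci_test(corr, n, i, j) > alpha:
                adj[i, j] = adj[j, i] = False
    for i in range(p):                     # size-1: condition on one neighbour
        for j in range(i + 1, p):
            for k in range(p):
                if not adj[i, j]:
                    break
                if k in (i, j) or not (adj[i, k] or adj[j, k]):
                    continue
                if gauss_ci_test(corr, n, i, j, k) > alpha:
                    adj[i, j] = adj[j, i] = False
    return adj

# toy DAG 0 -> 1 -> 2: the skeleton is 0 - 1 - 2, with no 0 - 2 edge
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 3))
x0 = z[:, 0]
x1 = 0.8 * x0 + z[:, 1]
x2 = 0.8 * x1 + z[:, 2]
A = pc_skeleton(np.column_stack([x0, x1, x2]))
```

Sparsity is what keeps this feasible: an edge is only tested against conditioning variables adjacent to its endpoints, so few neighbours means few tests.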

We present a statistical perspective on boosting. Special emphasis is given to estimating potentially complex parametric or nonparametric models, including generalized linear and additive models as well as regression models for survival analysis. Concepts of degrees of freedom and corresponding Akaike or Bayesian information criteria, particularly useful…
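One concrete instance of this statistical view of boosting is L2-boosting with componentwise linear base learners: each iteration refits the single best predictor to the current residuals and takes a small step toward that fit. The sketch below (step size `nu`, toy data, no information-criterion stopping rule) is an illustrative assumption, not the paper's software:

```python
import numpy as np

def l2_boost(X, y, n_steps=200, nu=0.1):
    """L2-boosting with componentwise linear base learners."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_steps):
        coef = X.T @ resid / col_sq             # one-variable LS fit per column
        j = int(np.argmax(coef ** 2 * col_sq))  # biggest drop in residual SS
        beta[j] += nu * coef[j]                 # small step toward that fit
        resid -= nu * coef[j] * X[:, j]
    return beta

# toy sparse model: only predictors 3 and 7 matter
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=300)
beta = l2_boost(X, y)
```

The small step size `nu` makes each iteration a weak update; the number of steps then acts as the regularization parameter, which is where degrees-of-freedom and AIC/BIC stopping criteria come in.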

- Marcel Dettling, Peter Bühlmann
- Bioinformatics
- 2003

MOTIVATION
Microarray experiments generate large datasets with expression values for thousands of genes but no more than a few dozen samples. Accurate supervised classification of tissue samples in such high-dimensional problems is difficult but often crucial for successful diagnosis and treatment. A promising way to meet this challenge is by using…

- Sara A. van de Geer, Peter Bühlmann, Ruben Dezeure
- 2014

We propose a general method for constructing confidence intervals and statistical tests for single or low-dimensional components of a large parameter vector in a high-dimensional model. It can easily be adjusted for multiplicity, taking dependence among tests into account. For linear models, our method is essentially the same as in Zhang and Zhang [J. R.…

- Anja Wille, Philip Zimmermann, +10 authors Peter Bühlmann
- Genome Biology
- 2004

We present a novel graphical Gaussian modeling approach for reverse engineering of genetic regulatory networks with many genes and few observations. When applying our approach to infer a gene network for isoprenoid biosynthesis in Arabidopsis thaliana, we detect modules of closely connected genes and candidate genes for possible cross-talk between the…
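A graphical Gaussian model reads conditional (in)dependence between genes off the inverse covariance matrix. The sketch below computes partial correlations directly, which requires more observations than genes (n > p), the opposite of the few-observations regime the paper's modified approach is designed for:

```python
import numpy as np

def partial_correlations(X):
    """Partial correlations from the inverse sample covariance Omega:
    rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj).
    A near-zero entry suggests genes i and j are conditionally
    independent given all the others. Needs n > p to invert."""
    omega = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(omega))
    pcorr = -omega / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# toy chain 0 - 1 - 2: genes 0 and 2 are linked only through gene 1
rng = np.random.default_rng(3)
z = rng.normal(size=(2000, 3))
x0 = z[:, 0]
x1 = 0.8 * x0 + z[:, 1]
x2 = 0.8 * x1 + z[:, 2]
P = partial_correlations(np.column_stack([x0, x1, x2]))
```

Here the 0-2 partial correlation is near zero even though genes 0 and 2 are marginally correlated, which is exactly the distinction that makes partial correlations useful for network reconstruction.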

We propose a new sparsity-smoothness penalty for high-dimensional generalized additive models. The combination of sparsity and smoothness is crucial for mathematical theory as well as performance for finite-sample data. We present a computationally efficient algorithm, with provable numerical convergence properties, for optimizing the penalized likelihood…