
We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. Our simulation studies…
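The exact-zero behaviour the abstract describes comes from soft-thresholding. A minimal sketch of the lasso idea via cyclic coordinate descent follows; this is our own illustration (all names hypothetical), not the paper's original quadratic-programming algorithm.

```python
def soft_threshold(z, t):
    """Shrink z toward zero by t; returns exactly 0.0 inside [-t, t]."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize 0.5*||y - X b||^2 + lam*sum(|b_j|) by cyclic coordinate
    descent over the coefficients. Pure-Python lists, small data only."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual leaving coordinate j out
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            denom = sum(X[i][j] ** 2 for i in range(n))
            b[j] = soft_threshold(rho, lam) / denom
    return b
```

With a large enough penalty `lam`, the coefficient on a weakly informative feature comes out exactly zero, which is the interpretability property claimed above.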

- Trevor J. Hastie, Robert Tibshirani, Jerome H. Friedman
- Springer series in statistics
- 2009

In the words of the authors, the goal of this book was to "bring together many of the important new ideas in learning, and explain them in a statistical framework." The authors have been quite successful in achieving this objective and their work will be a welcome addition to the statistics and learning literatures. Statistics has always been an…

- T. Hastie, R. Tibshirani
- Statistical methods in medical research
- 1995

This article reviews flexible statistical methods that are useful for characterizing the effect of potential prognostic factors on disease endpoints. Applications to survival models and binary outcome models are illustrated.

- John D. Storey, Robert Tibshirani
- Proceedings of the National Academy of Sciences…
- 2003

With the increase in genomewide experiments and the sequencing of multiple genomes, the analysis of large data sets has become commonplace in biology. It is often the case that thousands of features in a genomewide data set are tested against some null hypothesis, where a number of features are expected to be significant. Here we propose an approach to…
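The approach this abstract introduces is the q-value. A simplified sketch of the idea, assuming a fixed tuning parameter `lam` for the null-proportion estimate (the paper smooths this estimate; our version does not, and the function name is ours):

```python
def q_values(pvals, lam=0.5):
    """Estimate pi0 (proportion of true nulls) from p-values above lam,
    then convert each p-value to the minimum FDR at which it would be
    called significant (its q-value)."""
    m = len(pvals)
    # p-values above lam are mostly from true nulls, which are uniform
    pi0 = min(1.0, sum(p > lam for p in pvals) / (m * (1.0 - lam)))
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    # walk from the largest p-value down, enforcing monotone q-values
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        prev = min(prev, pi0 * pvals[i] * m / (rank + 1))
        q[i] = prev
    return q
```

Small p-values map to small q-values, so thresholding the q-values at, say, 0.05 controls the expected proportion of false positives among the features called significant.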

Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers…
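The reweight-and-vote loop described here is AdaBoost. A toy sketch with one-dimensional threshold stumps, our own simplification rather than the cited papers' presentation (labels in {-1, +1}):

```python
import math

def ada_boost(xs, ys, n_rounds=3):
    """AdaBoost with 1-D threshold stumps. Returns a weighted ensemble
    of (alpha, threshold, sign) triples."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # exhaustively pick the stump with lowest weighted error
        for t in sorted(set(xs)):
            for s in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if s * (1 if xi > t else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = max(err, 1e-12)                       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)      # stump's vote weight
        ensemble.append((alpha, t, s))
        # upweight misclassified points, then renormalize
        w = [wi * math.exp(-alpha * yi * s * (1 if xi > t else -1))
             for xi, yi, wi in zip(xs, ys, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote of the stumps."""
    score = sum(a * s * (1 if x > t else -1) for a, t, s in ensemble)
    return 1 if score >= 0 else -1
```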

- Jerome Friedman, Trevor Hastie, Robert Tibshirani
- Biostatistics
- 2008

We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster…
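The object the graphical lasso estimates is a sparse inverse covariance (precision) matrix, whose zeros encode conditional independence. The snippet below does not implement the algorithm; it only illustrates that correspondence on a hand-built chain-structured precision matrix.

```python
import numpy as np

# Chain structure 1 -- 2 -- 3: the (1,3) entry of the precision matrix
# is zero, meaning variables 1 and 3 are conditionally independent
# given variable 2, even though they are marginally correlated.
theta = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 2.0]])
sigma = np.linalg.inv(theta)  # the implied covariance is fully dense
```

The lasso penalty on the inverse covariance drives exactly this kind of zero pattern, so the estimated graph connects only conditionally dependent pairs.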

- T. Sørlie, C. M. Perou, +14 authors A. L. Børresen-Dale
- Proceedings of the National Academy of Sciences…
- 2001

The purpose of this study was to classify breast carcinomas based on variations in gene expression patterns derived from cDNA microarrays and to correlate tumor characteristics to clinical outcome. A total of 85 cDNA microarray experiments representing 78 cancers, three fibroadenomas, and four normal breast tissues were analyzed by hierarchical clustering…

- A. A. Alizadeh, M. B. Eisen, +28 authors L. M. Staudt
- Nature
- 2000

Diffuse large B-cell lymphoma (DLBCL), the most common subtype of non-Hodgkin's lymphoma, is clinically heterogeneous: 40% of patients respond well to current therapy and have prolonged survival, whereas the remainder succumb to the disease. We proposed that this variability in natural history reflects unrecognized molecular heterogeneity in the tumours…

- Robert Tibshirani, Trevor Hastie, Balasubramanian Narasimhan, Gilbert Chu
- Proceedings of the National Academy of Sciences…
- 2002

We have devised an approach to cancer class prediction from gene expression profiling, based on an enhancement of the simple nearest prototype (centroid) classifier. We shrink the prototypes and hence obtain a classifier that is often more accurate than competing methods. Our method of "nearest shrunken centroids" identifies subsets of genes that best…
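The shrinkage step can be sketched as soft-thresholding each class centroid's deviation from the overall centroid. This simplified sketch (our own naming, and omitting the paper's standardization by within-class standard deviation) shows how a noise gene drops out of the classifier entirely:

```python
def soft(z, t):
    """Soft-threshold: move z toward 0 by t, clipping at exactly 0."""
    return max(abs(z) - t, 0.0) * (1 if z > 0 else -1 if z < 0 else 0)

def shrunken_centroids(X_by_class, delta):
    """Shrink each class centroid toward the overall centroid by delta
    per feature. Features whose deviations all shrink to zero become
    identical across classes and no longer affect classification."""
    p = len(next(iter(X_by_class.values()))[0])
    all_rows = [row for rows in X_by_class.values() for row in rows]
    overall = [sum(r[j] for r in all_rows) / len(all_rows) for j in range(p)]
    shrunk = {}
    for c, rows in X_by_class.items():
        cent = [sum(r[j] for r in rows) / len(rows) for j in range(p)]
        shrunk[c] = [overall[j] + soft(cent[j] - overall[j], delta)
                     for j in range(p)]
    return overall, shrunk
```

A new sample is then assigned to the class whose shrunken centroid is nearest, and the surviving nonzero features are the gene subset the abstract refers to.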

Principal component analysis (PCA) is widely used in data processing and dimensionality reduction. However, PCA suffers from the fact that each principal component is a linear combination of all the original variables, thus it is often difficult to interpret the results. We introduce a new method called sparse principal component analysis (SPCA) using the…
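The interpretability gain comes from forcing most loadings to exactly zero. A rough sketch of that idea via a truncated power iteration, a simple heuristic of our own choosing rather than the SPCA algorithm of the paper (which uses an elastic-net regression formulation):

```python
def sparse_pc(X, lam, n_iter=100):
    """Leading sparse loading vector of X: alternate a power step on
    X^T X with soft-thresholding of the loadings, so small loadings
    become exactly zero. Pure-Python lists, small data only."""
    n, p = len(X), len(X[0])
    v = [1.0 / p ** 0.5] * p
    for _ in range(n_iter):
        # power step: w = X^T (X v)
        Xv = [sum(X[i][j] * v[j] for j in range(p)) for i in range(n)]
        w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(p)]
        # soft-threshold, then renormalize to unit length
        w = [max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1) for x in w]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return v
```

On data where one variable carries nearly all the variance, the returned loading vector puts an exact zero on the low-variance variable, unlike the dense loadings of ordinary PCA.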