
- Trevor J. Hastie, Robert Tibshirani, Jerome H. Friedman
- Springer series in statistics
- 2009

In the words of the authors, the goal of this book was to "bring together many of the important new ideas in learning, and explain them in a statistical framework." The authors have been quite successful in achieving this objective and their work will be a welcome addition to the statistics and learning literatures. Statistics has always been an… (More)

Regression models play an important role in many applied settings, providing prediction and classification rules, and data analytic tools for understanding the interactive behaviour of different variables. Although attractively simple, the traditional linear model often fails in these situations: in real life effects are generally not linear. This article… (More)
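
The failure described here is easy to demonstrate. A minimal NumPy sketch (synthetic data, not from the article) compares a straight-line fit against a cubic basis expansion on a sinusoidal signal, where the linear model leaves most of the structure in the residuals:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)  # clearly nonlinear effect

# Linear fit: design matrix [1, x]
X_lin = np.column_stack([np.ones_like(x), x])
beta_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)
rss_lin = np.sum((y - X_lin @ beta_lin) ** 2)

# Cubic basis expansion [1, x, x^2, x^3] captures the curvature
X_cub = np.column_stack([x**p for p in range(4)])
beta_cub, *_ = np.linalg.lstsq(X_cub, y, rcond=None)
rss_cub = np.sum((y - X_cub @ beta_cub) ** 2)

print(rss_lin, rss_cub)  # the cubic fit leaves far less residual error
```

Basis expansion is only one of the flexible approaches this line of work develops; the point is simply that a straight line is the wrong functional form.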

Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers… (More)
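
The reweight-and-vote loop described here is AdaBoost. A from-scratch sketch with decision stumps, for illustration only (not the paper's own code, and deliberately unoptimized):

```python
import numpy as np

def fit_stump(X, y, w):
    """Best single-feature threshold classifier under sample weights w (y in {-1,+1})."""
    best = (np.inf, 0, 0.0, 1)  # (weighted error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] > t, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                     # start from uniform weights
    ensemble = []
    for _ in range(rounds):
        err, j, t, s = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # this stump's vote weight
        pred = s * np.where(X[:, j] > t, 1, -1)
        w *= np.exp(-alpha * y * pred)          # reweight: up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the fitted stumps."""
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```

Each round refits the weak learner on reweighted data, and the final classifier is the weighted vote, exactly the scheme the abstract summarizes.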

The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a… (More)
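
For concreteness, Forward Selection can be sketched in a few lines of NumPy: greedily add, one at a time, the covariate that most reduces the residual sum of squares. This is an illustrative sketch, not the article's algorithm of interest:

```python
import numpy as np

def forward_selection(X, y, k):
    """Greedy forward stepwise: repeatedly add the covariate that most reduces RSS."""
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

All Subsets and Backward Elimination differ only in the search strategy; the shared difficulty, as the abstract notes, is that selection and fitting use the same data.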

- Hui Zou, Trevor Hastie
- 2004

We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model… (More)
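
The grouping effect is easy to see on synthetic data with two identical predictors. The sketch below uses scikit-learn's `ElasticNet` and `Lasso` (not the authors' own software); the data and penalty settings are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(0)
z = rng.normal(size=100)
# Two identical predictors plus one pure-noise predictor (synthetic data)
X = np.column_stack([z, z, rng.normal(size=100)])
y = 2 * z + 0.1 * rng.normal(size=100)

enet = ElasticNet(alpha=0.1, l1_ratio=0.5, tol=1e-10, max_iter=100_000).fit(X, y)
lasso = Lasso(alpha=0.1, tol=1e-10, max_iter=100_000).fit(X, y)
# Grouping effect: the elastic net splits weight evenly across the
# correlated pair, while the lasso tends to load on just one of them.
print(enet.coef_, lasso.coef_)
```

The ridge part of the penalty is what forces the correlated coefficients together; the lasso part keeps the noise predictor out of the model.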

- Jerome Friedman, Trevor Hastie, Rob Tibshirani
- Journal of statistical software
- 2010

We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ(1) (the lasso), ℓ(2) (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent,… (More)
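
The inner loop of these algorithms is a soft-thresholding update applied cyclically to one coefficient at a time. A bare-bones, lasso-only NumPy sketch (the published software is the R package glmnet, which also covers the logistic, multinomial, ridge, and elastic-net cases):

```python
import numpy as np

def soft_threshold(z, g):
    """S(z, g) = sign(z) * max(|z| - g, 0)."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclical coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                          # residual, updated incrementally
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]            # remove coordinate j's contribution
            rho = X[:, j] @ r / n          # partial-residual correlation
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
            r -= X[:, j] * b[j]            # restore with the new value
    return b
```

With `lam = 0` this reduces to ordinary least squares; large `lam` drives every coefficient exactly to zero, which is where the speed of the full algorithm comes from (most updates are trivial).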

- Jerome Friedman, Trevor Hastie, Robert Tibshirani
- Biostatistics
- 2008

We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster… (More)
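
scikit-learn ships an estimator under the same name; the small synthetic demo below (a 3-variable chain graph and an arbitrarily chosen penalty, both my assumptions) shows the penalized precision matrix recovering a conditional-independence zero:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# True model: a 3-variable chain, so variables 0 and 2 are conditionally
# independent given variable 1 (a zero in the precision matrix)
true_precision = np.array([[2.0, 0.6, 0.0],
                           [0.6, 2.0, 0.6],
                           [0.0, 0.6, 2.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(true_precision), size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))  # (0, 2) entry shrunk toward zero
```

The zero pattern of the estimated precision matrix is the estimated graph: absent edges correspond to conditional independence.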

Principal component analysis (PCA) is widely used in data processing and dimensionality reduction. However, PCA suffers from the fact that each principal component is a linear combination of all the original variables, thus it is often difficult to interpret the results. We introduce a new method called sparse principal component analysis (SPCA) using the… (More)
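
scikit-learn's `SparsePCA` implements a closely related sparse-loading formulation (not necessarily the exact elastic-net criterion of this paper); on synthetic data it shows the interpretability gain the abstract describes:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[:, :3] += 3 * rng.normal(size=(200, 1))   # one shared factor drives vars 0-2

dense = PCA(n_components=1).fit(X)
sparse = SparsePCA(n_components=1, alpha=2.0, random_state=0).fit(X)
# Dense PCA loadings touch all 10 variables; the sparse loadings
# zero out the variables that don't carry the shared factor.
print(dense.components_.round(2))
print(sparse.components_.round(2))
```

The sparse component is readable as "a contrast of the first few variables", whereas the ordinary component mixes all ten.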

- J Elith, J R Leathwick, T Hastie
- The Journal of animal ecology
- 2008

1. Ecologists use statistical models for both explanation and prediction, and need techniques that are flexible enough to express typical features of their data, such as nonlinearities and interactions. 2. This study provides a working guide to boosted regression trees (BRT), an ensemble method for fitting statistical models that differs fundamentally from… (More)
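
Boosted regression trees are stagewise-fitted tree ensembles; in Python, scikit-learn's `GradientBoostingRegressor` is a reasonable analogue (the guide itself works in R). The response surface and tuning values below are arbitrary illustrations:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))
# A nonlinear response with an interaction -- the kind of structure
# ecological data typically shows
y = np.sin(X[:, 0]) + X[:, 1] * (X[:, 2] > 0) + rng.normal(scale=0.1, size=500)

# Tree depth caps the interaction order; learning rate and number of
# trees trade off against each other, the key tuning decision in BRT
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0).fit(X, y)
print(brt.score(X, y))
```

Because each tree is fitted to the residuals of the current ensemble, no functional form for the nonlinearities or interactions has to be specified in advance.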

- Robert Tibshirani, Trevor Hastie, Balasubramanian Narasimhan, Gilbert Chu
- Proceedings of the National Academy of Sciences…
- 2002

We have devised an approach to cancer class prediction from gene expression profiling, based on an enhancement of the simple nearest prototype (centroid) classifier. We shrink the prototypes and hence obtain a classifier that is often more accurate than competing methods. Our method of "nearest shrunken centroids" identifies subsets of genes that best… (More)
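
scikit-learn exposes nearest shrunken centroids through `NearestCentroid(shrink_threshold=...)` (the authors' own implementation is in R); a toy expression-style example, with dimensions and effect sizes chosen only for illustration:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
# 80 samples, 50 "genes"; only the first 5 genes separate the two classes
X = rng.normal(size=(80, 50))
X[:40, :5] += 1.5
y = np.array([0] * 40 + [1] * 40)

plain = NearestCentroid().fit(X, y)
shrunken = NearestCentroid(shrink_threshold=2.0).fit(X, y)
# Shrinkage pulls the noise genes' class centroids onto the overall
# centroid, so only the informative genes still discriminate.
n_used = np.count_nonzero(np.abs(shrunken.centroids_[0] - shrunken.centroids_[1]) > 1e-12)
print(n_used, "of 50 genes survive shrinkage")
```

The surviving genes are exactly the gene subset the abstract says the method identifies, which is what makes the classifier interpretable as well as accurate.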