
- Hui Zou, Trevor Hastie
- 2004

We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model…
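A minimal sketch of the grouping effect, assuming scikit-learn is available (its `ElasticNet` uses coordinate descent, not the paper's original LARS-EN algorithm, and the data here are simulated for illustration):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(0)
n = 100
z = rng.normal(size=n)
# x1 and x2 are near-duplicates of the same underlying signal.
x1 = z + 0.01 * rng.normal(size=n)
x2 = z + 0.01 * rng.normal(size=n)
X = np.column_stack([x1, x2, rng.normal(size=(n, 3))])
y = 3 * z + 0.1 * rng.normal(size=n)

enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
# Grouping effect: the elastic net assigns x1 and x2 nearly equal
# coefficients; the lasso offers no such guarantee.
print("elastic net:", np.round(enet.coef_, 2))
print("lasso:      ", np.round(lasso.coef_, 2))
```

The L2 part of the penalty is what forces strongly correlated predictors toward a shared coefficient; the L1 part keeps the overall fit sparse.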

- Hui Zou
- 2006

The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain scenarios where the lasso is inconsistent for variable…
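The remedy developed in this paper, the adaptive lasso, reweights the L1 penalty by an initial consistent estimate so that large coefficients are penalized less. A minimal sketch assuming scikit-learn, solving the weighted problem by the standard column-rescaling reduction (not the paper's own code):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0])
y = X @ beta + rng.normal(size=n)

# Step 1: initial consistent estimate (OLS here).
beta_init = LinearRegression().fit(X, y).coef_
# Step 2: adaptive weights w_j = 1 / |beta_init_j|^gamma.
gamma = 1.0
w = 1.0 / np.abs(beta_init) ** gamma
# Step 3: the weighted lasso, via rescaled columns (column j / w_j).
Xw = X / w
fit = Lasso(alpha=0.1).fit(Xw, y)
beta_hat = fit.coef_ / w
print(np.round(beta_hat, 2))
```

Truly irrelevant predictors receive large weights and are shrunk to exactly zero, while strong predictors are almost unpenalized, which is the source of the oracle property.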

Principal component analysis (PCA) is widely used in data processing and dimensionality reduction. However, PCA suffers from the fact that each principal component is a linear combination of all the original variables, thus it is often difficult to interpret the results. We introduce a new method called sparse principal component analysis (SPCA) using the…
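A small illustration of the interpretability gain, hedged: scikit-learn's `SparsePCA` solves a related dictionary-learning formulation rather than the paper's exact elastic-net criterion, but it shows sparse loadings on simulated two-factor data:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(2)
n = 100
f1 = rng.normal(size=(n, 1))   # two latent factors
f2 = rng.normal(size=(n, 1))
# Eight observed variables: four driven by each factor.
X = np.hstack([f1 + 0.1 * rng.normal(size=(n, 4)),
               f2 + 0.1 * rng.normal(size=(n, 4))])
X -= X.mean(axis=0)

dense = PCA(n_components=2).fit(X)
sparse = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

# Ordinary PCA loadings mix every variable; sparse PCA zeroes most out.
print("zero loadings, PCA:       ", int((dense.components_ == 0).sum()))
print("zero loadings, sparse PCA:", int((sparse.components_ == 0).sum()))
```

Each sparse component loads on only one block of variables, so it can be read directly as "factor 1 variables" versus "factor 2 variables".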

- Ji Zhu, Hui Zou, Saharon Rosset, Trevor Hastie
- 2005

Boosting has been a very successful technique for solving the two-class classification problem. In going from two-class to multi-class classification, most algorithms have been restricted to reducing the multi-class classification problem to multiple two-class problems. In this paper, we develop a new algorithm that directly extends the AdaBoost algorithm…
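The published algorithm (SAMME) changes AdaBoost in essentially one place: the classifier weight gains a log(K−1) term, so a weak learner only has to beat random K-class guessing rather than 50% accuracy. A self-contained sketch using scikit-learn decision stumps as the weak learners:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
K = len(np.unique(y))          # number of classes
n = len(y)
w = np.full(n, 1.0 / n)        # sample weights
stumps, alphas = [], []

for _ in range(50):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)
    if err >= 1 - 1.0 / K:     # weak learner must beat random guessing
        break
    # SAMME weight: the extra log(K-1) term is the multi-class correction.
    alpha = np.log((1 - err) / max(err, 1e-10)) + np.log(K - 1)
    w *= np.exp(alpha * (pred != y))
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Final prediction: weighted vote over the stumps.
votes = np.zeros((n, K))
for stump, alpha in zip(stumps, alphas):
    votes[np.arange(n), stump.predict(X)] += alpha
acc = np.mean(votes.argmax(axis=1) == y)
print(f"training accuracy: {acc:.3f}")
```

With K = 2 the log(K−1) term vanishes and the loop reduces exactly to classical AdaBoost.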

Fan & Li (2001) propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging, because the objective function is nondifferentiable and nonconcave. In this…
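The fix developed in this line of work, the local linear approximation (LLA), linearizes the concave penalty at an initial estimate, turning each step into a convex weighted lasso. A hedged sketch for the SCAD penalty, again solving the weighted problem by column rescaling with scikit-learn (not the paper's implementation):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def scad_deriv(beta, lam, a=3.7):
    """Derivative of the SCAD penalty (Fan & Li, 2001)."""
    b = np.abs(beta)
    return np.where(b <= lam, lam,
                    np.maximum(a * lam - b, 0.0) / (a - 1))

rng = np.random.default_rng(3)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 0.0, 1.5, 0.0, 0.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

lam = 0.2
beta0 = LinearRegression().fit(X, y).coef_   # initial estimate
# LLA: tangent to the SCAD penalty at beta0 -> weights for an L1 penalty.
w = np.maximum(scad_deriv(beta0, lam) / lam, 1e-3)
Xw = X / w
beta_hat = Lasso(alpha=lam).fit(Xw, y).coef_ / w
print(np.round(beta_hat, 2))
```

Coefficients already large at the initial estimate sit on the flat part of SCAD, get near-zero weight, and are left almost unpenalized; small coefficients keep the full lasso shrinkage and are set to zero.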

We study the degrees of freedom of the Lasso in the framework of Stein’s unbiased risk estimation (SURE). We show that the number of non-zero coefficients is an unbiased estimate for the degrees of freedom of the Lasso—a conclusion that requires no special assumption on the predictors. Our analysis also provides mathematical support for a related conjecture…
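The estimate is trivial to compute: fit the lasso and count the non-zero coefficients. A sketch assuming scikit-learn, showing the estimated degrees of freedom shrinking as the penalty grows (the count can then be plugged into SURE-type criteria such as Cp for tuning):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 100, 8
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, 0, 0, 1.0, 0, 0, 0, 0.5]) + rng.normal(size=n)

df_hat = {}
for alpha in [0.05, 0.2, 0.8]:
    coef = Lasso(alpha=alpha).fit(X, y).coef_
    # Unbiased estimate of the lasso's degrees of freedom:
    # the size of the active set.
    df_hat[alpha] = int(np.count_nonzero(coef))
    print(f"alpha={alpha}: estimated df = {df_hat[alpha]}")
```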

- Zhanyun Tang, Yuxiao Sun, Sara E Harley, Hui Zou, Hongtao Yu
- Proceedings of the National Academy of Sciences…
- 2004

Sister chromatids in mammalian cells remain attached mostly at their centromeres at metaphase because of the loss of cohesion along chromosome arms in prophase. Here, we report that Bub1 retains centromeric cohesion in mitosis of human cells. Depletion of Bub1 or Shugoshin (Sgo1) in HeLa cells by RNA interference causes massive missegregation of sister…

- Hui Zou, Hao Helen Zhang
- Annals of statistics
- 2009

We consider the problem of model selection and estimation in situations where the number of parameters diverges with the sample size. When the dimension is high, an ideal method should have the oracle property (Fan and Li, 2001; Fan and Peng, 2004) which ensures the optimal large sample performance. Furthermore, the high-dimensionality often induces the…

MOTIVATION: The standard L2-norm support vector machine (SVM) is a widely used tool for microarray classification. Previous studies have demonstrated its superior performance in terms of classification accuracy. However, a major limitation of the SVM is that it cannot automatically select relevant genes for the classification. The L1-norm SVM is a…
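A hedged sketch with scikit-learn's `LinearSVC` (its L1-penalized variant uses the squared hinge loss, so it is an analogue of, not identical to, the formulation studied here), on simulated microarray-like data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Microarray-like setup: many features, few of them informative.
X, y = make_classification(n_samples=100, n_features=200,
                           n_informative=5, random_state=0)

l2 = LinearSVC(penalty='l2', dual=False, max_iter=5000).fit(X, y)
l1 = LinearSVC(penalty='l1', dual=False, C=0.1, max_iter=5000).fit(X, y)

# The L1 penalty drives most weights to exactly zero,
# giving built-in gene selection.
print("nonzero weights, L2-norm SVM:", np.count_nonzero(l2.coef_))
print("nonzero weights, L1-norm SVM:", np.count_nonzero(l1.coef_))
```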

The standard L2-norm support vector machine (SVM) is a widely used tool for classification problems. The L1-norm SVM is a variant of the standard L2-norm SVM that constrains the L1-norm of the fitted coefficients. Due to the nature of the L1-norm, the L1-norm SVM has the property of automatically selecting variables, not shared by the standard L2-norm SVM. …
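A hinge-loss model with an elastic-net penalty, one way to combine the L1 sparsity described here with L2 stability, can be sketched with scikit-learn's `SGDClassifier`. This is an assumption-laden analogue fit by stochastic gradient descent, not the paper's algorithm:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=1)

# Hinge loss + elastic-net penalty: sparse like the L1-norm SVM,
# while the L2 component stabilizes correlated useful features.
clf = SGDClassifier(loss='hinge', penalty='elasticnet',
                    alpha=0.05, l1_ratio=0.5,
                    max_iter=2000, random_state=0).fit(X, y)
print("nonzero weights:", np.count_nonzero(clf.coef_))
```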