
The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. This form of penalty encourages sparse solutions (with many coefficients equal to 0). We propose the ‘fused lasso’, a generalization designed for problems with features that can be ordered in some meaningful way. The fused lasso…
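As an illustrative sketch of the two penalties the abstract contrasts (toy coefficient values, not from the paper): the lasso penalizes the coefficients themselves, while the fused lasso additionally penalizes differences between successive coefficients, so it favors profiles that are both sparse and locally constant. The weights `lam1`/`lam2` are placeholder names.

```python
# Sketch of the lasso vs. fused lasso penalties for a coefficient vector
# whose features have a meaningful ordering (e.g., positions along a sequence).

def lasso_penalty(beta):
    # L1-norm: sum of absolute coefficient values; encourages many exact zeros.
    return sum(abs(b) for b in beta)

def fused_lasso_penalty(beta, lam1=1.0, lam2=1.0):
    # Adds a second L1 term on successive differences, encouraging
    # local constancy of the coefficient profile as well as sparsity.
    sparsity = sum(abs(b) for b in beta)
    smoothness = sum(abs(beta[j] - beta[j - 1]) for j in range(1, len(beta)))
    return lam1 * sparsity + lam2 * smoothness

beta = [0.0, 0.0, 2.0, 2.0, 2.0, 0.0]
print(lasso_penalty(beta))        # 6.0
print(fused_lasso_penalty(beta))  # 6.0 + (0 + 2 + 0 + 0 + 2) = 10.0
```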

In this paper we argue that the choice of the SVM cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model.

- Trevor J. Hastie, Saharon Rosset, Robert Tibshirani, Ji Zhu
- Journal of Machine Learning Research
- 2004

The standard 2-norm SVM is known for its good performance in two-class classification. In this paper, we consider the 1-norm SVM. We argue that the 1-norm SVM may have some advantage over the standard 2-norm SVM, especially when there are redundant noise features. We also propose an efficient algorithm that computes the whole solution path of the 1-norm SVM…

- Ji Zhu, Saharon Rosset, Trevor J. Hastie, Robert Tibshirani
- NIPS
- 2003
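A minimal sketch of the objective the 1-norm SVM minimizes: hinge loss plus an L1 (rather than squared L2) penalty on the weights, which is what drives redundant noise features to exactly zero. The data and parameter names here are illustrative, not from the paper.

```python
# Illustrative 1-norm SVM objective: sum of hinge losses + L1 penalty on weights.

def hinge_l1_objective(w, b, X, y, lam):
    # hinge loss: max(0, 1 - y_i * (w . x_i + b)) summed over the sample
    hinge = sum(max(0.0, 1.0 - yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b))
                for xi, yi in zip(X, y))
    # L1 penalty encourages exact zeros in w (feature selection)
    return hinge + lam * sum(abs(wj) for wj in w)

X = [[1.0, 0.0], [-1.0, 0.0]]
y = [1, -1]
print(hinge_l1_objective([1.0, 0.0], 0.0, X, y, lam=0.5))  # 0 hinge + 0.5 penalty
```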

The support vector machine (SVM) is known for its good performance in binary classification, but its extension to multi-class classification is still an ongoing research issue. In this paper, we propose a new approach for classification, called the import vector machine (IVM), which is built on kernel logistic regression (KLR). We show that the IVM not…

- Ji Zhu, Trevor J. Hastie
- NIPS
- 2001
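A sketch of the regularized kernel logistic regression loss that the IVM builds on, under the usual representer form f(x_i) = Σ_j α_j K(x_i, x_j); the kernel matrix, labels, and `lam` below are toy placeholders, not values from the paper.

```python
import math

# Regularized KLR loss: negative binomial log-likelihood + RKHS-norm penalty,
# hand-rolled for a tiny toy kernel matrix.

def klr_loss(alpha, K, y, lam):
    # fitted values f_i = sum_j alpha_j K[i][j]
    f = [sum(a * kij for a, kij in zip(alpha, row)) for row in K]
    # logistic (negative log-likelihood) loss with labels y_i in {-1, +1}
    nll = sum(math.log(1.0 + math.exp(-yi * fi)) for yi, fi in zip(y, f))
    # penalty lam * alpha' K alpha (squared RKHS norm of f)
    ridge = lam * sum(ai * sum(a * kij for a, kij in zip(alpha, row))
                      for ai, row in zip(alpha, K))
    return nll + ridge

K = [[1.0, 0.5], [0.5, 1.0]]  # toy kernel matrix
y = [1, -1]
print(klr_loss([0.0, 0.0], K, y, lam=0.1))  # 2*log(2) at alpha = 0
```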

Boosting has been a very successful technique for solving the two-class classification problem. In going from two-class to multi-class classification, most algorithms have been restricted to reducing the multi-class classification problem to multiple two-class problems. In this paper, we develop a new algorithm that directly extends the AdaBoost algorithm…

- Ji Zhu, Hui Zou, Saharon Rosset, Trevor Hastie
- 2005
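One way such a direct multi-class extension can work (a hedged sketch, not necessarily the paper's exact rule) is to add a log(K - 1) term to AdaBoost's weak-learner weight, so that any K-class learner better than random guessing (error below (K - 1)/K) still receives positive weight; for K = 2 the term vanishes and the two-class formula is recovered.

```python
import math

# Illustrative multi-class weak-learner weight with a log(K - 1) correction.
# For K = 2 this reduces to the standard AdaBoost weight log((1 - err) / err).

def learner_weight(err, K):
    return math.log((1.0 - err) / err) + math.log(K - 1)

print(learner_weight(0.5, 2))  # 0.0: random guessing earns no weight when K = 2
print(learner_weight(0.6, 3))  # positive: 60% error still beats random for K = 3
```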

The paper proposes a method for constructing a sparse estimator for the inverse covariance (concentration) matrix in high-dimensional settings. The estimator uses a penalized normal likelihood approach and forces sparsity by using a lasso-type penalty. We establish a rate of convergence in the Frobenius norm as both data dimension p and sample size n are…
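The penalized normal likelihood objective has the generic form tr(S Ω) - log det(Ω) + λ · Σ|off-diagonal entries of Ω|, where S is the sample covariance and Ω the concentration matrix. A dependency-free sketch, hand-rolled for the 2x2 case so the determinant is explicit (S and λ below are toy values):

```python
import math

# Lasso-penalized Gaussian likelihood objective for a 2x2 concentration matrix.

def objective_2x2(omega, S, lam):
    # tr(S @ Omega)
    tr = sum(S[i][j] * omega[j][i] for i in range(2) for j in range(2))
    det = omega[0][0] * omega[1][1] - omega[0][1] * omega[1][0]
    # lasso-type penalty on the off-diagonal entries only
    penalty = lam * (abs(omega[0][1]) + abs(omega[1][0]))
    return tr - math.log(det) + penalty

S = [[1.0, 0.2], [0.2, 1.0]]          # toy sample covariance
identity = [[1.0, 0.0], [0.0, 1.0]]
print(objective_2x2(identity, S, lam=0.1))  # 2.0: trace 2, log det 0, no penalty
```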

We consider the generic regularized optimization problem β̂(λ) = arg min_β L(y, Xβ) + λJ(β). Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. 32 (2004) 407–499] have shown that for the LASSO (that is, if L is squared error loss and J(β) = ‖β‖₁, the ℓ1 norm of β), the optimal coefficient path is piecewise linear; that is, ∂β̂(λ)/∂λ is piecewise constant.…

- Ji Zhu
- 2005
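The simplest instance of this piecewise-linear behavior: for an orthonormal design, each lasso coefficient is the soft-thresholded least squares estimate, sign(z)·max(|z| - λ, 0), which is linear in λ until it hits zero. A toy sketch (the value z = 2.0 is illustrative):

```python
# Soft-thresholding: the lasso coefficient path for orthonormal X,
# piecewise linear in the regularization parameter lam.

def soft_threshold(z, lam):
    shrunk = abs(z) - lam
    return (shrunk if shrunk > 0 else 0.0) * (1 if z >= 0 else -1)

for lam in (0.0, 0.5, 1.0, 1.5, 2.5):
    # coefficient decreases linearly with lam, then stays exactly 0
    print(lam, soft_threshold(2.0, lam))
```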

We propose a procedure for constructing a sparse estimator of a multivariate regression coefficient matrix that accounts for correlation of the response variables. This method, which we call multivariate regression with covariance estimation (MRCE), involves penalized likelihood with simultaneous estimation of the regression coefficients and the covariance…

- Adam J. Rothman, Elizaveta Levina, Ji Zhu
- Journal of Computational and Graphical Statistics
- 2010

In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an ℓ1 constraint on the coefficient vector. This helps explain the success of boosting with early stopping as regularized fitting of the loss criterion.…

- Saharon Rosset, Ji Zhu, Trevor J. Hastie
- Journal of Machine Learning Research
- 2004
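The boosting-as-regularization connection can be illustrated with ε-stagewise fitting: each step nudges, by a tiny amount ε, the single coefficient whose feature best correlates with the current residual, tracing out a path akin to an ℓ1-constrained fit as the number of steps grows. The data below are toy values, not from the paper.

```python
# Epsilon-stagewise least squares: a slowed-down boosting iteration whose
# coefficient path approximates l1-regularized fitting; early stopping
# corresponds to a tighter l1 constraint.

def stagewise(X, y, eps=0.01, steps=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(steps):
        resid = [y[i] - sum(X[i][j] * beta[j] for j in range(p)) for i in range(n)]
        # inner product of each feature with the current residual
        corr = [sum(X[i][j] * resid[i] for i in range(n)) for j in range(p)]
        j = max(range(p), key=lambda k: abs(corr[k]))  # most correlated feature
        beta[j] += eps if corr[j] > 0 else -eps        # tiny step in its direction
    return beta

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 0.0, 1.0]  # y depends on the first feature only
print(stagewise(X, y))
```

With enough small steps the first coefficient approaches 1 while the irrelevant second coefficient stays at zero, mirroring the sparsity of an ℓ1-constrained solution.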

MOTIVATION: The standard L2-norm support vector machine (SVM) is a widely used tool for microarray classification. Previous studies have demonstrated its superior performance in terms of classification accuracy. However, a major limitation of the SVM is that it cannot automatically select relevant genes for the classification. The L1-norm SVM is a…