- Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, Keith Knight
- 2004

The lasso penalizes a least squares regression by the sum of the absolute values (the L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the 'fused lasso', a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso…
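
For reference, the fused-lasso criterion in its penalized (Lagrangian) form, with $y$ the response, $x_{ij}$ the ordered features, and $\lambda_1, \lambda_2$ the two tuning parameters (a standard statement of the criterion, not quoted from the paper itself):

```latex
\hat\beta = \arg\min_{\beta}\;
  \frac{1}{2}\sum_{i=1}^{n}\Big(y_i - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^{2}
  + \lambda_1 \sum_{j=1}^{p} |\beta_j|
  + \lambda_2 \sum_{j=2}^{p} |\beta_j - \beta_{j-1}|
```

The first penalty drives individual coefficients to zero; the second drives successive differences to zero, producing locally constant coefficient profiles over the feature ordering.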

The standard 2-norm SVM is known for its good performance in two-class classification. In this paper, we consider the 1-norm SVM. We argue that the 1-norm SVM may have some advantage over the standard 2-norm SVM, especially when there are redundant noise features. We also propose an efficient algorithm that computes the whole solution path of the 1-norm SVM,…
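
A minimal sketch of fitting an L1-penalized linear SVM with scikit-learn. Note that `LinearSVC` with `penalty='l1'` uses the squared hinge loss, so it is a close relative of, not identical to, the paper's 1-norm SVM; the synthetic data here are purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

# penalty='l1' requires the primal formulation (dual=False) and pairs with
# the squared hinge loss in scikit-learn.
clf = LinearSVC(penalty='l1', dual=False, C=0.1).fit(X, y)
print("nonzero coefficients:", int((clf.coef_ != 0).sum()), "of", X.shape[1])
```

With many redundant noise features, the L1 penalty zeroes out most coefficients, which is the advantage the abstract refers to.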

In this paper we argue that the choice of the SVM cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model.
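
The paper's algorithm traces the exact solution path in the cost parameter. As a rough illustration of why that matters, the brute-force alternative below refits a linear SVM at each point of a grid of C values (hypothetical synthetic data, scikit-learn assumed):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=5, random_state=0)

# Refit at each C on a grid; the exact-path algorithm obtains all of these
# solutions (and everything in between) for roughly the cost of one fit.
for C in np.logspace(-3, 3, 7):
    clf = SVC(kernel='linear', C=C).fit(X, y)
    print(f"C={C:10.3f}  support vectors={clf.n_support_.sum():3d}  "
          f"train accuracy={clf.score(X, y):.3f}")
```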

- Ji Zhu, Hui Zou, Saharon Rosset, Trevor Hastie
- 2005

Boosting has been a very successful technique for solving the two-class classification problem. In going from two-class to multi-class classification, most algorithms have been restricted to reducing the multi-class classification problem to multiple two-class problems. In this paper, we develop a new algorithm that directly extends the AdaBoost algorithm…
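
Assuming this is the SAMME algorithm (the name under which this multi-class extension of AdaBoost is usually implemented), a compact sketch of the key update: it is ordinary AdaBoost reweighting plus an extra log(K-1) term, which relaxes the weak-learner requirement from better-than-1/2 to better-than-1/K accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def samme_fit(X, y, n_rounds=100):
    n, K = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)                      # example weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        miss = stump.predict(X) != y
        err = np.clip(w[miss].sum(), 1e-10, 1 - 1e-10)
        if err >= 1 - 1.0 / K:                   # no better than random guessing
            break
        alpha = np.log((1 - err) / err) + np.log(K - 1)  # extra log(K-1) term
        w *= np.exp(alpha * miss)                # up-weight misclassified points
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas, K

def samme_predict(X, stumps, alphas, K):
    votes = np.zeros((len(X), K))
    for stump, alpha in zip(stumps, alphas):
        votes[np.arange(len(X)), stump.predict(X)] += alpha
    return votes.argmax(axis=1)

X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
stumps, alphas, K = samme_fit(X, y)
print("training accuracy:", (samme_predict(X, stumps, alphas, K) == y).mean())
```

With K = 2 the log(K-1) term vanishes and the procedure reduces to standard AdaBoost.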

- Saharon Rosset, Ji Zhu
- 2005

We consider the generic regularized optimization problem $\hat\beta(\lambda) = \arg\min_\beta L(y, X\beta) + \lambda J(\beta)$. Efron, Hastie, Johnstone and Tibshirani have shown that for the LASSO, that is, if $L$ is squared error loss and $J(\beta) = \|\beta\|_1$ is the $\ell_1$ norm of $\beta$, the optimal coefficient path is piecewise linear, that is, $\partial\hat\beta(\lambda)/\partial\lambda$ is piecewise constant. We derive a general characterization of the properties of (loss $L$,…
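
In the LASSO case this structure is directly exploitable: the whole path is determined by its breakpoints. A small illustration on synthetic data, using scikit-learn's `lars_path`:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=100, n_features=8, noise=3.0, random_state=0)

# lars_path returns the exact knots of the piecewise-linear LASSO path;
# because the path is linear between consecutive knots, these few vectors
# plus linear interpolation encode the solution at every lambda.
alphas, _, coefs = lars_path(X, y, method='lasso')
print("knots:", np.round(alphas, 3))
print("coefficients at knots (one column per knot):")
print(np.round(coefs, 2))
```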

In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an $\ell_1$ constraint on the coefficient vector. This helps explain the success of boosting with early stopping as regularized fitting of the loss criterion.…
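
A toy version of the connection: epsilon-forward-stagewise fitting, a boosting-style procedure that each round takes a tiny step on the single predictor most correlated with the current residual. Run with a small step size and stopped early, its coefficient trajectory closely tracks the $\ell_1$-constrained (LASSO) path. A sketch on synthetic data, not the paper's exact setup:

```python
import numpy as np
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=100, n_features=10, noise=5.0, random_state=0)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize predictors
r = y - y.mean()                            # current residual
beta = np.zeros(X.shape[1])
eps = 0.01

for _ in range(2000):                       # rounds play the role of 1/penalty
    corr = X.T @ r
    j = np.argmax(np.abs(corr))             # predictor most correlated with r
    step = eps * np.sign(corr[j])
    beta[j] += step                         # tiny coefficient update ...
    r -= step * X[:, j]                     # ... and matching residual update

print("nonzero:", int((beta != 0).sum()),
      " l1 norm:", round(float(np.abs(beta).sum()), 2))
```

Stopping after fewer rounds yields a smaller $\ell_1$ norm, which is exactly the early-stopping-as-regularization reading in the abstract.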

- Doron M Behar, Bayazit Yunusbayev, Mait Metspalu, Ene Metspalu, Saharon Rosset, Jüri Parik +15 others
- Nature
- 2010

Contemporary Jews comprise an aggregate of ethno-religious communities whose worldwide members identify with each other through various shared religious, historical and cultural traditions. Historical evidence suggests common origins in the Middle East, followed by migrations leading to the establishment of communities of Jews in Europe, Africa and Asia, in…

- Trevor Hastie, Jonathan Taylor, Robert Tibshirani, Guenther Walther, Steven Boyd, Jerome Friedman +3 others
- 2006

We consider the least angle regression and forward stagewise algorithms for solving penalized least squares regression problems. In Efron, Hastie, Johnstone & Tibshirani (2004) it is proved that the least angle regression algorithm, with a small modification, solves the lasso regression problem. Here we give an analogous result for incremental forward…
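
Both algorithms the abstract compares are exposed through scikit-learn's `lars_path`; a quick side-by-side on synthetic data:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=80, n_features=12, noise=10.0, random_state=1)

# method='lar': plain least angle regression.
# method='lasso': LAR plus the modification that drops a variable from the
# active set when its coefficient crosses zero, which yields the lasso path.
_, _, coefs_lar = lars_path(X, y, method='lar')
_, _, coefs_lasso = lars_path(X, y, method='lasso')
print("LAR knots:", coefs_lar.shape[1], " lasso knots:", coefs_lasso.shape[1])
```

On data where no coefficient ever crosses zero, the two paths coincide and the knot counts match; extra knots in the lasso path mark the drop events.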

- Saharon Rosset
- 2004

Regularization plays a central role in the analysis of modern data, where non-regularized fitting is likely to lead to over-fitted models, useless for both prediction and interpretation. We consider the design of incremental algorithms which follow paths of regularized solutions, as the regularization varies. These approaches often result in methods which…
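
Not the paper's algorithm, but the same incremental idea can be mimicked by warm-starting successive fits along a grid of regularization values, here for $\ell_1$-penalized logistic regression, whose exact path is curved rather than piecewise linear (scikit-learn's `saga` solver assumed):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)
clf = LogisticRegression(penalty='l1', solver='saga', warm_start=True,
                         max_iter=5000)

# Sweep from strong to weak regularization (small to large C); each fit
# starts from the previous solution, approximately following the path.
for C in np.logspace(-2, 2, 9):
    clf.set_params(C=C)
    clf.fit(X, y)
    print(f"C={C:7.2f}  nonzero coefficients={int((clf.coef_ != 0).sum()):2d}")
```

Because neighboring solutions on the path are close, each warm-started fit typically needs far fewer iterations than fitting from scratch, which is the efficiency the incremental path-following viewpoint buys.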