Sparsity and smoothness via the fused lasso
The fused lasso is proposed: a generalization of the lasso designed for problems whose features can be ordered in some meaningful way, which is especially useful when the number of features p is much greater than the sample size N.
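As a minimal sketch of the idea (not the authors' fitting algorithm), the fused lasso adds two penalties to the loss: an l1 penalty on the coefficients themselves and an l1 penalty on differences of successive coefficients, so that ordered features are encouraged to share equal values. The function name and toy data below are illustrative:

```python
import numpy as np

def fused_lasso_objective(beta, X, y, lam1, lam2):
    """Least-squares loss plus the two fused-lasso penalties:
    lam1 * sum(|beta_j|) encourages sparsity, and
    lam2 * sum(|beta_j - beta_{j-1}|) encourages successive
    coefficients (in their given ordering) to be equal."""
    resid = y - X @ beta
    return (0.5 * resid @ resid
            + lam1 * np.abs(beta).sum()
            + lam2 * np.abs(np.diff(beta)).sum())

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
beta_smooth = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # piecewise constant
beta_rough  = np.array([1.0, 0.0, 1.0, 0.0, 1.0])  # same l1 norm, not smooth
y = X @ beta_smooth

# With lam2 > 0, the fused penalty prefers the piecewise-constant vector
# even though both candidates have the same l1 norm.
print(fused_lasso_objective(beta_smooth, X, y, 1.0, 1.0) <
      fused_lasso_objective(beta_rough, X, y, 1.0, 1.0))  # True
```

Both candidate vectors pay the same sparsity penalty (l1 norm 3), but the smooth one pays a difference penalty of 1 versus 4 for the rough one, which is exactly the smoothness the summary refers to.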
Multi-class AdaBoost
A new algorithm is proposed that naturally extends the original AdaBoost algorithm to the multi-class case without reducing it to multiple two-class problems; it is extremely easy to implement and highly competitive with the best currently available multi-class classification methods.
1-norm Support Vector Machines
The standard 2-norm SVM is known for its good performance in two-class classification. In this paper, we consider the 1-norm SVM. We argue that the 1-norm SVM may have some advantage over the…
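An l1-penalized linear SVM is available in scikit-learn and illustrates the main practical advantage discussed here: the 1-norm penalty zeroes out many coefficients, giving automatic feature selection. This is a sketch, not the paper's algorithm (scikit-learn's `LinearSVC` with `penalty="l1"` solves a squared-hinge variant), and the toy data and `C` value are illustrative:

```python
# Hedged sketch: compare sparsity of 1-norm vs 2-norm linear SVMs.
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=0)
l1_svm = LinearSVC(penalty="l1", loss="squared_hinge", dual=False,
                   C=0.1, max_iter=10000).fit(X, y)
l2_svm = LinearSVC(penalty="l2", loss="squared_hinge", dual=False,
                   C=0.1, max_iter=10000).fit(X, y)

# The 1-norm penalty drives many coefficients exactly to zero;
# the 2-norm penalty only shrinks them.
print((l1_svm.coef_ == 0).sum(), (l2_svm.coef_ == 0).sum())
```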
The Entire Regularization Path for the Support Vector Machine
- T. Hastie, S. Rosset, R. Tibshirani, Ji Zhu
- MathematicsJournal of machine learning research
- 1 December 2004
An algorithm is derived that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model.
Surprises in High-Dimensional Ridgeless Least Squares Interpolation
This paper recovers, in a precise quantitative way, several phenomena that have been observed in large-scale neural networks and kernel machines, including the "double descent" behavior of the prediction risk and the potential benefits of overparametrization.
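The object of study, "ridgeless" least squares, is the minimum-l2-norm interpolator, i.e. the limit of ridge regression as the penalty tends to zero. In the overparametrized regime (p > n) it fits the training data exactly; a minimal sketch with illustrative toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 100                      # overparametrized: p > n
X = rng.standard_normal((n, p))
beta_star = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta_star + 0.1 * rng.standard_normal(n)

# "Ridgeless" least squares: the minimum-l2-norm interpolator,
# computed via the Moore-Penrose pseudoinverse.
beta_hat = np.linalg.pinv(X) @ y

# It interpolates the training data exactly...
print(np.allclose(X @ beta_hat, y))  # True
# ...yet its prediction risk can still behave well; the paper
# characterizes how the risk depends on the aspect ratio p/n.
```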
Piecewise linear regularized solution paths
We consider the generic regularized optimization problem β̂(λ) = argmin_β L(y, Xβ) + λJ(β). Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. 32 (2004) 407-499] have shown that for the LASSO, that…
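For the LASSO in particular, the solution path β̂(λ) is piecewise linear in λ, which is what lets LARS-type algorithms compute the entire path at the cost of a single fit. A hedged sketch using scikit-learn's `lars_path` on illustrative toy data:

```python
# Sketch: the LASSO coefficient path is piecewise linear in lambda;
# LARS computes all of its knots in one pass.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=60, n_features=10, noise=1.0,
                       random_state=0)
alphas, _, coefs = lars_path(X, y, method="lasso")

# coefs[:, k] is the solution at knot alphas[k]; between knots the
# path is linear, so the knots determine beta(lambda) for every lambda.
print(coefs.shape)  # (n_features, number_of_knots)
```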
A "Copernican" reassessment of the human mitochondrial DNA tree from its root.
Missense mutations in the APOL1 gene are highly associated with end stage kidney disease risk previously attributed to the MYH9 gene
This work uses recently released sequences from the 1000 Genomes Project to identify two western African-specific missense mutations in the neighboring APOL1 gene, and demonstrates that these are more strongly associated with ESKD than previously reported MYH9 variants.
Boosting as a Regularized Path to a Maximum Margin Classifier
It builds on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an l1 constraint on the coefficient vector, and that as the constraint is relaxed the solution converges (in the separable case) to an "l1-optimal" separating hyperplane.