Greedy function approximation: A gradient boosting machine.
Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions…
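The function-space view described above can be sketched in a few lines: for squared-error loss, the negative gradient at each stage is just the residual vector, and each boosting round takes a shrunken steepest-descent step by fitting a base learner to those residuals. The sketch below uses one-feature regression stumps as the base learner; all names and parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump for targets r: returns
    (threshold, left mean, right mean) minimizing squared error."""
    best, best_sse = (x[0], r.mean(), r.mean()), np.inf
    for t in np.unique(x)[:-1]:          # [:-1] keeps the right side nonempty
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best, best_sse = (t, left.mean(), right.mean()), sse
    return best

def gradient_boost(x, y, n_rounds=100, lr=0.1):
    """Stagewise additive expansion: each round fits a stump to the
    current residuals (the negative gradient of squared loss) and adds
    a shrunken copy of it to the fit."""
    pred = np.full(len(y), y.mean())     # F_0: the best constant fit
    stumps = []
    for _ in range(n_rounds):
        t, lm, rm = fit_stump(x, y - pred)   # fit to pseudo-residuals
        pred += lr * np.where(x <= t, lm, rm)
        stumps.append((t, lm, rm))
    return y.mean(), stumps, pred
```

With enough rounds, the sum of many weak stumps approximates a smooth target even though each individual step is crude; the shrinkage factor `lr` trades rounds for stability.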
The Elements of Statistical Learning
The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd Edition
This book describes the important ideas in these areas in a common conceptual framework.
Regularization Paths for Generalized Linear Models via Coordinate Descent.
We develop fast algorithms for estimation of generalized linear models with convex penalties.
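The core of these algorithms, in the lasso case, is a cyclic coordinate descent in which each one-dimensional subproblem has a closed-form soft-thresholding solution. A naive sketch follows, assuming standardized predictors; the function names and the fixed sweep count are illustrative and this is not the paper's glmnet implementation.

```python
import numpy as np

def soft_threshold(z, g):
    """S(z, g) = sign(z) * max(|z| - g, 0): the closed-form solution
    of each one-dimensional lasso subproblem."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for
        (1/2n) ||y - X b||^2 + lam ||b||_1,
    assuming each column of X has mean 0 and variance 1 (so the
    coordinate update needs no rescaling)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]        # partial residual without j
            b[j] = soft_threshold(X[:, j] @ r_j / n, lam)
    return b
```

For large `lam` every coordinate is thresholded to exactly zero, which is what makes warm starts along a decreasing penalty sequence cheap: most coordinates stay at zero from one solution to the next.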
Sparse inverse covariance estimation with the graphical lasso.
We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm…
Stochastic gradient boosting
Gradient boosting constructs additive regression models by sequentially fitting a simple parameterized function (base learner) to current "pseudo"-residuals by least squares at each iteration.
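The stochastic variant fits each base learner to the pseudo-residuals of a random subsample rather than the full training set. A minimal sketch under squared-error loss follows; the random-split stump and all names are illustrative simplifications, not the paper's procedure.

```python
import numpy as np

def stochastic_boost(x, y, n_rounds=300, lr=0.1, frac=0.5, seed=0):
    """Each round draws a random subsample, computes its pseudo-
    residuals, fits a cheap threshold stump to them, and adds a
    shrunken copy of that stump to the running prediction."""
    rng = np.random.default_rng(seed)
    pred = np.full(len(y), y.mean())              # F_0: constant fit
    for _ in range(n_rounds):
        idx = rng.choice(len(y), size=max(2, int(frac * len(y))), replace=False)
        xs, rs = x[idx], (y - pred)[idx]          # subsampled residuals
        t = rng.choice(xs)                        # random split point
        left = rs[xs <= t].mean()
        right = rs[xs > t].mean() if np.any(xs > t) else 0.0
        pred = pred + lr * np.where(x <= t, left, right)
    return pred
```

The subsampling injects randomness into each steepest-descent step; the shrinkage factor `lr` averages that noise out over rounds, and in practice the randomized fits often generalize better than the deterministic ones.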
Discussion of the Paper "Additive Logistic Regression: A Statistical View of Boosting"
The main and important contribution of this paper is in establishing a connection between boosting, a newcomer to the statistics scene, and additive models. One of the main properties of boosting…
Regularized Discriminant Analysis
Linear and quadratic discriminant analysis are considered in the small-sample, high-dimensional setting. Alternatives to the usual maximum likelihood (plug-in) estimates for the covariance…
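One standard alternative in this small-sample setting is to shrink the sample covariance toward a scaled identity, which restores invertibility when there are fewer observations than dimensions. A minimal sketch in that spirit follows; the function name and the `gamma` knob are illustrative, not the paper's exact estimator.

```python
import numpy as np

def shrunk_covariance(X, gamma):
    """Regularized covariance estimate: a convex combination of the
    sample covariance S and a scaled identity (tr(S)/p) * I.
    gamma in [0, 1] controls the amount of shrinkage."""
    S = np.cov(X, rowvar=False)          # rows of X are observations
    p = S.shape[0]
    return (1.0 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
```

For any `gamma > 0` the result is positive definite even when `S` is singular, since every eigenvalue is lifted by at least `gamma * tr(S)/p`.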
Special Invited Paper. Additive logistic regression: A statistical view of boosting
Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data…
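The reweighting scheme described above is AdaBoost's: each round a weak classifier is fit under the current weights, and misclassified points are then upweighted so the next round focuses on them. A minimal sketch with one-dimensional threshold stumps follows (labels must be in {-1, +1}); all names are illustrative.

```python
import numpy as np

def adaboost_stumps(x, y, n_rounds=25):
    """AdaBoost with threshold stumps on a single feature. Each round
    picks the stump minimizing weighted error, then reweights the data
    to emphasize its mistakes."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # uniform initial weights
    ensemble = []
    for _ in range(n_rounds):
        best_err, best_h, best_rule = np.inf, None, None
        for t in np.unique(x):
            for s in (1, -1):
                h = np.where(x > t, s, -s)       # candidate weak classifier
                err = w[h != y].sum()
                if err < best_err:
                    best_err, best_h, best_rule = err, h, (t, s)
        err = np.clip(best_err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # classifier weight
        w = w * np.exp(-alpha * y * best_h)      # upweight mistakes
        w /= w.sum()
        ensemble.append((*best_rule, alpha))
    return ensemble

def ada_predict(ensemble, x):
    score = sum(a * np.where(x > t, s, -s) for t, s, a in ensemble)
    return np.where(score >= 0, 1, -1)
```

The paper's statistical view interprets exactly this procedure as stagewise fitting of an additive logistic model under exponential loss, with `alpha` as the fitted coefficient of each weak learner.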
We consider "one-at-a-time" coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the L1-penalized regression (lasso)…