• Publications
Model selection and estimation in regression with grouped variables
Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor …
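The grouped-variable selection described here is what the group lasso penalty does: it penalizes the sum of Euclidean norms over coefficient groups, so whole groups enter or leave the model together. Below is a minimal proximal-gradient sketch of that penalty (the paper's own algorithms, such as group LARS, differ); data, group structure, and tuning values are all illustrative.

```python
import numpy as np

def group_lasso(X, y, groups, lam, lr=0.05, n_iter=1000):
    """Proximal gradient for least squares + lam * sum_g ||beta_g||_2.
    Illustrative sketch only, not the paper's group-LARS algorithm."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta -= lr * X.T @ (X @ beta - y) / len(y)   # gradient step
        for g in groups:                             # group-wise soft threshold
            nrm = np.linalg.norm(beta[g])
            beta[g] = 0.0 if nrm <= lr * lam else (1 - lr * lam / nrm) * beta[g]
    return beta

# Toy data: only the first of three two-variable groups is active.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = X @ np.array([2.0, -1.5, 0, 0, 0, 0]) + 0.1 * rng.normal(size=200)
beta = group_lasso(X, y, groups=[[0, 1], [2, 3], [4, 5]], lam=0.5)
```

On this toy problem the active group keeps a large coefficient norm while the two inactive groups are thresholded to zero as a block, which is the factor-selection behavior the abstract refers to.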
Model selection and estimation in the Gaussian graphical model
The implementation of the penalized likelihood methods for estimating the concentration matrix in the Gaussian graphical model is nontrivial, but it is shown that the computation can be done effectively by taking advantage of the efficient maxdet algorithm developed in convex optimization.
Composite quantile regression and the oracle model selection theory
Coefficient estimation and variable selection in multiple linear regression is routinely done in the (penalized) least squares (LS) framework. The concept of model selection oracle introduced by Fan …
High Dimensional Semiparametric Gaussian Copula Graphical Models
It is proved that the nonparanormal SKEPTIC achieves the optimal parametric rates of convergence for both graph recovery and parameter estimation; this result suggests that nonparanormal graphical models can be used as a safe replacement for the popular Gaussian graphical models, even when the data are truly Gaussian.
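The key device behind this rank-based approach is that, under a Gaussian copula, the latent correlation can be recovered from Kendall's tau via the transform sin((π/2)·τ), which is invariant to monotone marginal transformations. A minimal sketch of that correlation estimate (names and the toy transforms are illustrative, and the full SKEPTIC estimator involves more than this step):

```python
import numpy as np
from scipy.stats import kendalltau

def skeptic_corr(X):
    """Rank-based latent correlation matrix: sin((pi/2) * Kendall's tau).
    Invariant to monotone marginal transforms (Gaussian copula model)."""
    p = X.shape[1]
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau, _ = kendalltau(X[:, i], X[:, j])
            R[i, j] = R[j, i] = np.sin(np.pi / 2 * tau)
    return R

# Latent Gaussian pair with correlation 0.6, observed through monotone
# transforms (exp and cube) that destroy marginal normality.
rng = np.random.default_rng(1)
Z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
X = np.column_stack([np.exp(Z[:, 0]), Z[:, 1] ** 3])
R = skeptic_corr(X)
```

Despite the non-Gaussian marginals, `R[0, 1]` lands close to the latent correlation 0.6, which is why the resulting graph estimate loses little even when the data are truly Gaussian.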
High Dimensional Inverse Covariance Matrix Estimation via Linear Programming
  • M. Yuan
  • Computer Science, Mathematics
    J. Mach. Learn. Res.
  • 1 March 2010
An estimating procedure is proposed that can effectively exploit "sparsity" of the inverse covariance matrix and can be computed using linear programming and therefore has the potential to be used in very high dimensional problems.
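Estimating a sparse precision matrix by linear programming can be done column by column: minimize the ℓ1 norm of a candidate column subject to an ℓ∞ constraint tying it to the sample covariance. The sketch below illustrates that general Dantzig-selector-style idea with `scipy.optimize.linprog`; it is a hedged toy formulation, not the paper's exact procedure, and the matrix and tuning value are invented.

```python
import numpy as np
from scipy.optimize import linprog

def lp_precision_column(S, j, lam):
    """Illustrative sketch: min ||b||_1  s.t.  ||S b - e_j||_inf <= lam,
    solved as an LP in variables (b, u) with |b| <= u elementwise."""
    p = S.shape[0]
    e = np.zeros(p); e[j] = 1.0
    c = np.concatenate([np.zeros(p), np.ones(p)])     # minimize sum(u)
    I = np.eye(p)
    A = np.vstack([np.hstack([I, -I]),                #  b - u <= 0
                   np.hstack([-I, -I]),               # -b - u <= 0
                   np.hstack([S, np.zeros((p, p))]),  #  S b <= e + lam
                   np.hstack([-S, np.zeros((p, p))])])  # -S b <= -e + lam
    b_ub = np.concatenate([np.zeros(2 * p), e + lam, lam - e])
    res = linprog(c, A_ub=A, b_ub=b_ub,
                  bounds=[(None, None)] * p + [(0, None)] * p)
    return res.x[:p]

S = np.array([[1.0, 0.2], [0.2, 1.0]])   # toy covariance estimate
col0 = lp_precision_column(S, j=0, lam=0.1)
```

Because each column is a separate small LP, the columns can be solved independently (even in parallel), which is what makes LP-based formulations attractive in very high dimensions.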
A Reproducing Kernel Hilbert Space Approach to Functional Linear Regression
We study in this paper a smoothness regularization method for functional linear regression and provide a unified treatment for both the prediction and estimation problems. By developing a tool on …
A direct approach to sparse discriminant analysis in ultra-high dimensions
The theory shows that the proposed method can consistently identify the subset of discriminative features contributing to the Bayes rule and, at the same time, consistently estimate the Bayes classification direction, even when the dimension grows faster than any polynomial order of the sample size.
Minimax and Adaptive Prediction for Functional Linear Regression
This article considers minimax and adaptive prediction with functional predictors in the framework of the functional linear model and reproducing kernel Hilbert spaces. It proposes an easily implementable, data-driven roughness-regularization predictor that attains the optimal rate of convergence adaptively, without knowledge of the covariance kernel.
On the non‐negative garrotte estimator
Summary. We study the non‐negative garrotte estimator from three different aspects: consistency, computation and flexibility. We argue that the non‐negative garrotte is a general procedure that can …
The goal is to establish oracle inequalities for the excess risk of the resulting prediction rule showing that the method is adaptive both to the unknown design distribution and to the sparsity of the problem.