This paper examines the problem of locating outlier columns in a large, otherwise low-rank, matrix. We propose a simple two-step adaptive sensing and inference approach and establish theoretical guarantees for its performance; our results show that accurate outlier identification is achievable using very few linear summaries of the original data matrix …
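The snippet does not fix the details of the two steps, so the numpy sketch below is only one plausible instantiation: estimate the low-rank column space from a handful of sampled columns, then flag columns whose compressed residual energy outside that subspace is large. The sampling sizes, the number of probes, and the threshold are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

def find_outlier_columns(M, r=5, n_probe=20, thresh=1e-6, rng=None):
    """Illustrative two-step outlier-column search on a mostly low-rank matrix M."""
    rng = np.random.default_rng(rng)
    n, p = M.shape

    # Step 1: sample a few columns and estimate the low-rank column space
    # from their top-r left singular vectors.
    idx = rng.choice(p, size=min(4 * r, p), replace=False)
    U, _, _ = np.linalg.svd(M[:, idx], full_matrices=False)
    U_r = U[:, :r]

    # Step 2: compress every column with a few random linear summaries and
    # score its energy outside the (compressed) estimated subspace.
    A = rng.standard_normal((n_probe, n)) / np.sqrt(n_probe)
    Y = A @ M                        # n_probe linear summaries per column
    B = A @ U_r                      # compressed basis of the low-rank part
    proj = B @ np.linalg.pinv(B)     # projector onto span(A @ U_r)
    scores = np.linalg.norm(Y - proj @ Y, axis=0)
    return np.flatnonzero(scores > thresh)
```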
With the rapid development of information technology and modern medical administration, the generation of medical records is becoming increasingly intelligent. In this paper, Case-Based Reasoning is applied to the process of generating records for dental cases. Based on an analysis of the features of dental records, a case base is …
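As a rough illustration of the retrieve step of such a Case-Based Reasoning loop (the feature encoding, weights, and record templates below are purely hypothetical and are not taken from the paper):

```python
import numpy as np

# Hypothetical case base: each past case stores an encoded feature vector
# and the record template that was produced for it.
case_base = [
    {"features": np.array([1.0, 0.0, 3.0, 2.0]), "record": "template_A"},
    {"features": np.array([0.0, 1.0, 2.0, 2.0]), "record": "template_B"},
]
weights = np.array([2.0, 1.0, 1.0, 0.5])   # assumed relative importance of features

def retrieve(query, k=1):
    """Return the k past cases most similar to the query feature vector."""
    dists = [np.sqrt(np.sum(weights * (c["features"] - query) ** 2)) for c in case_base]
    order = np.argsort(dists)[:k]
    return [case_base[i] for i in order]

best = retrieve(np.array([1.0, 0.0, 2.5, 2.0]))[0]
print(best["record"])   # reuse / revise this retrieved record as the starting draft
```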
We propose a stochastic variance reduced optimization algorithm for solving a class of large-scale nonconvex optimization problems with cardinality constraints, and provide sufficient conditions under which the proposed algorithm enjoys strong linear convergence guarantees and optimal estimation accuracy in high dimensions. We further extend our analysis to …
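The snippet does not name the exact update rule. One standard way to pair variance reduction with a cardinality constraint is an SVRG-style inner loop followed by hard thresholding, sketched here for sparse least squares; the step size, epoch length, and sparsity level are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def hard_threshold(w, s):
    """Keep the s largest-magnitude coordinates of w, zero out the rest."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]
    out[keep] = w[keep]
    return out

def svrg_ht(X, y, s, eta=0.01, epochs=20, inner=None, rng=None):
    """Variance-reduced stochastic gradient steps projected onto the
    cardinality-s ball, for the least-squares loss (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    inner = inner or n
    w = np.zeros(d)
    for _ in range(epochs):
        w_ref = w.copy()
        full_grad = X.T @ (X @ w_ref - y) / n        # full gradient at the reference point
        for _ in range(inner):
            i = rng.integers(n)
            gi = X[i] * (X[i] @ w - y[i])            # stochastic gradient at w
            gi_ref = X[i] * (X[i] @ w_ref - y[i])    # same sample at the reference point
            w = hard_threshold(w - eta * (gi - gi_ref + full_grad), s)
    return w
```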
This paper describes an R package named flare, which implements a family of new high-dimensional regression methods (LAD Lasso, SQRT Lasso, ℓq Lasso, and the Dantzig selector) and their extensions to sparse precision matrix estimation (TIGER and CLIME). These methods exploit different nonsmooth loss functions to gain modeling flexibility, estimation robustness, …
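flare itself is an R package and its API is not reproduced here; the Python/cvxpy sketch below only writes out two of the nonsmooth objectives it covers (SQRT Lasso and LAD Lasso) to make the loss functions concrete.

```python
import numpy as np
import cvxpy as cp

def sqrt_lasso(X, y, lam):
    """min_beta ||y - X beta||_2 / sqrt(n) + lam * ||beta||_1   (SQRT Lasso)."""
    n, d = X.shape
    beta = cp.Variable(d)
    obj = cp.norm(y - X @ beta, 2) / np.sqrt(n) + lam * cp.norm1(beta)
    cp.Problem(cp.Minimize(obj)).solve()
    return beta.value

def lad_lasso(X, y, lam):
    """min_beta ||y - X beta||_1 / n + lam * ||beta||_1   (LAD Lasso)."""
    n, d = X.shape
    beta = cp.Variable(d)
    obj = cp.norm1(y - X @ beta) / n + lam * cp.norm1(beta)
    cp.Problem(cp.Minimize(obj)).solve()
    return beta.value
```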
Cyclic block coordinate descent-type (CBCD-type) methods have shown remarkable computational performance for solving strongly convex minimization problems. Typical applications include many popular statistical machine learning methods such as elastic-net regression, ridge-penalized logistic regression, and sparse additive regression. Existing …
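As a concrete instance, here is a minimal cyclic coordinate descent loop (block size one) for the elastic-net penalized least-squares problem, using the standard closed-form soft-threshold update per coordinate; the penalty parameters and sweep count are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cbcd_elastic_net(X, y, lam=0.1, alpha=0.5, sweeps=100):
    """Cyclic coordinate descent for
        min_beta (1/2n)||y - X beta||_2^2 + lam*(alpha*||beta||_1 + (1-alpha)/2*||beta||_2^2),
    cycling through coordinates with the closed-form soft-threshold update."""
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ beta
    for _ in range(sweeps):
        for j in range(d):
            r_j = r + X[:, j] * beta[j]        # residual with coordinate j removed
            rho = X[:, j] @ r_j / n
            beta_new = soft_threshold(rho, lam * alpha) / (col_sq[j] + lam * (1 - alpha))
            r = r_j - X[:, j] * beta_new
            beta[j] = beta_new
    return beta
```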
Nesterov's smoothing technique has been widely applied to solve nonsmooth optimization problems involving high-dimensional statistical models. However, existing theory focuses more on its computational properties than on its statistical properties. This paper bridges this gap by studying a family of regularized Dantzig-type estimators. For these …
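The estimator family studied in the paper is not reproduced here; the sketch below only shows one standard instance of Nesterov's smoothing, the entropy-prox (log-sum-exp) approximation of an ℓ∞ term of the kind that appears in Dantzig-type constraints, together with its gradient. The smoothing parameter mu is an assumption.

```python
import numpy as np

def smoothed_linf(z, mu):
    """Smooth approximation of ||z||_inf via log-sum-exp over +/- z:
    within mu*log(2*len(z)) of the true max, with a (1/mu)-Lipschitz gradient."""
    stacked = np.concatenate([z, -z]) / mu
    m = stacked.max()                                   # stabilize the exponentials
    return mu * (m + np.log(np.exp(stacked - m).sum()))

def smoothed_linf_grad(z, mu):
    """Gradient of the smoothed ||z||_inf: a softmax-weighted combination of +/- e_i."""
    stacked = np.concatenate([z, -z]) / mu
    w = np.exp(stacked - stacked.max())
    w /= w.sum()
    d = len(z)
    return w[:d] - w[d:]
```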
Many statistical machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility. In this paper, we study this fundamental tradeoff through a SQRT-Lasso problem for sparse linear regression and sparse precision matrix estimation in high dimensions. We explain how novel optimization techniques …
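One way to see the tradeoff concretely: the SQRT-Lasso loss is nonsmooth only when the residual is exactly zero, so plain proximal-gradient updates remain usable almost everywhere. The sketch below is a hedged illustration under that observation, not the paper's algorithm; the backtracking rule and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sqrt_lasso_prox_grad(X, y, lam, iters=500):
    """Proximal gradient (ISTA with simple backtracking) on
        ||y - X beta||_2 / sqrt(n) + lam * ||beta||_1.
    The loss is differentiable whenever the residual is nonzero, with gradient
    -X^T r / (sqrt(n) * ||r||_2), which is what makes smooth-style steps usable."""
    n, d = X.shape
    obj = lambda b: np.linalg.norm(y - X @ b) / np.sqrt(n) + lam * np.abs(b).sum()
    beta, step = np.zeros(d), 1.0
    for _ in range(iters):
        r = y - X @ beta
        norm_r = np.linalg.norm(r)
        if norm_r < 1e-12:                # nondifferentiable only at an exact fit
            break
        grad = -X.T @ r / (np.sqrt(n) * norm_r)
        # halve the step until the proximal update decreases the objective
        while True:
            cand = soft_threshold(beta - step * grad, step * lam)
            if obj(cand) <= obj(beta) or step < 1e-10:
                break
            step *= 0.5
        beta = cand
    return beta
```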