Non-convex penalized multitask regression using data depth-based penalties

@inproceedings{Majumdar2018NonconvexPM,
  title={Non-convex penalized multitask regression using data depth-based penalties},
  author={S. Majumdar and Snigdhansu Chatterjee},
  year={2018}
}
We propose a new class of nonconvex penalty functions, based on data depth functions, for multitask sparse penalized regression. These penalties quantify the relative position of rows of the coefficient matrix with respect to a fixed distribution centered at the origin. We derive the theoretical properties of an approximate one-step sparse estimator of the coefficient matrix using a local linear approximation of the penalty function, and provide an algorithm for its computation. For orthogonal design and …
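As a rough illustration of the one-step construction described above, here is a minimal Python sketch. It assumes (i) a Mahalanobis-type depth as a stand-in for the paper's depth functions, (ii) an LLA step that weights a row-wise group penalty by the depth of the pilot estimate's rows, and (iii) an orthogonal design X'X = nI, under which the weighted problem reduces to row-wise soft-thresholding. All function names and the specific weighting are illustrative assumptions, not the paper's actual construction.

import numpy as np

def mahalanobis_depth(b, sigma_inv):
    # Stand-in depth of a point b relative to a reference distribution
    # centered at the origin: equals 1 at the origin, decays toward 0.
    return 1.0 / (1.0 + float(b @ sigma_inv @ b))

def one_step_depth_penalized(X, Y, B_init, lam, sigma_inv):
    # LLA-style one-step update (illustrative, not the paper's exact
    # algorithm): each row's group penalty is weighted by the depth of
    # the corresponding row of B_init, so rows near the origin are
    # penalized heavily (and zeroed) while rows far from it are left
    # nearly unshrunk. Under X'X = n*I the solution is row-wise group
    # soft-thresholding of the OLS estimate.
    n, _ = X.shape
    B_ols = (X.T @ Y) / n
    B_hat = np.zeros_like(B_ols)
    for j in range(B_ols.shape[0]):
        w = mahalanobis_depth(B_init[j], sigma_inv)
        norm_j = np.linalg.norm(B_ols[j])
        if norm_j > 0:
            B_hat[j] = max(0.0, 1.0 - lam * w / norm_j) * B_ols[j]
    return B_hat

# Toy usage: orthogonal design, 2 active rows out of 10, 3 tasks.
rng = np.random.default_rng(0)
n, p, q = 100, 10, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
X = np.sqrt(n) * Q                          # X'X = n * I_p
B_true = np.zeros((p, q)); B_true[:2] = 2.0
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))
B0 = (X.T @ Y) / n                          # pilot (OLS) estimate
print(np.round(one_step_depth_penalized(X, Y, B0, 0.5, np.eye(q)), 2))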
Ultrahigh-Dimensional Robust and Efficient Sparse Regression Using Non-Concave Penalized Density Power Divergence
TLDR: Proposes a sparse regression method based on the non-concave penalized density power divergence loss function, which is robust against infinitesimal contamination in very high dimensions and possesses large-sample oracle properties in an ultrahigh-dimensional regime.
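For reference, the density power divergence (Basu et al., 1998) between densities g and f, which underlies this loss, is

  d_\alpha(g, f) = \int \{ f^{1+\alpha}(z) - (1 + 1/\alpha) f^\alpha(z)\, g(z) + (1/\alpha)\, g^{1+\alpha}(z) \}\, dz, \quad \alpha > 0,

where α tunes the robustness-efficiency trade-off, recovering the likelihood (Kullback-Leibler) loss as α → 0 and the squared L2 distance at α = 1; the exact penalized regression objective used in the paper is not shown in this snippet.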
On Weighted Multivariate Sign Functions
TLDR: Extends the scope of robust multivariate methods to include robust sufficient dimension reduction and functional outlier detection, and demonstrates methods of robust location estimation and robust principal component analysis.
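For background (standard definitions, not quoted from this snippet): the spatial sign of x in R^p is

  S(x) = x / \|x\|_2 \ (x \neq 0), \qquad S(0) = 0,

and a weighted sign function scales this unit direction by a data-dependent weight, e.g. x \mapsto w(\|x\|_2)\, S(x); the specific weight families studied in the paper are not shown here.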
Joint Estimation and Inference for Data Integration Problems based on Multiple Multi-layered Gaussian Graphical Models
TLDR: Proposes a general statistical framework based on Gaussian graphical models for horizontal and vertical integration of information in such datasets, and develops a debiasing technique and asymptotic distributions for inter-layer directed edge weights that reuse already-computed neighborhood selection coefficients for nodes in the upper layer.
On Estimation and Inference in Latent Structure Random Graphs
We define a latent structure model (LSM) random graph as a random dot product graph (RDPG) in which the latent position distribution incorporates both probabilistic and geometric constraints …
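For context, in an RDPG with latent positions x_1, …, x_n drawn from a distribution F, edges are independent given the positions, with

  P[A_{ij} = 1 \mid x_i, x_j] = x_i^\top x_j,

and the LSM additionally constrains the support and structure of F (this is the standard RDPG definition; the LSM constraints are only sketched in this snippet).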
On General Notions of Depth for Regression
Depth notions in location have attracted tremendous attention in the literature. In fact, data depth and its applications remain one of the most active research topics in statistics of the last two decades …

References

Showing 1-10 of 42 references
Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima
TLDR: Proves that, under restricted strong convexity of the loss and suitable regularity conditions on the penalty, any stationary point of the composite objective function lies within statistical precision of the underlying parameter vector.
A Sparse-Group Lasso
For high-dimensional supervised learning problems, using problem-specific assumptions can often lead to greater accuracy. For problems with grouped covariates, which are believed to have sparse …
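For reference, with covariates partitioned into m groups of sizes p_l, the sparse-group lasso solves (standard form of the objective, not quoted from the snippet above)

  \min_\beta \frac{1}{2n} \Big\| y - \sum_{l=1}^m X^{(l)} \beta^{(l)} \Big\|_2^2 + (1-\alpha)\lambda \sum_{l=1}^m \sqrt{p_l}\, \|\beta^{(l)}\|_2 + \alpha \lambda \|\beta\|_1,

interpolating between the group lasso (α = 0) and the lasso (α = 1).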
Calibrating Non-convex Penalized Regression in Ultra-high Dimension
TLDR: Proves that an easy-to-compute calibrated CCCP algorithm produces a consistent solution path that contains the oracle estimator with probability approaching one, and proposes a high-dimensional BIC criterion that can be applied to the solution path to select the tuning parameter that asymptotically identifies the oracle estimator.
One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.
Fan & Li (2001) propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties …
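The local linear approximation (LLA) of Zou & Li replaces the concave penalty by its tangent at an initial estimate \tilde\beta, yielding the one-step surrogate

  \hat\beta = \arg\min_\beta \Big\{ -\ell(\beta) + n \sum_j p'_\lambda(|\tilde\beta_j|)\, |\beta_j| \Big\},

a weighted lasso problem; this is the device that the depth-penalized paper above adapts to rows of a multitask coefficient matrix.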
Multi-level Lasso for Sparse Multi-task Regression
TLDR: Bases the approach on an intuitive decomposition of the regression coefficients into a product between a component that is common to all tasks and another component that captures task specificity, which yields the Multi-level Lasso objective.
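In symbols (a generic form of the decomposition just described; the indexing is illustrative, not quoted from the paper), the coefficient of feature j in task k factors as

  \beta_j^{(k)} = \theta_j\, \gamma_j^{(k)}, \qquad \theta_j \ge 0,

so penalizing the shared factors \theta_j removes a feature from all tasks jointly, while penalizing the \gamma_j^{(k)} allows task-specific sparsity.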
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive …
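This is the paper introducing the SCAD penalty, which is defined through its derivative (standard form):

  p'_\lambda(t) = \lambda \Big\{ I(t \le \lambda) + \frac{(a\lambda - t)_+}{(a-1)\lambda}\, I(t > \lambda) \Big\}, \quad t > 0,\ a > 2,

so it matches the lasso for small |t| and applies no shrinkage beyond aλ (a = 3.7 is the usual default), giving the near-unbiasedness behind the oracle property.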
Adaptive Multi-task Sparse Learning with an Application to fMRI Study
TLDR: Establishes the asymptotic oracle property for the proposed adaptive multi-task sparse learning methods, including both the adaptive multi-task lasso and elastic net, and shows by simulation that adaptive sparse learning methods achieve superior performance and provide some insights into how the brain represents the meanings of words.
Nearly unbiased variable selection under minimax concave penalty
We propose MC+, a fast, continuous, nearly unbiased and accurate method of penalized variable selection in high-dimensional linear regression. The LASSO is fast and continuous, but biased. The bias …
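The minimax concave penalty behind MC+ has the standard form

  \rho(t; \lambda, \gamma) = \lambda \int_0^{|t|} \Big(1 - \frac{x}{\gamma\lambda}\Big)_+ dx,

which equals \lambda|t| - t^2/(2\gamma) for |t| \le \gamma\lambda and the constant \gamma\lambda^2/2 beyond, so the lasso's slope tapers linearly to zero and large coefficients incur no bias.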
Support Union Recovery in High-dimensional Multivariate Regression
Studies the multivariate group Lasso, in which block regularization based on the ℓ1/ℓ2 norm is used for support union recovery, i.e. recovery of the set of s rows for which the coefficient matrix B is nonzero. Under high-dimensional …
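Here the ℓ1/ℓ2 block norm of a p × K coefficient matrix B with rows b_1, …, b_p is

  \|B\|_{\ell_1/\ell_2} = \sum_{i=1}^p \|b_i\|_2,

so the penalty zeroes out whole rows, removing a covariate from all K regressions at once; support union recovery asks for the exact set of nonzero rows.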
A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
TLDR: Provides a unified framework for establishing consistency and convergence rates for regularized M-estimators under high-dimensional scaling; states one main theorem and shows how it can be used to re-derive several existing results and to obtain several new ones.
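The central structural condition is decomposability: a regularizer R is decomposable with respect to a subspace pair (M, \bar{M}^\perp) if

  R(u + v) = R(u) + R(v) \quad \text{for all } u \in M,\ v \in \bar{M}^\perp,

which holds, e.g., for the ℓ1 norm with coordinate subspaces and for the ℓ1/ℓ2 block norm with row-support subspaces.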