# Non-convex penalized multitask regression using data depth-based penalties

```bibtex
@inproceedings{Majumdar2018NonconvexPM,
  title  = {Non-convex penalized multitask regression using data depth-based penalties},
  author = {S. Majumdar and Snigdhansu Chatterjee},
  year   = {2018}
}
```

We propose a new class of non-convex penalty functions, based on data depth functions, for multitask sparse penalized regression. These penalties quantify the relative position of rows of the coefficient matrix with respect to a fixed distribution centered at the origin. We derive the theoretical properties of an approximate one-step sparse estimator of the coefficient matrix using a local linear approximation of the penalty function, and provide an algorithm for its computation. For orthogonal design and…
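As a rough sketch of the idea (not the paper's exact construction): taking the Mahalanobis depth as the depth function, a row of the coefficient matrix can be penalized by its lack of depth relative to a fixed distribution centered at the origin. The penalty form `lam * (1 - depth)` and all names below are illustrative assumptions:

```python
import numpy as np

def mahalanobis_depth(b, cov_inv):
    """Mahalanobis depth of point b w.r.t. a distribution centered at
    the origin with inverse covariance cov_inv: D(b) = 1 / (1 + b' C b).
    Depth is 1 at the origin and decays toward 0 far from it."""
    return 1.0 / (1.0 + b @ cov_inv @ b)

def depth_penalty(B, lam, cov_inv):
    """Row-wise depth-based penalty on the p x q coefficient matrix B:
    each row pays lam * (1 - depth), so the penalty grows with the
    row's distance from the origin but saturates at lam -- a bounded,
    non-convex penalty in the row norm."""
    return lam * sum(1.0 - mahalanobis_depth(row, cov_inv) for row in B)

def objective(Y, X, B, lam, cov_inv):
    """Penalized least-squares multitask objective (sketch)."""
    resid = Y - X @ B
    return 0.5 * np.sum(resid ** 2) + depth_penalty(B, lam, cov_inv)
```

Note the penalty vanishes at the zero matrix and is bounded above by `lam * p`, which is the qualitative behavior (flat tails, no bias on large coefficients) that motivates non-convex penalties in general.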


#### 6 Citations

Ultrahigh-Dimensional Robust and Efficient Sparse Regression Using Non-Concave Penalized Density Power Divergence

- Mathematics, Computer Science
- IEEE Transactions on Information Theory
- 2020

A sparse regression method is proposed, based on a non-concave penalized density power divergence loss function, which is robust against infinitesimal contamination in very high dimensionality and possesses large-sample oracle properties in an ultrahigh-dimensional regime.

On Weighted Multivariate Sign Functions

- Computer Science, Mathematics
- 2019

The scope of robust multivariate methods is extended to include robust sufficient dimension reduction and functional outlier detection, and methods of robust location estimation and robust principal component analysis are demonstrated.

Joint Estimation and Inference for Data Integration Problems based on Multiple Multi-layered Gaussian Graphical Models

- Computer Science, Mathematics
- 2018

This work proposes a general statistical framework based on Gaussian graphical models for horizontal and vertical integration of information in such datasets, and develops a debiasing technique and asymptotic distributions of inter-layer directed edge weights that utilize already-computed neighborhood selection coefficients for nodes in the upper layer.

On Estimation and Inference in Latent Structure Random Graphs

- Mathematics
- 2018

We define a latent structure model (LSM) random graph as a random dot product graph (RDPG) in which the latent position distribution incorporates both probabilistic and geometric constraints,…

On General Notions of Depth for Regression

- Mathematics
- 2018

Depth notions in location have attracted tremendous attention in the literature. In fact, data depth and its applications remain one of the most active research topics in statistics in the last two…

#### References

Showing 1–10 of 42 references

Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima

- Computer Science, Mathematics
- J. Mach. Learn. Res.
- 2013

Under restricted strong convexity on the loss and suitable regularity conditions on the penalty, it is proved that any stationary point of the composite objective function will lie within statistical precision of the underlying parameter vector.

A Sparse-Group Lasso

- Mathematics
- 2013

For high-dimensional supervised learning problems, often using problem-specific assumptions can lead to greater accuracy. For problems with grouped covariates, which are believed to have sparse…
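The penalty this reference introduces mixes group-level and within-group sparsity; a minimal sketch follows, where the mixing weight `alpha` and the omission of per-group size scaling are simplifying assumptions:

```python
import numpy as np

def sparse_group_lasso_penalty(beta, groups, lam, alpha):
    """Sparse-group lasso penalty: a convex combination of the
    group-lasso term (l2 norm per group, zeroing whole groups) and the
    plain lasso term (l1 norm, zeroing individual coefficients).
    `groups` is a list of index lists partitioning the coefficients."""
    group_term = sum(np.linalg.norm(beta[g]) for g in groups)
    return lam * ((1.0 - alpha) * group_term + alpha * np.abs(beta).sum())
```

Setting `alpha = 0` recovers the pure group lasso and `alpha = 1` the ordinary lasso, which is how the method interpolates between group-level and element-level selection.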

CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

- Mathematics, Medicine
- Annals of statistics
- 2013

It is proved that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path that contains the oracle estimator with probability approaching one. A high-dimensional BIC criterion is also proposed and shown to select, along the solution path, the optimal tuning parameter that asymptotically identifies the oracle estimator.

One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.

- Mathematics
- 2008

Fan & Li (2001) propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties,…
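A minimal sketch of the one-step idea under an orthogonal design, where the linearized weighted-lasso step reduces to coordinate-wise soft-thresholding; the SCAD derivative and the choice a = 3.7 follow Fan & Li (2001), while the function names are illustrative:

```python
import numpy as np

def scad_derivative(t, lam, a=3.7):
    """Derivative of the SCAD penalty at |t|: equals lam near zero
    (lasso-like), decays linearly on (lam, a*lam], and is zero beyond
    a*lam (no shrinkage of large coefficients)."""
    t = np.abs(t)
    return np.where(t <= lam, lam,
                    np.maximum(a * lam - t, 0.0) / (a - 1.0))

def one_step_lla(z, beta_init, lam):
    """One-step LLA estimator for orthogonal design: replace the
    non-concave penalty by its tangent line at an initial estimate
    beta_init, then solve the resulting weighted lasso in closed form
    by soft-thresholding z at weight p'_lam(|beta_init|)."""
    w = scad_derivative(beta_init, lam)
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)
```

Small coordinates get the full lasso threshold and are set to zero, while coordinates beyond `a * lam` pass through unshrunk, giving the near-unbiasedness that motivates the one-step construction.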

Multi-level Lasso for Sparse Multi-task Regression

- Computer Science
- ICML
- 2012

The approach is based on an intuitive decomposition of the regression coefficients into a product between a component that is common to all tasks and another component that captures task specificity, which yields the Multi-level Lasso objective.
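The product decomposition can be sketched as below; the penalty weights `lam1`, `lam2` and the function name are assumptions for illustration, not the paper's notation:

```python
import numpy as np

def multilevel_lasso_objective(Y, X, theta, Gamma, lam1, lam2):
    """Multi-level lasso sketch: the p x q coefficient matrix factors
    elementwise as B[j, k] = theta[j] * Gamma[j, k], where theta is
    shared across all tasks (theta[j] = 0 removes feature j from every
    task at once) and Gamma carries task-specific effects. Both factors
    receive l1 penalties."""
    B = theta[:, None] * Gamma
    resid = Y - X @ B
    return (0.5 * np.sum(resid ** 2)
            + lam1 * np.abs(theta).sum()
            + lam2 * np.abs(Gamma).sum())
```

The factorization is what couples the tasks: sparsity in `theta` enforces a shared support, while sparsity in `Gamma` allows per-task refinement within that support.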

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

- Mathematics
- 2001

Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally…

Adaptive Multi-task Sparse Learning with an Application to fMRI Study

- Computer Science
- SDM
- 2012

This paper establishes the asymptotic oracle property for the proposed adaptive multi-task sparse learning methods, including both the adaptive multi-task lasso and elastic-net, and shows by simulations that adaptive sparse learning methods achieve superior performance and provide some insights into how the brain represents the meanings of words.

Nearly unbiased variable selection under minimax concave penalty

- Mathematics
- 2010

We propose MC+, a fast, continuous, nearly unbiased and accurate method of penalized variable selection in high-dimensional linear regression. The LASSO is fast and continuous, but biased. The bias…
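The minimax concave penalty underlying MC+ has a simple closed form (Zhang, 2010); a sketch, with `gamma = 3.0` as an illustrative default:

```python
import numpy as np

def mcp(t, lam, gamma=3.0):
    """Minimax concave penalty: behaves like the lasso penalty lam*|t|
    near zero, bends quadratically, and is constant (gamma * lam^2 / 2)
    beyond |t| = gamma * lam -- so large coefficients incur no extra
    penalty and hence no shrinkage bias."""
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t ** 2 / (2.0 * gamma),
                    gamma * lam ** 2 / 2.0)
```

The two branches meet continuously at `|t| = gamma * lam`, and smaller `gamma` makes the penalty flatten sooner (more aggressively unbiased, less convex).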

SUPPORT UNION RECOVERY IN HIGH-DIMENSIONAL MULTIVARIATE REGRESSION

- Mathematics
- 2011

The multivariate group Lasso is studied, in which block regularization based on the ℓ1/ℓ2 norm is used for support union recovery, or recovery of the set of s rows for which B is non-zero. Under high-dimensional…
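The row-wise effect of ℓ1/ℓ2 block regularization can be illustrated by its proximal operator, which shrinks each row of B in Euclidean norm and zeroes out weak rows entirely; the helper name is hypothetical:

```python
import numpy as np

def row_soft_threshold(B, lam):
    """Proximal operator of lam * sum of row-wise l2 norms: each row of
    B is pulled toward zero by lam in Euclidean length, and rows with
    norm <= lam are set exactly to zero. Zeroed rows drop out of the
    estimated support union across all tasks."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * B
```

Because the threshold acts on whole rows rather than single entries, a feature is either selected for all response variables jointly or excluded from all of them, which is exactly the support-union structure the reference analyzes.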

A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers

- Computer Science, Mathematics
- NIPS
- 2009

A unified framework for establishing consistency and convergence rates for regularized M-estimators under high-dimensional scaling is provided; one main theorem is stated and shown to re-derive several existing results as well as to yield several new ones.