# High-Dimensional Quantile Regression: Convolution Smoothing and Concave Regularization

```bibtex
@inproceedings{Tan2021HighDimensionalQR,
  title  = {High-Dimensional Quantile Regression: Convolution Smoothing and Concave Regularization},
  author = {Kean Ming Tan and Lan Wang and Wen-Xin Zhou},
  year   = {2021}
}
```

ℓ1-penalized quantile regression is widely used for analyzing high-dimensional data with heterogeneity. It is now recognized that the ℓ1-penalty introduces non-negligible estimation bias, while a proper use of concave regularization may lead to estimators with refined convergence rates and oracle properties as the signal strengthens. Although folded concave penalized M-estimation with strongly convex loss functions has been well studied, the extant literature on quantile regression is…
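As a concrete, entirely hypothetical illustration of the two ingredients in the title, the sketch below smooths the check loss by convolution with a Gaussian kernel and minimizes the smoothed loss plus an ℓ1 penalty by proximal gradient descent. This is not the authors' implementation; the function names, the kernel choice, and the step-size rule are our assumptions.

```python
import numpy as np
from scipy.stats import norm

def smoothed_check_loss(u, tau, h):
    # Gaussian-kernel convolution smoothing of the check loss rho_tau(u) = u*(tau - 1{u<0}):
    # l_h(u) = u*(tau - Phi(-u/h)) + h*phi(u/h), where Phi/phi are the N(0,1) cdf/pdf.
    return u * (tau - norm.cdf(-u / h)) + h * norm.pdf(u / h)

def smoothed_check_grad(u, tau, h):
    # l_h'(u) = Phi(u/h) - (1 - tau); tends to the check-loss subgradient as h -> 0.
    return norm.cdf(u / h) - (1.0 - tau)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smoothed_l1_qr(X, y, tau=0.5, h=0.5, lam=0.1, n_iter=500):
    """Proximal gradient (ISTA) for l1-penalized convolution-smoothed quantile regression."""
    n, p = X.shape
    # The smoothed loss has a Lipschitz gradient: l_h''(u) = phi(u/h)/h <= 1/(h*sqrt(2*pi)),
    # so a safe step size is the reciprocal of ||X||_2^2 / (n*h*sqrt(2*pi)).
    step = n * h * np.sqrt(2.0 * np.pi) / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        resid = y - X @ beta
        grad = -X.T @ smoothed_check_grad(resid, tau, h) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```

With a Gaussian kernel both the smoothed loss and its gradient have closed forms, and the gradient Φ(u/h) − (1 − τ) recovers the usual check-loss subgradient τ − 1{u<0} as h → 0, which is what makes gradient-based solvers applicable.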

#### 2 Citations

Communication-Efficient Distributed Quantile Regression with Optimal Statistical Guarantees

- Mathematics
- 2021

We address the problem of how to achieve optimal inference in distributed quantile regression without stringent scaling conditions. This is challenging due to the nonsmooth nature of the quantile…

yaglm: a Python package for fitting and tuning generalized linear models that supports structured, adaptive and non-convex penalties

- Mathematics
- 2021

The yaglm package aims to make the broader ecosystem of modern generalized linear models accessible to data analysts and researchers. This ecosystem encompasses a range of loss functions (e.g.…

#### References

Showing 1–10 of 43 references

Smoothed quantile regression with large-scale inference

- Mathematics
- 2020

Quantile regression is a powerful tool for learning the relationship between a scalar response and a multivariate predictor in the presence of heavier tails and/or data heterogeneity. In the present…

Globally Adaptive Quantile Regression with Ultra-High Dimensional Data

- Mathematics, Computer Science
- Annals of Statistics
- 2015

This article proposes a new penalization framework for quantile regression in the high-dimensional setting, employing adaptive ℓ1 penalties, and proposes a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels.

Smoothing Quantile Regressions

- Economics, Mathematics
- Journal of Business & Economic Statistics
- 2019

We propose to smooth the objective function, rather than only the indicator inside the check function, in a linear quantile regression context. Not only does the resulting smoothed quantile…

ADMM for High-Dimensional Sparse Penalized Quantile Regression

- Mathematics, Computer Science
- Technometrics
- 2018

This work introduces fast alternating direction method of multipliers (ADMM) algorithms for computing the sparse penalized quantile regression and demonstrates their competitive performance: they significantly outperform several other fast solvers for high-dimensional penalized quantile regression.
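To give a rough idea of what such an ADMM scheme can look like (this is not the algorithm from the paper above; the splitting, parameter names, and defaults are our assumptions), one can introduce auxiliary variables for the residuals and for the penalized coefficients, so that each subproblem has a closed-form update:

```python
import numpy as np

def prox_check(v, tau, alpha):
    # Proximal operator of alpha * rho_tau, where rho_tau(u) = u*(tau - 1{u<0}):
    # shifts v by alpha*tau (if v > alpha*tau), by -alpha*(1-tau) (if v < -alpha*(1-tau)), else 0.
    return v - np.clip(v, -alpha * (1.0 - tau), alpha * tau)

def admm_sparse_qr(X, y, tau=0.5, lam=0.1, sigma=1.0, n_iter=2000):
    """ADMM sketch for min (1/n) sum_i rho_tau(z_i) + lam*||w||_1
       subject to z = y - X @ beta and w = beta (scaled dual form)."""
    n, p = X.shape
    beta, w = np.zeros(p), np.zeros(p)
    z = y.copy()
    u = np.zeros(n)   # scaled dual for the residual constraint
    v = np.zeros(p)   # scaled dual for the coefficient constraint
    # beta-update solves (X'X + I) beta = X'(y - z - u) + (w - v); factor once.
    chol = np.linalg.cholesky(X.T @ X + np.eye(p))
    for _ in range(n_iter):
        rhs = X.T @ (y - z - u) + (w - v)
        beta = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))
        z = prox_check(y - X @ beta - u, tau, 1.0 / (n * sigma))
        w = np.sign(beta + v) * np.maximum(np.abs(beta + v) - lam / sigma, 0.0)
        u += X @ beta + z - y
        v += beta - w
    return w
```

The appeal of this kind of splitting is that the nonsmooth check loss and the ℓ1 penalty are each isolated in a subproblem with a cheap closed-form proximal update, while the only linear-algebra cost is a Cholesky factorization computed once up front.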

High Dimensional Latent Panel Quantile Regression with an Application to Asset Pricing

- Mathematics, Economics
- 2019

We propose a generalization of the linear panel quantile regression model to accommodate both *sparse* and *dense* parts: sparse means while the number of covariates available is large,…

Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection.

- Mathematics, Medicine
- Journal of the Royal Statistical Society: Series B (Statistical Methodology)
- 2011

A data-driven weighted linear combination of convex loss functions, together with a weighted ℓ1-penalty, is proposed, which possesses both model selection consistency and estimation efficiency for the true non-zero coefficients.

Statistical consistency and asymptotic normality for high-dimensional robust M-estimators

- Mathematics, Computer Science
- arXiv
- 2015

This work establishes a form of local statistical consistency for the penalized regression estimators under fairly mild conditions on the error distribution, and shows that analysis of the local curvature of the loss function has useful consequences for optimization when the robust regression function and/or regularizer is nonconvex and the objective function possesses stationary points outside the local region.

Quantile Regression for Analyzing Heterogeneity in Ultra-High Dimension

- Mathematics, Medicine
- Journal of the American Statistical Association
- 2012

A novel sufficient optimality condition that relies on a convex differencing representation of the penalized loss function and the subdifferential calculus is introduced, enabling the oracle property for sparse quantile regression in ultra-high dimensions under relaxed conditions.

Strong Oracle Optimality of Folded Concave Penalized Estimation

- Mathematics, Medicine
- Annals of Statistics
- 2014

A unified theory is provided to show explicitly how to obtain the oracle solution via the local linear approximation (LLA) algorithm for a folded concave penalized estimation problem; it is shown that as long as the problem is localizable and the oracle estimator is well behaved, it can be obtained by using the one-step local linear approximation.
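The local linear approximation idea can be sketched as follows: each LLA step replaces the folded concave penalty (here SCAD) by a weighted ℓ1 penalty whose weights are the penalty derivative at the current estimate, so large coefficients become effectively unpenalized and escape the lasso's shrinkage bias. For brevity this hypothetical sketch uses a least-squares loss rather than the quantile loss, and all names are ours.

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    # Derivative of the SCAD penalty: lam on [0, lam], linearly decaying to 0 on
    # [lam, a*lam], and 0 beyond a*lam -- so strong signals get zero weight.
    t = np.abs(t)
    return lam * ((t <= lam) + np.maximum(a * lam - t, 0.0) / ((a - 1) * lam) * (t > lam))

def weighted_lasso_ista(X, y, w, n_iter=500):
    """ISTA for min (1/2n)||y - X beta||^2 + sum_j w_j * |beta_j|."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ beta) / n
        v = beta - step * grad
        beta = np.sign(v) * np.maximum(np.abs(v) - step * w, 0.0)
    return beta

def lla(X, y, lam, n_steps=3):
    # Initialize with the plain lasso (constant weights), then iterate LLA:
    # re-solve a weighted lasso with weights from the SCAD derivative.
    beta = weighted_lasso_ista(X, y, np.full(X.shape[1], lam))
    for _ in range(n_steps):
        beta = weighted_lasso_ista(X, y, scad_deriv(beta, lam))
    return beta
```

On a strong sparse signal the lasso estimate is biased toward zero by roughly the penalty level, while one or two LLA refits remove that bias on the detected support, which is the "oracle" behavior the theory above formalizes.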