High‐dimensional quantile regression: Convolution smoothing and concave regularization

@article{Tan2021HighdimensionalQR,
  title={High‐dimensional quantile regression: Convolution smoothing and concave regularization},
  author={Kean Ming Tan and Lan Wang and Wen‐Xin Zhou},
  journal={Journal of the Royal Statistical Society: Series B (Statistical Methodology)},
  year={2021}
}
  • Published 12 September 2021
ℓ1-penalized quantile regression is widely used for analyzing high-dimensional data with heterogeneity. It is now recognized that the ℓ1-penalty introduces non-negligible estimation bias, while a proper use of concave regularization may lead to estimators with refined convergence rates and oracle properties as the signal strengthens. Although folded concave penalized M-estimation with strongly convex loss functions has been well studied, the extant literature on quantile regression is…
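
The smoothing device behind the paper can be sketched in a few lines of NumPy/SciPy. The snippet below is a minimal illustration, not the authors' implementation: the Gaussian kernel choice, the function names, and all tuning values (bandwidth h, penalty lam, the step size, SCAD's a=3.7) are illustrative assumptions. With a Gaussian kernel, the convolution-smoothed check loss has the closed form l_h(u) = u(tau − Φ(−u/h)) + h·φ(u/h), whose derivative Φ(u/h) + tau − 1 is smooth, so a simple proximal gradient (ISTA) loop handles the ℓ1 penalty.

```python
import numpy as np
from scipy.stats import norm

def smoothed_check_loss(u, tau, h):
    """Gaussian convolution smoothing of the check loss rho_tau:
    l_h(u) = u * (tau - Phi(-u/h)) + h * phi(u/h)."""
    return u * (tau - norm.cdf(-u / h)) + h * norm.pdf(u / h)

def scad_derivative(beta, lam, a=3.7):
    """Derivative of the SCAD penalty; usable as per-coordinate weights
    in an iteratively reweighted l1 (local linear approximation) step."""
    b = np.abs(beta)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1.0))

def fit_smoothed_qr_l1(X, y, tau=0.5, lam=0.1, h=0.25, n_iter=500):
    """Proximal gradient (ISTA) for smoothed quantile loss + l1 penalty.
    lam may be a scalar or a length-p array of coordinate-wise weights."""
    n, p = X.shape
    # l_h'' is bounded by 1/(h*sqrt(2*pi)), so this step size is conservative.
    step = n * h / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        u = y - X @ beta
        grad = -X.T @ (norm.cdf(u / h) + tau - 1.0) / n   # smooth score
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return beta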
2 Citations


Communication-Efficient Distributed Quantile Regression with Optimal Statistical Guarantees
  • H. Battey, Kean Ming Tan, Wen-Xin Zhou
  • Mathematics
  • 2021
We address the problem of how to achieve optimal inference in distributed quantile regression without stringent scaling conditions. This is challenging due to the nonsmooth nature of the quantile…
yaglm: a Python package for fitting and tuning generalized linear models that supports structured, adaptive and non-convex penalties
The yaglm package aims to make the broader ecosystem of modern generalized linear models accessible to data analysts and researchers. This ecosystem encompasses a range of loss functions (e.g.…

References

Showing 1–10 of 63 references
Smoothed quantile regression with large-scale inference
Quantile regression is a powerful tool for learning the relationship between a scalar response and a multivariate predictor in the presence of heavier tails and/or data heterogeneity. In the present…
Globally Adaptive Quantile Regression with Ultra-High Dimensional Data
TLDR: This article proposes a new penalization framework for quantile regression in the high-dimensional setting, employing adaptive ℓ1 penalties, and proposes a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels.
Smoothing Quantile Regressions
We propose to smooth the objective function, rather than only the indicator on the check function, in a linear quantile regression context. Not only does the resulting smoothed quantile…
ADMM for High-Dimensional Sparse Penalized Quantile Regression
TLDR: This work introduces fast alternating direction method of multipliers (ADMM) algorithms for computing the sparse penalized quantile regression and demonstrates the competitive performance of this approach: it significantly outperforms several other fast solvers for high-dimensional penalized quantile regression.
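
As a rough illustration of the kind of splitting such ADMM solvers use, here is a minimal consensus-ADMM sketch for ℓ1-penalized (unsmoothed) quantile regression. It is not the cited paper's algorithm; the fixed penalty parameter sigma, the fixed iteration count, and all names are assumptions. It splits the problem as min (1/n) Σ ρ_τ(r_i) + λ‖z‖₁ subject to r = y − Xβ and z = β, so each subproblem has a closed form.

```python
import numpy as np

def prox_check(v, tau, alpha):
    """Elementwise prox of the check loss rho_tau with scale alpha."""
    return np.where(v > alpha * tau, v - alpha * tau,
                    np.where(v < -alpha * (1 - tau), v + alpha * (1 - tau), 0.0))

def admm_l1_qr(X, y, tau=0.5, lam=0.1, sigma=1.0, n_iter=500):
    """Consensus ADMM for (1/n) sum rho_tau(y - X beta) + lam * ||beta||_1."""
    n, p = X.shape
    beta, z, r = np.zeros(p), np.zeros(p), y.copy()
    u, v = np.zeros(n), np.zeros(p)          # scaled dual variables
    L = np.linalg.cholesky(X.T @ X + np.eye(p))   # cached for every beta-update
    for _ in range(n_iter):
        rhs = X.T @ (y - r + u) + (z - v)
        beta = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        r = prox_check(y - X @ beta + u, tau, 1.0 / (n * sigma))
        z = np.sign(beta + v) * np.maximum(np.abs(beta + v) - lam / sigma, 0.0)
        u += y - X @ beta - r                 # dual updates for both constraints
        v += beta - z
    return z
```

In practice one would replace the fixed iteration count with stopping criteria based on primal and dual residuals, and tune sigma.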
High-Dimensional Structured Quantile Regression
TLDR: This work considers the problem of linear quantile regression in high dimensions, where the number of predictor variables is much higher than the number of samples available for parameter estimation, and assumes the true parameter to have some structure characterized as having a small value according to some atomic norm R(·).
High Dimensional Latent Panel Quantile Regression with an Application to Asset Pricing
We propose a generalization of the linear panel quantile regression model to accommodate both sparse and dense parts: sparse means that while the number of covariates available is large,…
Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection
TLDR: A data-driven weighted linear combination of convex loss functions, together with a weighted ℓ1-penalty, is proposed, which possesses both model selection consistency and estimation efficiency for the true non-zero coefficients.
Statistical consistency and asymptotic normality for high-dimensional robust M-estimators
TLDR: This work establishes a form of local statistical consistency for penalized regression estimators under fairly mild conditions on the error distribution; its analysis of the local curvature of the loss function has useful consequences for optimization when the robust regression function and/or the regularizer is nonconvex and the objective function possesses stationary points outside the local region.
Quantile Regression for Analyzing Heterogeneity in Ultra-High Dimension
TLDR: A novel sufficient optimality condition, relying on a convex differencing representation of the penalized loss function and the subdifferential calculus, is introduced; it enables the oracle property for sparse quantile regression in the ultra-high dimension under relaxed conditions.
ℓ1-penalized quantile regression in high-dimensional sparse models