Corpus ID: 237491706

High-Dimensional Quantile Regression: Convolution Smoothing and Concave Regularization

@inproceedings{Tan2021HighDimensionalQR,
  title={High-Dimensional Quantile Regression: Convolution Smoothing and Concave Regularization},
  author={Kean Ming Tan and Lan Wang and Wen-Xin Zhou},
  year={2021}
}
  • Published 12 September 2021
  • Mathematics
ℓ1-penalized quantile regression is widely used for analyzing high-dimensional data with heterogeneity. It is now recognized that the ℓ1-penalty introduces non-negligible estimation bias, while a proper use of concave regularization may lead to estimators with refined convergence rates and oracle properties as the signal strengthens. Although folded concave penalized M-estimation with strongly convex loss functions has been well studied, the extant literature on quantile regression is…
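The first ingredient in the title, convolution smoothing, replaces the nondifferentiable check loss rho_tau(u) = u(tau - 1{u < 0}) with its convolution with a kernel density, giving a smooth convex surrogate. As an illustrative sketch only (the Gaussian kernel choice and all function names here are assumptions, not the paper's code), the Gaussian-smoothed loss has the closed form l_h(u) = u(tau - Phi(-u/h)) + h*phi(u/h):

```python
import math

def check_loss(u, tau):
    """Standard quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def _phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def smoothed_check_loss(u, tau, h):
    """Check loss convolved with a Gaussian kernel of bandwidth h > 0."""
    return u * (tau - _Phi(-u / h)) + h * _phi(u / h)

def smoothed_grad(u, tau, h):
    """Derivative of the smoothed loss: tau - Phi(-u/h), continuous at 0."""
    return tau - _Phi(-u / h)
```

The smoothed gradient tau - Phi(-u/h) is continuous at u = 0, unlike the check-loss subgradient, which is what makes fast gradient-based solvers applicable; as h → 0 the surrogate recovers the check loss.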
2 Citations


Communication-Efficient Distributed Quantile Regression with Optimal Statistical Guarantees
  • H. Battey, Kean Ming Tan, Wen-Xin Zhou
  • Mathematics
  • 2021
We address the problem of how to achieve optimal inference in distributed quantile regression without stringent scaling conditions. This is challenging due to the nonsmooth nature of the quantile …
yaglm: a Python package for fitting and tuning generalized linear models that supports structured, adaptive and non-convex penalties
The yaglm package aims to make the broader ecosystem of modern generalized linear models accessible to data analysts and researchers. This ecosystem encompasses a range of loss functions (e.g. …)

References

SHOWING 1-10 OF 43 REFERENCES
Smoothed quantile regression with large-scale inference
Quantile regression is a powerful tool for learning the relationship between a scalar response and a multivariate predictor in the presence of heavier tails and/or data heterogeneity. In the present …
GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA.
This article proposes a new penalization framework for quantile regression in the high-dimensional setting, employing adaptive ℓ1 penalties, and proposes a uniform selector of the tuning parameter across a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels.
Smoothing Quantile Regressions
  • Marcelo Fernandes, E. Guerre, E. Horta
  • Economics, Mathematics
  • Journal of Business & Economic Statistics
  • 2019
We propose to smooth the objective function, rather than only the indicator on the check function, in a linear quantile regression context. Not only does the resulting smoothed quantile …
ADMM for High-Dimensional Sparse Penalized Quantile Regression
This work introduces fast alternating direction method of multipliers (ADMM) algorithms for computing sparse penalized quantile regression and demonstrates their competitive performance: they significantly outperform several other fast solvers for high-dimensional penalized quantile regression.
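The ADMM approach to sparse penalized quantile regression can be sketched as follows. This is a minimal illustrative implementation, not the cited authors' solver: it splits min over beta of sum_i rho_tau(y_i - x_i'beta) + lam*||beta||_1 by introducing a residual variable r = y - X beta and a copy z = beta, so every subproblem has a closed form (the function names and the fixed penalty parameter sigma are assumptions):

```python
import numpy as np

def prox_check(w, tau, sigma):
    """Elementwise prox of the check loss: argmin_r rho_tau(r) + (sigma/2)(r - w)^2,
    an asymmetric two-sided soft-threshold."""
    return np.where(w > tau / sigma, w - tau / sigma,
                    np.where(w < -(1.0 - tau) / sigma,
                             w + (1.0 - tau) / sigma, 0.0))

def soft(w, t):
    """Elementwise soft-thresholding, the prox of t * |.|."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def admm_sparse_qr(X, y, tau=0.5, lam=0.1, sigma=1.0, n_iter=1000):
    """ADMM for min_beta sum_i rho_tau(y_i - x_i'beta) + lam*||beta||_1,
    using the splitting X beta + r = y, beta = z (scaled dual form)."""
    n, p = X.shape
    beta, z, v = np.zeros(p), np.zeros(p), np.zeros(p)
    r, u = y.copy(), np.zeros(n)
    A = X.T @ X + np.eye(p)            # normal equations for the beta-update
    for _ in range(n_iter):
        beta = np.linalg.solve(A, X.T @ (y - r - u) + (z - v))
        r = prox_check(y - X @ beta - u, tau, sigma)
        z = soft(beta + v, lam / sigma)
        u += X @ beta + r - y          # dual ascent for X beta + r = y
        v += beta - z                  # dual ascent for beta = z
    return z

```

Each iteration costs one p-by-p linear solve plus elementwise thresholding, which is what makes ADMM attractive at scale; the returned z is exactly sparse because it passes through a soft-threshold.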
High Dimensional Latent Panel Quantile Regression with an Application to Asset Pricing
We propose a generalization of the linear panel quantile regression model to accommodate both sparse and dense parts: sparse means that while the number of covariates available is large, …
Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection.
A data-driven weighted linear combination of convex loss functions, together with a weighted ℓ1-penalty, is proposed; it possesses both model selection consistency and estimation efficiency for the true non-zero coefficients.
Statistical consistency and asymptotic normality for high-dimensional robust M-estimators
This work establishes a form of local statistical consistency for penalized regression estimators under fairly mild conditions on the error distribution, and shows that analysis of the local curvature of the loss function has useful consequences for optimization when the robust regression function and/or regularizer is nonconvex and the objective possesses stationary points outside the local region.
Quantile Regression for Analyzing Heterogeneity in Ultra-High Dimension
A novel sufficient optimality condition that relies on a convex differencing representation of the penalized loss function and on subdifferential calculus is introduced; it enables the oracle property for sparse quantile regression in the ultra-high dimension under relaxed conditions.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
A unified theory is provided to show explicitly how to obtain the oracle solution via the local linear approximation (LLA) algorithm for a folded concave penalized estimation problem; as long as the problem is localizable and the oracle estimator is well behaved, the oracle solution can be obtained by using the one-step local linear approximation.
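Folded concave penalties such as SCAD enter the local linear approximation algorithm through their derivative, which supplies the weights of a reweighted ℓ1 problem at each step. A sketch of the SCAD derivative following Fan and Li's definition (the function name and the conventional default a = 3.7 are illustrative choices, not this paper's code):

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative p'_lam(t) of the SCAD penalty at t >= 0:
    lam on [0, lam], linearly decaying on (lam, a*lam), and 0 beyond a*lam."""
    t = np.abs(t)
    return np.where(t <= lam, lam,
                    np.where(t < a * lam, (a * lam - t) / (a - 1.0), 0.0))
```

In the LLA algorithm, iteration k solves a weighted ℓ1 problem with weights w_j = p'_lam(|beta_j^(k)|); coefficients that are already large receive weight 0 and thus escape the shrinkage bias of the plain ℓ1 penalty.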