Adaptive Huber Regression

@article{Sun2017AdaptiveHR,
  title={Adaptive Huber Regression},
  author={Qiang Sun and Wen-Xin Zhou and Jianqing Fan},
  journal={Journal of the American Statistical Association},
  year={2020},
  volume={115},
  pages={254--265}
}
Abstract: Big data can easily be contaminated by outliers or contain variables with heavy-tailed distributions, which makes many conventional methods inadequate. To address this challenge, we propose adaptive Huber regression for robust estimation and inference. The key observation is that the robustification parameter should adapt to the sample size, dimension, and moments for an optimal tradeoff between bias and robustness. Our theoretical framework deals with heavy-tailed distributions with…
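To make the bias-robustness tradeoff concrete, below is a minimal Python sketch of Huber regression with a robustification parameter that grows with the sample size. The plug-in rule tau = sigma_hat * sqrt(n / (d + log n)) is an illustrative choice patterned on the rates in this literature, not the authors' exact calibration, and sigma_hat is a crude noise-scale proxy.

import numpy as np
from scipy.optimize import minimize

def huber_loss(r, tau):
    # Quadratic for |r| <= tau, linear beyond: large residuals are
    # down-weighted instead of squared.
    a = np.abs(r)
    return np.where(a <= tau, 0.5 * r ** 2, tau * a - 0.5 * tau ** 2)

def adaptive_huber_fit(X, y, tau=None):
    n, d = X.shape
    if tau is None:
        # Illustrative plug-in rule (an assumption, not the paper's exact
        # recipe): tau grows with the effective sample size n / (d + log n).
        sigma_hat = np.std(y - np.mean(y))
        tau = sigma_hat * np.sqrt(n / (d + np.log(n)))
    objective = lambda beta: np.sum(huber_loss(y - X @ beta, tau))
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS warm start
    return minimize(objective, beta0, method="BFGS").x, tau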

Adaptive Huber Regression on Markov-dependent Data

The results show that Markov dependence affects both the adaptation of the robustification parameter and the estimation of the regression coefficients: the sample size should be discounted by a factor depending on the spectral gap of the underlying Markov chain.
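Schematically, and only as an illustration (the displayed factor is an assumption patterned on Markov-chain concentration bounds, not the paper's exact constant), the i.i.d. calibration of $\tau$ can be reused with the sample size $n$ replaced by a discounted effective sample size driven by the spectral gap $1-\lambda$:

\[
n_{\mathrm{eff}} \asymp \frac{1-\lambda}{1+\lambda}\, n,
\qquad
\tau \asymp \sigma \sqrt{\frac{n_{\mathrm{eff}}}{\log n_{\mathrm{eff}}}}.
\]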

Support estimation in high-dimensional heteroscedastic mean regression

This paper considers a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors, and investigates support estimation in this framework using a strictly convex, smooth variant of the Huber loss function with a tuning parameter depending on the parameters of the problem, together with the adaptive LASSO penalty for computational efficiency.
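As a sketch of the estimator just described, writing $\ell_{\tau}^{\mathrm{sm}}$ for a smooth, strictly convex Huber-type loss and using the standard adaptive-LASSO weights built from a pilot estimate $\tilde\beta$ (the weight rule is assumed here for illustration):

\[
\hat\beta \in \arg\min_{\beta}\; \frac{1}{n}\sum_{i=1}^{n}
\ell_{\tau}^{\mathrm{sm}}\bigl(y_i - x_i^{\top}\beta\bigr)
+ \lambda \sum_{j=1}^{d} w_j\,|\beta_j|,
\qquad w_j = \frac{1}{|\tilde\beta_j|}.
\]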

User-Friendly Covariance Estimation for Heavy-Tailed Distributions

This work introduces element-wise and spectrum-wise truncation operators, as well as their $M$-estimator counterparts, to robustify the sample covariance matrix and proposes tuning-free procedures that automatically calibrate the tuning parameters.
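A minimal Python sketch of the element-wise idea, assuming a user-supplied truncation level tau (the paper's tuning-free procedures calibrate this automatically; here it is left as a free parameter):

import numpy as np

def truncate(x, tau):
    # Shrink each entry toward zero, capping its magnitude at tau.
    return np.sign(x) * np.minimum(np.abs(x), tau)

def truncated_covariance(X, tau):
    # Element-wise truncated covariance: cap each rank-one term
    # (x_i - xbar)(x_i - xbar)^T entry-wise before averaging, so a single
    # heavy-tailed observation cannot dominate any matrix entry.
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    return sum(truncate(np.outer(xi, xi), tau) for xi in Xc) / n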

Robust Fused Lasso Penalized Huber Regression with Nonasymptotic Property and Implementation Studies

An adaptive Huber regression for robust estimation and inference is proposed in which the fused lasso penalty is used to encourage sparsity of the coefficients as well as sparsity of their differences, i.e., local constancy of the coefficient profile.
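Schematically, the objective combines the Huber loss $\ell_{\tau}$ with the two fused-lasso penalties (the generic form below is assumed as an illustration of the description above):

\[
\min_{\beta}\; \sum_{i=1}^{n} \ell_{\tau}\bigl(y_i - x_i^{\top}\beta\bigr)
+ \lambda_1 \lVert\beta\rVert_1
+ \lambda_2 \sum_{j=2}^{d} \lvert\beta_j - \beta_{j-1}\rvert,
\]

where $\lambda_1$ controls sparsity of the coefficients and $\lambda_2$ sparsity of their successive differences.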

Robust Variable Selection and Estimation Via Adaptive Elastic Net S-Estimators for Linear Regression

Heavy-tailed error distributions and predictors with anomalous values are ubiquitous in high-dimensional regression problems and can seriously jeopardize the validity of statistical analyses if not properly addressed.

Tuning-Free Huber Regression: A Non-asymptotic Perspective of Robustness

A new data-driven tuning scheme is proposed to choose the robustification parameter for Huber-type sub-Gaussian estimators in three fundamental problems: mean estimation, linear regression, and sparse regression in high dimensions.
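One way a data-driven scheme of this kind can be rendered, under the assumption that tau solves a censored second-moment equation in the residuals (the specific equation and the constant z below are illustrative, not necessarily the authors' exact proposal):

import numpy as np

def update_tau(residuals, z):
    # Solve mean(min(r_i^2, tau^2)) / tau^2 = z / n for tau by bisection;
    # the left side decreases in tau, from about 1 down toward 0.
    r2 = residuals ** 2
    n = len(r2)
    f = lambda t: np.mean(np.minimum(r2, t * t)) / (t * t) - z / n
    lo = 1e-8
    hi = float(np.sqrt(max(r2.max(), n * r2.mean() / z))) + 1.0  # f(hi) <= 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# In practice one alternates: fit a Huber regression at the current tau,
# recompute residuals, update tau, and repeat until tau stabilizes.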

Robust regression with covariate filtering: Heavy tails and adversarial contamination

This work shows how to modify the Huber regression, least trimmed squares, and least absolute deviation estimators to obtain estimators which are simultaneously computationally and statistically efficient in the stronger contamination model.

Robust High-dimensional Tuning Free Multiple Testing

This study develops a Berry-Esseen inequality and Cramér-type moderate deviations for the HL estimator based on a newly developed non-asymptotic Bahadur representation, builds data-driven confidence intervals via a weighted bootstrap approach, and convincingly shows that the resulting tuning-free and moment-free methods control the false discovery proportion at a prescribed level.
...

References

Showing 1-10 of 76 references

Adaptive Robust Variable Selection

The theoretical results also reveal that adaptive choice of the weight vector is essential for the WR-Lasso to enjoy these nice asymptotic properties, and a two-step procedure called the adaptive robust Lasso (AR-Lasso) is proposed and justified theoretically to possess the oracle property and asymptotic normality.

Estimation of high dimensional mean regression in the absence of symmetry and light tail assumptions

A penalized Huber loss with a diverging parameter is proposed to reduce the biases created by the traditional Huber loss, and the resulting penalized robust approximate (RA) quadratic loss is called the RA-lasso, which is compared with other regularized robust estimators based on quantile regression and least absolute deviation regression.

A New Perspective on Robust M-Estimation: Finite Sample Theory and Applications to Dependence-Adjusted Multiple Testing

This paper develops nonasymptotic concentration results for such an adaptive Huber estimator, with the tuning parameter adapted to the sample size, dimension, and variance of the noise, and shows that the robust dependence-adjusted procedure asymptotically controls the overall false discovery proportion at the nominal level under mild moment conditions.

Globally Adaptive Quantile Regression with Ultra-High Dimensional Data

This article proposes a new penalization framework for quantile regression in the high-dimensional setting, employing adaptive L1 penalties, and a uniform selector of the tuning parameter for a set of quantile levels, to avoid some of the potential problems with model selection at individual quantile levels.

L1-Penalized Quantile Regression in High Dimensional Sparse Models

This work proposes a pivotal, data-driven choice of the regularization parameter and shows that it satisfies certain theoretical constraints, evaluates the performance of L1-QR in a Monte Carlo experiment, and provides an application to the analysis of international economic growth.

Statistical Learning with Sparsity: The Lasso and Generalizations

Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.

Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries

Estimation of the covariance matrix has attracted a lot of attention from the statistical research community over the years, partially due to important applications such as Principal Component Analysis.

Parametric estimation. Finite sample theory

The paper aims at reconsidering the famous Le Cam LAN theory. The main features of the approach which make it different from the classical one are: (1) the study is non-asymptotic, that is, the sample size is fixed and does not tend to infinity.
...