Robust linear regression: optimal rates in polynomial time

@article{Bakshi2021RobustLR,
  title={Robust linear regression: optimal rates in polynomial time},
  author={Ainesh Bakshi and Adarsh Prasad},
  journal={Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing},
  year={2021}
}
  • Published 29 June 2020
We obtain robust and computationally efficient estimators for learning several linear models that achieve the statistically optimal convergence rate under minimal distributional assumptions. Concretely, we assume our data is drawn from a k-hypercontractive distribution and that an ε-fraction is adversarially corrupted. We then describe an estimator that converges to the optimal least-squares minimizer for the true distribution at a rate proportional to ε^(2−2/k), when the noise is independent of the…
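To make the contamination model in the abstract concrete, here is a minimal simulation sketch. It is not from the paper; the sample size, dimension, and corruption pattern are illustrative assumptions. It only shows how an ε-fraction of adversarial points drags ordinary least squares away from the true parameter, motivating the need for a robust estimator.

```python
# Hypothetical simulation of the eps-contamination setup described above;
# this illustrates the model only, not the paper's estimator.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 2000, 5, 0.05            # sample size, dimension, corruption level
beta = rng.normal(size=d)            # true regression vector

X = rng.normal(size=(n, d))          # inlier covariates
y = X @ beta + rng.normal(size=n)    # labels with independent noise

m = int(eps * n)                     # adversary corrupts an eps-fraction
X[:m] = 10.0 * rng.normal(size=(m, d))
y[:m] = -50.0                        # adversarially chosen labels

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS parameter error:", np.linalg.norm(beta_ols - beta))
```

The paper's guarantee is that a suitable estimator's error scales as ε^(2−2/k) under k-hypercontractivity; the sketch above only shows why plain OLS has no such guarantee.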

Citations

Robust Regression Revisited: Acceleration and Improved Estimation Rates
TLDR
An identifiability proof introduced in the context of the sum-of-squares algorithm of [BP21], which achieved optimal error rates but required large polynomial runtime and sample complexity, is reinterpreted within the Sever framework, yielding a dramatically faster and more sample-efficient algorithm under fewer distributional assumptions.
Robust regression with covariate filtering: Heavy tails and adversarial contamination
TLDR
This work shows how to modify the Huber regression, least trimmed squares, and least absolute deviation estimators to obtain estimators which are simultaneously computationally and statistically efficient in the stronger contamination model.
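As a concrete illustration of one estimator named in that summary, here is a small example using scikit-learn's HuberRegressor. This is the vanilla Huber estimator, not the modified version from the paper, and the data and corruption pattern are illustrative assumptions.

```python
# Minimal comparison of OLS vs. Huber regression under label corruption.
# The Huber loss grows linearly for large residuals, so corrupted labels
# exert far less influence than under the squared loss.
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.normal(size=500)
y[:25] += 100.0                      # corrupt 5% of the labels

ols = LinearRegression().fit(X, y)
hub = HuberRegressor().fit(X, y)
print("OLS coefficient error:  ", np.linalg.norm(ols.coef_ - beta))
print("Huber coefficient error:", np.linalg.norm(hub.coef_ - beta))
```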
Robust and Sparse Estimation of Linear Regression Coefficients with Heavy-tailed Noises and Covariates
Robust and sparse estimation of linear regression coefficients is investigated. The situation addressed by the present paper is that covariates and noises are sampled from heavy-tailed distributions…
Trimmed Maximum Likelihood Estimation for Robust Learning in Generalized Linear Models
TLDR
Under label corruptions, a classical heuristic called the iterative trimmed maximum likelihood estimator is proved to achieve minimax near-optimal risk on a wide range of generalized linear models, including Gaussian regression, Poisson regression and Binomial regression.
Robust Sparse Mean Estimation via Sum of Squares
TLDR
This work develops the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance for subgaussian distributions on R^d with “certifiably bounded” t-th moments and sufficiently light tails.
Provably Auditing Ordinary Least Squares in Low Dimensions
TLDR
Algorithms are given for provably estimating the minimum number of samples that must be removed so that rerunning an ordinary least squares linear regression overturns its conclusion; the algorithms are applied to the Boston Housing dataset.
Adversarial Robust and Sparse Estimation of Linear Regression Coefficient
TLDR
This work considers robust low-rank matrix estimation as a trace regression when outputs are contaminated by adversaries, and proposes a simple approach based on a combination of the Huber loss function and the nuclear norm penalization.
Polynomial-Time Sum-of-Squares Can Robustly Estimate Mean and Covariance of Gaussians Optimally
TLDR
This work revisits the problem of estimating the mean and covariance of an unknown d-dimensional Gaussian distribution in the presence of an ε-fraction of adversarial outliers, and gives a new, simple analysis of the same canonical sum-of-squares relaxation used in Kothari and Steurer (2017) and Bakshi and Kothari (2020), showing that the algorithm achieves the same error, sample complexity, and running time guarantees.
Corruption-Robust Offline Reinforcement Learning
TLDR
It is shown that a worst-case Ω(Hdε) optimality gap is unavoidable in a linear MDP of dimension d, even if the adversary only corrupts the reward element in a tuple, which implies that corruption-robust offline RL is a strictly harder problem.
Online and Distribution-Free Robustness: Regression and Contextual Bandits with Huber Contamination
TLDR
This work revisits two classic high-dimensional online learning problems, linear regression and contextual bandits, from the perspective of adversarial robustness; the algorithms are based on a novel alternating minimization scheme that interleaves ordinary least squares with a simple convex program that finds the optimal reweighting of the distribution under a spectral constraint.
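The alternating scheme described in that summary can be caricatured in a few lines. The sketch below is a heavily simplified stand-in: the paper's reweighting step solves a convex program under a spectral constraint, which is replaced here by a crude residual-quantile rule, so `alternating_robust_ols` is a hypothetical helper for intuition only.

```python
# Simplified alternating-minimization loop: weighted least squares
# interleaved with a reweighting step. The residual-quantile rule below
# replaces the paper's spectrally constrained convex program.
import numpy as np

def alternating_robust_ols(X, y, eps, iters=10):
    n = X.shape[0]
    w = np.full(n, 1.0 / n)                    # start from uniform weights
    for _ in range(iters):
        Xw = X * w[:, None]                    # weighted least squares step
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        r = np.abs(y - X @ beta)               # reweighting step: zero out the
        cutoff = np.quantile(r, 1.0 - eps)     # eps-fraction of largest residuals
        w = np.where(r <= cutoff, 1.0, 0.0)
        w /= w.sum()
    return beta
```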
...

References

Showing 1-10 of 65 references
Robust estimation via generalized quasi-gradients
TLDR
It is shown that generalized quasi-gradients exist and can be used to construct efficient algorithms; these algorithms are simpler than previous ones in the literature, and for linear regression they improve the estimation error from O(√ε) to the optimal rate.
Efficient Algorithms for Outlier-Robust Regression
TLDR
This work gives the first polynomial-time algorithm for performing linear or polynomial regression resilient to adversarial corruptions in both examples and labels, and gives a simple statistical lower bound showing that some distributional assumption is necessary to succeed in this setting.
Robust Estimation via Robust Gradient Estimation
TLDR
The workhorse is a novel robust variant of gradient descent, and conditions are provided under which this variant yields accurate estimators for general convex risk minimization problems.
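To make the idea of robust gradient estimation concrete, here is a minimal sketch under the assumption of squared-loss linear regression, using a coordinate-wise trimmed mean of per-sample gradients in place of the plain average. The trimmed mean is one standard robust aggregator, not necessarily the exact estimator from the paper, and `robust_gd` is a hypothetical helper.

```python
# Gradient descent with a robustly aggregated gradient: a coordinate-wise
# trimmed mean discards the eps tails in each coordinate, limiting the
# influence that corrupted samples can exert on each step.
import numpy as np
from scipy.stats import trim_mean

def robust_gd(X, y, eps, lr=0.1, steps=200):
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(steps):
        grads = 2.0 * (X @ beta - y)[:, None] * X   # per-sample gradients, (n, d)
        g = trim_mean(grads, proportiontocut=eps, axis=0)
        beta -= lr * g
    return beta
```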
Robustly Learning any Clusterable Mixture of Gaussians
TLDR
The key ingredients are a new robust identifiability proof of clusters from a Gaussian mixture, which can be captured by the constant-degree sum-of-squares proof system, a novel use of SoS-certifiable anti-concentration, and a new characterization of pairs of Gaussians with small (dimension-independent) overlap in terms of their parameter distance.
Robust Estimation of a Location Parameter
This paper contains a new approach toward a theory of robust estimation; it treats in detail the asymptotic theory of estimating a location parameter for contaminated normal distributions, and…
Outlier-Robust Clustering of Non-Spherical Mixtures
TLDR
The techniques expand the sum-of-squares toolkit to show robust certifiability of TV-separated Gaussian clusters in data and extend to clustering mixtures of arbitrary affine transforms of the uniform distribution on the d-dimensional unit sphere.
List Decodable Mean Estimation in Nearly Linear Time
TLDR
This paper considers robust statistics in the presence of overwhelming outliers, where the majority of the dataset is introduced adversarially, and develops an algorithm for list-decodable mean estimation in this setting that achieves, up to constants, the information-theoretically optimal recovery and optimal sample complexity, and runs in nearly linear time up to polylogarithmic factors in the dimension.
List-Decodable Subspace Recovery via Sum-of-Squares
TLDR
A new method is given that allows error reduction “within SoS” with only a logarithmic cost in the exponent of the running time (in contrast to the polynomial cost in [KKK'19, RY'20]).
List Decodable Subspace Recovery
TLDR
This work provides the first polynomial-time algorithm for the 'list-decodable subspace recovery' problem, and subsumes it under a more general framework of list decoding over distributions that are “certifiably resilient”, capturing state-of-the-art results for list-decodable mean estimation and regression.
Algorithms for heavy-tailed statistics: regression, covariance estimation, and beyond
TLDR
This work narrows the gap between the Gaussian and heavy-tailed settings for polynomial-time estimators, introduces new techniques for high-probability estimation, and suggests numerous new algorithmic questions in this vein.
...