Corpus ID: 219176865

Estimating Principal Components under Adversarial Perturbations

@inproceedings{Awasthi2020EstimatingPC,
  title={Estimating Principal Components under Adversarial Perturbations},
  author={Pranjal Awasthi and Xue Chen and Aravindan Vijayaraghavan},
  booktitle={COLT},
  year={2020}
}
Robustness is a key requirement for widespread deployment of machine learning algorithms, and has received much attention in both statistics and computer science. We study a natural model of robustness for high-dimensional statistical estimation problems that we call the adversarial perturbation model. An adversary can perturb every sample arbitrarily up to a specified magnitude $\delta$ measured in some $\ell_q$ norm, say $\ell_\infty$. Our model is motivated by emerging paradigms such as low… 
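
As a concrete illustration of the perturbation model, the following minimal sketch (ours, not from the paper) perturbs every sample by at most $\delta$ in $\ell_\infty$ norm and compares the top principal component before and after; the random-sign adversary is only a placeholder for a genuinely adversarial choice.

import numpy as np

rng = np.random.default_rng(0)
n, d, delta = 500, 50, 0.1

# Clean data concentrated along a hidden unit direction v.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
X = np.outer(rng.standard_normal(n), v) + 0.1 * rng.standard_normal((n, d))

def top_component(X):
    # Top right singular vector of the centered data matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

# Perturb every coordinate of every sample by +/- delta; a real
# adversary would pick the signs to do maximal damage.
X_adv = X + delta * rng.choice([-1.0, 1.0], size=(n, d))

u_clean, u_adv = top_component(X), top_component(X_adv)
print("overlap |<u_clean, u_adv>| =", abs(u_clean @ u_adv))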

Understanding Simultaneous Train and Test Robustness

This work shows that the two seemingly different notions of robustness at train time and test time are closely related, and that this connection can be leveraged to develop algorithmic techniques applicable in both settings.

Adversarially robust subspace learning in the spiked covariance model

This work derives the adversarial projection risk when the data follow a multivariate Gaussian distribution with spiked covariance (the so-called spiked covariance model), and establishes an upper bound on the empirical risk for finding a robust subspace in the general spiked covariance model.
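
For reference, a minimal sketch of sampling from the spiked covariance model assumed here, $\Sigma = \theta\, vv^\top + I$ with a single planted spike $v$ (our illustration, not the paper's code):

import numpy as np

rng = np.random.default_rng(1)
n, d, theta = 1000, 100, 5.0

# Unit spike direction v; covariance is Sigma = theta * v v^T + I.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

# x_i = sqrt(theta) * g_i * v + z_i with g_i, z_i standard Gaussian.
g = rng.standard_normal(n)
Z = rng.standard_normal((n, d))
X = np.sqrt(theta) * g[:, None] * v + Z

# Ordinary PCA recovers the spike when theta is large enough.
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
print("overlap with spike:", abs(Vt[0] @ v))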

References

Showing 1-10 of 73 references

Squared-Norm Empirical Process in Banach Space

This note extends a recent result of Mendelson on the supremum of a quadratic process to squared norms of functions taking values in a Banach space. Our method of proof is a reduction by a…

Adversarially Robust Low Dimensional Representations

This work formulates a natural extension of Principal Component Analysis (PCA) in which the goal is to find a low-dimensional subspace that represents the given data with minimum projection error and is, in addition, robust to small perturbations measured in the $\ell_q$ norm.
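
One natural way to write such a robust objective (our notation; a sketch, and the paper's exact formulation may differ): over rank-$k$ orthogonal projections $\Pi$ of the data $x_1, \dots, x_n$,

$$\min_{\Pi:\ \Pi^\top = \Pi = \Pi^2,\ \mathrm{rank}(\Pi) = k}\ \frac{1}{n}\sum_{i=1}^{n}\ \max_{\|\delta_i\|_q \le \delta}\ \big\|(I-\Pi)(x_i+\delta_i)\big\|_2^2,$$

i.e., the subspace must keep the projection error small even after each point is adversarially moved by up to $\delta$ in $\ell_q$ norm.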

On Robustness to Adversarial Examples and Polynomial Optimization

The main contribution of this work is to exhibit a strong connection between achieving robustness to adversarial examples, and a rich class of polynomial optimization problems, thereby making progress on the above questions.

High Dimensional Probability

About forty years ago it was realized by several researchers that the essential features of certain objects of Probability theory, notably Gaussian processes and limit theorems, may be better…

Sever: A Robust Meta-Algorithm for Stochastic Optimization

This work introduces a new meta-algorithm that takes in a base learner, such as least squares or stochastic gradient descent, and hardens it to be resistant to outliers; in both cases, the hardened learner exhibits substantially greater robustness than several baselines.
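
A minimal sketch of this style of filtering loop, with least squares as the base learner (the stopping rule and the fraction of points dropped per round are simplified placeholders, not the paper's exact choices):

import numpy as np

def sever_style_least_squares(X, y, rounds=5, drop_frac=0.02):
    keep = np.arange(len(y))
    for _ in range(rounds):
        # 1. Run the base learner on the currently kept points.
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        # 2. Per-point gradients of the squared loss at w.
        residuals = X[keep] @ w - y[keep]
        grads = residuals[:, None] * X[keep]
        # 3. Score each point by its component along the top singular
        #    direction of the centered gradients.
        G = grads - grads.mean(axis=0)
        _, _, Vt = np.linalg.svd(G, full_matrices=False)
        scores = (G @ Vt[0]) ** 2
        # 4. Drop the highest-scoring points and repeat.
        n_drop = max(1, int(drop_frac * len(keep)))
        keep = keep[np.argsort(scores)[:-n_drop]]
    # Refit on the final filtered set.
    w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return w, keep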

Learning geometric concepts with nasty noise

The first polynomial-time PAC learning algorithms for low-degree polynomial threshold functions (PTFs) and intersections of halfspaces with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution are given.

Robustly Learning a Gaussian: Getting Optimal Error, Efficiently

This work gives robust estimators that achieve estimation error $O(\varepsilon)$ in the total variation distance, which is optimal up to a universal constant that is independent of the dimension.

Tighten after Relax: Minimax-Optimal Sparse PCA in Polynomial Time

This paper proposes a two-stage sparse PCA procedure that attains the optimal principal subspace estimator in polynomial time and motivates a general paradigm of tackling nonconvex statistical learning problems with provable statistical guarantees.
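
As a flavor of the "tighten" stage, here is a standard truncated power iteration for sparse PCA (a common refinement step; the paper's actual two-stage procedure differs in its details):

import numpy as np

def truncated_power_iteration(Sigma, s, iters=100, seed=0):
    # Power iteration on the sample covariance Sigma (e.g. X.T @ X / n),
    # hard-thresholding to the s largest-magnitude entries each step.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(Sigma.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = Sigma @ v
        v[np.argsort(np.abs(v))[:-s]] = 0.0  # keep only s entries
        v /= np.linalg.norm(v)
    return v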

Complexity Theoretic Lower Bounds for Sparse Principal Component Detection

The performance of a test is measured by the smallest signal strength it can detect; a computationally efficient method based on semidefinite programming is proposed, and it is proved that the statistical performance of this test cannot be strictly improved by any computationally efficient method.

Coloring Random and Semi-Random k-Colorable Graphs

Algorithms that color randomly generated k-colorable graphs at much lower edge densities than previous approaches are presented, and it is shown that even for quite low noise rates, semi-random k-colorable graphs can be optimally colored with high probability.
...