Corpus ID: 219177383

Universal Robust Regression via Maximum Mean Discrepancy

@article{Alquier2020UniversalRR,
  title={Universal Robust Regression via Maximum Mean Discrepancy},
  author={Pierre Alquier and Mathieu Gerber},
  journal={arXiv: Statistics Theory},
  year={2020}
}
Many datasets are collected automatically and are thus easily contaminated by outliers. To overcome this issue, there has recently been renewed interest in robust estimation methods. However, most of these methods are designed for a specific purpose, such as estimation of the mean or linear regression. We propose estimators based on Maximum Mean Discrepancy (MMD) optimization as a universal framework for robust regression. We provide non-asymptotic error bounds, and show that our…
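For intuition, the MMD between distributions P and Q induced by a kernel k is the RKHS distance between their mean embeddings, MMD_k(P, Q) = || E_{X~P} k(X, ·) − E_{Z~Q} k(Z, ·) ||_H, and the proposed estimators minimize this distance between the fitted model and the data. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a Gaussian kernel, a toy linear model with Gaussian noise simulated with common random numbers, and the standard biased V-statistic estimate of MMD².

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(a, b, gamma=1.0):
    # k(u, v) = exp(-gamma * ||u - v||^2), evaluated for all pairs of rows
    diff = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(diff ** 2, axis=-1))

def mmd2(x, y, gamma=1.0):
    # Biased (V-statistic) estimate of the squared MMD between two samples
    return (gaussian_kernel(x, x, gamma).mean()
            - 2.0 * gaussian_kernel(x, y, gamma).mean()
            + gaussian_kernel(y, y, gamma).mean())

def objective(theta, X, y, noise):
    # Squared MMD between observed pairs (x_i, y_i) and pairs simulated from a
    # toy linear model y = X @ theta + noise; the noise is fixed across
    # evaluations (common random numbers) so the objective is deterministic.
    y_sim = X @ theta + noise
    return mmd2(np.column_stack([X, y]), np.column_stack([X, y_sim]))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = X @ np.array([1.0, -2.0]) + rng.standard_normal(200)
y[:10] = 100.0  # gross outliers contaminating 5% of the responses

noise = np.random.default_rng(1).standard_normal(200)
fit = minimize(objective, x0=np.zeros(2), args=(X, y, noise), method="Nelder-Mead")
print(fit.x)  # roughly recovers (1, -2) despite the contamination
```

Because each observation enters the objective only through bounded kernel evaluations, the influence of any single outlier is automatically capped, which is the intuition behind the robustness results.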
3 Citations

Estimation of copulas via Maximum Mean Discrepancy
TLDR: This paper proposes a procedure based on the Maximum Mean Discrepancy (MMD) principle for estimating parametric copula models, and derives non-asymptotic oracle inequalities, consistency, and asymptotic normality of the new estimator.
Robust Bayesian Inference for Simulator-based Models via the MMD Posterior Bootstrap
TLDR: This paper combines the posterior bootstrap with maximum mean discrepancy estimators, yielding a highly parallelisable Bayesian inference algorithm with strong robustness properties for simulator-based models.
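To make the idea concrete, here is one hypothetical way such a scheme can be assembled (a sketch only, not the paper's algorithm): draw Dirichlet weights over the observations, minimize a weighted MMD between the reweighted data and the simulator output, and treat each minimizer as an approximate posterior draw. The Gaussian location simulator and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def k(a, b, gamma=1.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def weighted_mmd2(w, x, y_sim, gamma=1.0):
    # Squared MMD between the w-weighted empirical distribution on the data x
    # and the uniform empirical distribution on the simulator draws y_sim
    return (w @ k(x, x, gamma) @ w
            - 2.0 * (w @ k(x, y_sim, gamma)).mean()
            + k(y_sim, y_sim, gamma).mean())

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 1.0, 95), rng.normal(50.0, 1.0, 5)])  # 5% outliers
noise = rng.standard_normal(200)  # common random numbers for the simulator

draws = []
for _ in range(100):  # each bootstrap replicate yields one approximate posterior draw
    w = rng.dirichlet(np.ones(len(x)))
    res = minimize_scalar(lambda th: weighted_mmd2(w, x, th + noise),
                          bounds=(-10.0, 10.0), method="bounded")
    draws.append(res.x)
print(np.mean(draws), np.std(draws))  # concentrates near 2 despite the outliers
```

Each replicate's minimization is independent of the others, which is what makes this style of inference embarrassingly parallel.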
Discrepancy-based Inference for Intractable Generative Models using Quasi-Monte Carlo
TLDR: The key results are sample complexity bounds which demonstrate that, under smoothness conditions on the generator, QMC can significantly reduce the number of samples required to obtain a given level of accuracy when using three of the most common discrepancies: the maximum mean discrepancy, the Wasserstein distance, and the Sinkhorn divergence.
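As a concrete, hypothetical illustration of the MC-versus-QMC comparison: drive a smooth generator with either i.i.d. uniforms or scrambled Sobol points (via scipy.stats.qmc) and compare the variability of the resulting MMD² estimates. The generator and sample sizes below are made up for the example.

```python
import numpy as np
from scipy.stats import norm, qmc

def k(a, b, gamma=1.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def mmd2(x, y, gamma=1.0):
    # Biased (V-statistic) estimate of the squared MMD between two 1-D samples
    return k(x, x, gamma).mean() - 2.0 * k(x, y, gamma).mean() + k(y, y, gamma).mean()

def generator(u, theta):
    # Smooth generator: inverse-CDF transform of uniforms, then a location shift
    return norm.ppf(u).ravel() + theta

data = np.random.default_rng(0).normal(2.0, 1.0, 512)

mc_vals, qmc_vals = [], []
for seed in range(20):
    u_mc = np.random.default_rng(seed).random((256, 1))           # i.i.d. uniforms
    u_qmc = qmc.Sobol(d=1, scramble=True, seed=seed).random(256)  # scrambled Sobol
    mc_vals.append(mmd2(data, generator(u_mc, 2.0)))
    qmc_vals.append(mmd2(data, generator(u_qmc, 2.0)))
print("MC  std:", np.std(mc_vals))
print("QMC std:", np.std(qmc_vals))  # typically smaller, reflecting the faster QMC rate
```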

References

Showing 1–10 of 97 references
An Introduction to the Theory of Reproducing Kernel Hilbert Spaces
Reproducing kernel Hilbert spaces have developed into an important tool in many areas, especially statistics and machine learning, and they play a valuable role in complex analysis, probability, …
Measure Theory
These are some brief notes on measure theory, concentrating on Lebesgue measure on R^n. Some missing topics I would have liked to have included had time permitted are: the change of variable formula, …
Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces
TLDR: This work treats dimensionality reduction as the problem of finding a low-dimensional “effective subspace” of X that retains the statistical relationship between X and Y, and establishes a general nonparametric characterization of conditional independence using covariance operators on a reproducing kernel Hilbert space.
Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences
This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes, and frequentist kernel methods based on reproducing kernel Hilbert spaces.
Characteristic and Universal Tensor Product Kernels
TLDR: This paper answers questions about when HSIC characterizes independence and when MMD with a tensor product kernel can discriminate probability distributions, by studying various notions of the characteristic property of the tensor product kernel.
Hilbert space embeddings of conditional distributions with applications to dynamical systems
TLDR: This paper derives a kernel estimator of the conditional embedding, shows its connection to ordinary embeddings, and uses it to obtain a nonparametric method for modeling dynamical systems in which the belief state of the system is maintained as a conditional embedding.
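For reference, the empirical version of the conditional embedding has a simple closed form: with paired samples (x_i, y_i), kernel matrix K on the inputs, and ridge parameter lam, the embedding of P(Y | X = x) is represented by weights beta(x) = (K + n*lam*I)^{-1} k_X(x), so conditional expectations become weighted sums over the training outputs. A minimal sketch on a made-up regression task (kernel, lam, and data all illustrative):

```python
import numpy as np

def k_gauss(a, b, gamma=1.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

n, lam = len(x), 1e-3
K = k_gauss(x, x)

# Weights representing the embedding of P(Y | X = 1): beta = (K + n*lam*I)^{-1} k_X(1)
beta = np.linalg.solve(K + n * lam * np.eye(n), k_gauss(x, np.array([1.0]))).ravel()
print("E[Y | X = 1] ~", beta @ y, "  (truth: sin(1) =", np.sin(1.0), ")")
```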
Finite sample properties of parametric MMD estimation: robustness to misspecification and dependence
TLDR: This paper tackles the problem of universal estimation using the minimum distance estimator of Briol et al. (2019) based on the Maximum Mean Discrepancy, and shows that the estimator is robust both to dependence and to the presence of outliers in the dataset.
All-in-one robust estimator of the Gaussian mean
TLDR: It is shown that a single robust estimator of the mean of a multivariate Gaussian distribution can enjoy five desirable properties, and that it can be extended to sub-Gaussian distributions as well as to the cases of unknown contamination rate or unknown covariance matrix.
Robust sub-Gaussian estimation of a mean vector in nearly linear time
TLDR: The algorithm is fully data-dependent, using neither the proportion of outliers nor the targeted convergence rate in its construction, and combines recently developed tools for median-of-means estimators with covering semidefinite programs.
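For context, the median-of-means building block mentioned here works as follows (a generic sketch of the basic coordinate-wise variant; the paper's nearly-linear-time algorithm combines such block means with covering semidefinite programs, which this omits):

```python
import numpy as np

def median_of_means(x, n_blocks=10, seed=0):
    # Randomly partition the sample into blocks, average within each block,
    # then take the coordinate-wise median of the block means. A few corrupted
    # points can ruin at most a few blocks, so the median remains stable.
    idx = np.random.default_rng(seed).permutation(len(x))
    block_means = np.array([x[b].mean(axis=0) for b in np.array_split(idx, n_blocks)])
    return np.median(block_means, axis=0)

rng = np.random.default_rng(1)
data = np.vstack([rng.standard_normal((95, 3)),         # inliers
                  50.0 + rng.standard_normal((5, 3))])  # 5% outliers
print("plain mean:      ", data.mean(axis=0))
print("median-of-means: ", median_of_means(data))
```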
A Global Stochastic Optimization Particle Filter Algorithm
TLDR: This paper introduces G-PFSO, a new algorithm for expected log-likelihood maximization in situations where the objective function is multi-modal and/or has saddle points; its target distribution is effectively estimated by means of a standard particle filter algorithm.