Corpus ID: 174799385

Unbiased estimators for the variance of MMD estimators

@article{Sutherland2019UnbiasedEF,
  title={Unbiased estimators for the variance of MMD estimators},
  author={Danica J. Sutherland},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.02104}
}
The maximum mean discrepancy (MMD) is a kernel-based distance between probability distributions useful in many applications (Gretton et al. 2012), and it admits a simple estimator with pleasing computational and statistical properties. Being able to efficiently estimate the variance of this estimator is helpful for various problems in two-sample testing. Towards this end, Bounliphone et al. (2016) used the theory of U-statistics to derive estimators for the variance of an MMD estimator, and…
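As background, the estimator in question is the unbiased U-statistic estimator of the squared MMD from Gretton et al. (2012). Below is a minimal NumPy sketch of that estimator with a Gaussian RBF kernel; the function names and the default bandwidth are illustrative rather than from the paper, and the paper's unbiased variance estimator itself is more involved and not reproduced here.

import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    """Gaussian RBF kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2 * A @ B.T)
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased U-statistic estimator of squared MMD (Gretton et al. 2012):

    MMD_u^2 = mean of k(x_i, x_j) over i != j
            + mean of k(y_i, y_j) over i != j
            - 2 * mean of k(x_i, y_j) over all i, j
    """
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    # Exclude diagonal terms so the within-sample means are unbiased.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    term_xy = Kxy.mean()
    return term_xx + term_yy - 2 * term_xy

Because the diagonal terms are excluded, this estimator is unbiased but, unlike the true squared MMD, it can take negative values on finite samples.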
Maximum Mean Discrepancy is Aware of Adversarial Attacks
It is validated that MMD is aware of adversarial attacks, which opens a new avenue for adversarial attack detection based on two-sample tests.
A Novel Non-parametric Two-Sample Test on Imprecise Observations
A fuzzy-based maximum mean discrepancy (F-MMD) is proposed, a powerful two-sample test on imprecise observations that significantly outperforms competing two-sample test methods when observations are imprecise.
Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
It is verified that the MMD test is aware of adversarial attacks, which opens a new avenue for adversarial data detection based on two-sample tests.
A Kernel Two-Sample Test for Functional Data
We propose a nonparametric two-sample test procedure based on Maximum Mean Discrepancy (MMD) for testing the hypothesis that two samples of functions have the same underlying distribution, using a…
Learning Deep Kernels for Non-Parametric Two-Sample Tests
A class of kernel-based two-sample tests is introduced to determine whether two sets of samples are drawn from the same distribution; the approach applies both to kernels on deep features and to simpler radial basis or multiple kernel learning kernels (a sketch of the permutation-testing recipe these tests share appears after this list).
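Several of these citing works share the same basic testing recipe: compute an MMD statistic on the two samples, then calibrate it against a permutation null. Below is a rough sketch of that standard permutation test, reusing the illustrative mmd2_unbiased helper from the earlier sketch; the permutation count is likewise illustrative.

def mmd_permutation_test(X, Y, bandwidth=1.0, n_perms=1000, seed=None):
    """Approximate p-value for H0: X and Y are drawn from the same distribution."""
    rng = np.random.default_rng(seed)
    observed = mmd2_unbiased(X, Y, bandwidth)
    pooled = np.concatenate([X, Y])
    m = len(X)
    exceed = 0
    for _ in range(n_perms):
        # Relabel the pooled sample at random to simulate the null hypothesis.
        perm = rng.permutation(len(pooled))
        stat = mmd2_unbiased(pooled[perm[:m]], pooled[perm[m:]], bandwidth)
        exceed += stat >= observed
    # Add-one smoothing keeps the p-value valid with finitely many permutations.
    return (exceed + 1) / (n_perms + 1)

For example, calling mmd_permutation_test on two Gaussian samples with shifted means should typically return a small p-value, while two samples from the same distribution should not.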

References

A Kernel Two-Sample Test
This work proposes a framework for analyzing and comparing distributions, used to construct statistical tests that determine whether two samples are drawn from different distributions, and presents two distribution-free tests based on large deviation bounds for the maximum mean discrepancy (MMD).
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GANs), in which a model attempts to generate realistic samples and a discriminator attempts to tell these apart from data samples.
A Test of Relative Similarity For Model Selection in Generative Models
A statistical test of relative similarity is introduced, which is used to determine which of two models generates samples that are significantly closer to a real-world reference dataset of interest.
Kernel Mean Embedding of Distributions: A Review and Beyond
A comprehensive review of existing work and recent advances in Hilbert space embeddings of distributions, together with a discussion of the most challenging issues and open problems that could lead to new research directions.
Approximation Theorems of Mathematical Statistics
Preliminary Tools and Foundations. The Basic Sample Statistics. Transformations of Given Statistics. Asymptotic Theory in Parametric Inference. U-Statistics. Von Mises Differentiable Statistical Functions…