Corpus ID: 248572161

Statistical inference with regularized optimal transport

Ziv Goldfeld, Kengo Kato, Gabriel Rioux, Ritwik Sadhu
Optimal transport (OT) is a versatile framework for comparing probability measures, with many applications in statistics, machine learning, and applied mathematics. However, OT distances suffer from computational and statistical scalability issues in high dimensions, which has motivated the study of regularized OT methods such as slicing, smoothing, and entropic penalization. This work establishes a unified framework for deriving limit distributions of empirical regularized OT distances, semiparametric e…

Statistical, Robustness, and Computational Guarantees for Sliced Wasserstein Distances

This work quantifies the scalability of sliced Wasserstein distances along three key axes: empirical convergence rates, robustness to data contamination, and efficient computational methods. It characterizes minimax-optimal, dimension-free robust estimation risks and shows an equivalence between robust 1-Wasserstein estimation and robust mean estimation.
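To make the projection-based construction these works study concrete, here is a minimal numpy sketch (function name and defaults are illustrative, not from any of the papers) of a Monte Carlo estimator of the sliced 1-Wasserstein distance, assuming equal sample sizes so that each 1D distance has a sorted-samples closed form:

```python
import numpy as np

def sliced_wasserstein_1(X, Y, n_proj=200, rng=None):
    # Monte Carlo estimate of the sliced 1-Wasserstein distance:
    # average the 1D W1 between random one-dimensional projections
    # of the two samples (assumed to have equal sizes).
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)          # uniform direction on the sphere
        x = np.sort(X @ theta)
        y = np.sort(Y @ theta)
        total += np.mean(np.abs(x - y))          # closed-form 1D W1, equal sizes
    return total / n_proj
```

The sorted-samples formula is what makes slicing cheap: each projection costs O(n log n), versus solving a full OT problem in d dimensions.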

Distributional Convergence of the Sliced Wasserstein Process

A framework is obtained in which to prove distributional limit results for all Wasserstein distances based on one-dimensional projections, and these results are illustrated on a number of examples where no distributional limits were previously known.

Wasserstein Distributionally Robust Optimization via Wasserstein Barycenters

This work proposes constructing the nominal distribution in optimal transport-based distributionally robust optimization problems through the notion of Wasserstein barycenter as an aggregation of data samples from multiple sources and demonstrates that the proposed scheme outperforms other widely used estimators in both the low- and high-dimensional regimes.

Weak limits of entropy regularized Optimal Transport; potentials, plans and divergences

A central limit theorem for the Sinkhorn potentials and the weak limits of the couplings are obtained, proving a conjecture of Harchaoui, Liu and Pal (2020) and enabling statistical inference based on entropy-regularized optimal transport.
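For context, the Sinkhorn potentials and couplings referenced above arise from fixed-point scaling iterations like the following minimal numpy sketch on finite spaces (function name and defaults are illustrative, not taken from the paper):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    # Sinkhorn iterations for entropy-regularized OT on finite spaces:
    # alternately rescale the Gibbs kernel K = exp(-C/eps) so that the
    # resulting coupling P matches the marginals a and b.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # entropic optimal coupling
    cost = np.sum(P * C)              # transport cost under P
    return P, cost
```

The logarithms of u and v (times eps) are the Sinkhorn potentials whose fluctuations the limit theorems above describe.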

Limit distribution theory for f-Divergences

A general methodology for deriving distributional limits for f-divergences, based on the functional delta method and Hadamard directional differentiability, is developed. An application of the limit distribution theory to auditing differential privacy is proposed and analyzed in terms of significance level and power against local alternatives.

Central limit theorem for the Sliced 1-Wasserstein distance and the max-Sliced 1-Wasserstein distance

This paper utilizes the central limit theorem in Banach spaces to derive the limit distribution of the Sliced 1-Wasserstein distance, proves that the relevant function class is P-Donsker under mild moment assumptions, and investigates how many random projections are needed to make the approximation error small with high probability.

Martingale Methods for Sequential Estimation of Convex Functionals and Divergences

An offline-to-sequential device is constructed that converts a wide array of existing offline concentration inequalities into time-uniform confidence sequences that can be continuously monitored, providing valid tests or confidence intervals at arbitrary stopping times.

Gaussian-Smoothed Optimal Transport: Metric Structure and Statistical Efficiency

This work proposes a novel Gaussian-smoothed OT (GOT) framework that achieves the best of both worlds: preserving the 1-Wasserstein metric structure while alleviating the curse of dimensionality in empirical approximation.
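The smoothing idea can be illustrated in one dimension, where W1 between equal-size samples has a closed form. The hypothetical sketch below approximates the Gaussian-smoothed distance by injecting N(0, sigma^2) noise into both samples — a crude Monte Carlo stand-in for convolving the measures, not the paper's estimator:

```python
import numpy as np

def gaussian_smoothed_w1_1d(x, y, sigma=1.0, rng=None):
    # 1D illustration of Gaussian-smoothed OT: approximate the convolution
    # of each empirical measure with N(0, sigma^2) by adding independent
    # Gaussian noise to the samples, then use the sorted-samples closed
    # form for the 1D W1 between equal-size samples.
    rng = np.random.default_rng(rng)
    xs = np.sort(x + sigma * rng.normal(size=x.shape))
    ys = np.sort(y + sigma * rng.normal(size=y.shape))
    return np.mean(np.abs(xs - ys))
```

Because W1 is translation-equivariant and the same noise law is added to both sides, a pure location shift between the samples is preserved by the smoothed distance.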

Smooth p-Wasserstein Distance: Structure, Empirical Approximation, and Statistical Applications

It is proved that the smooth p-Wasserstein distance enjoys a parametric empirical convergence rate of n^{-1/2}, in contrast to the n^{-2/d} rate for the unsmoothed W_p when d ≥ 3, and asymptotic guarantees for two-sample testing and minimum distance estimation using the smooth distance are provided.

Central limit theorems for entropy-regularized optimal transport on finite spaces and statistical applications

This work derives the distributional limits of the empirical Sinkhorn divergence and its centered version (the Sinkhorn loss) and proposes a bootstrap procedure that yields new test statistics for measuring discrepancies between multivariate probability distributions.
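The resampling idea behind such tests can be imitated with a simpler permutation scheme. The sketch below (all names hypothetical) uses the 1D W1 as the discrepancy statistic rather than the Sinkhorn loss studied in the paper, resampling from the pooled data under the null that both samples share one law:

```python
import numpy as np

def permutation_pvalue(x, y, stat, n_boot=200, rng=None):
    # Permutation test for a two-sample discrepancy statistic: under the
    # null that x and y come from the same distribution, any relabeling
    # of the pooled data is equally likely, so the observed statistic is
    # compared against its permutation distribution.
    rng = np.random.default_rng(rng)
    observed = stat(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_boot):
        idx = rng.permutation(pooled.size)
        xb, yb = pooled[idx[: x.size]], pooled[idx[x.size :]]
        count += stat(xb, yb) >= observed
    return (1 + count) / (1 + n_boot)   # add-one correction keeps p > 0
</```

With a well-separated alternative, essentially no permuted statistic exceeds the observed one and the p-value is near its 1/(n_boot+1) floor.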

On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification

The viewpoint of projection robust (PR) OT is adopted, which seeks to maximize the OT cost between two measures over choices of a k-dimensional subspace onto which they are projected. Asymptotic guarantees for two types of minimum PRW estimators are formulated, along with a central limit theorem for the max-sliced Wasserstein estimator under model misspecification.

Asymptotic Guarantees for Generative Modeling Based on the Smooth Wasserstein Distance

This work conducts a thorough statistical study of minimum smooth Wasserstein estimators (MSWEs), first proving the estimator's measurability and asymptotic consistency, and then characterizing the limit distribution of the optimal model parameters and their associated minimal SWD.

Convergence of Gaussian-smoothed optimal transport distance with sub-gamma distributions and dependent samples

This paper provides convergence guarantees for estimating the GOT distance in more general settings and proves convergence for dependent samples, requiring only a condition on the pairwise dependence of the samples measured by the covariance of the feature map of a kernel space.

Plugin Estimation of Smooth Optimal Transport Maps

A central limit theorem is derived for a density plugin estimator of the squared Wasserstein distance, which is centered at its population counterpart when the underlying distributions have sufficiently smooth densities.
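In one dimension the optimal transport map has the closed form T = G^{-1} ∘ F, the target quantile function composed with the source CDF. The following hypothetical sketch estimates it by quantile matching — a far simpler device than the density plugin estimator analyzed above, shown only to make the object concrete:

```python
import numpy as np

def ot_map_1d(x_src, y_tgt, query):
    # 1D optimal transport map estimator T = G^{-1} o F: compose the
    # empirical CDF F of the source sample with the empirical quantile
    # function G^{-1} of the target sample, evaluated at query points.
    x_sorted = np.sort(x_src)
    F = np.searchsorted(x_sorted, query, side="right") / x_src.size
    F = np.clip(F, 0.0, 1.0)           # guard against rounding at the ends
    return np.quantile(y_tgt, F)       # empirical quantile function of target
```

For a location-scale relationship between source and target, this estimator recovers the affine map up to sampling error.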

Approximate Bayesian computation with the Wasserstein distance

This work proposes to avoid the use of summaries and the ensuing loss of information by instead using the Wasserstein distance between the empirical distributions of the observed and synthetic data, and generalizes the well-known approach of using order statistics within approximate Bayesian computation to arbitrary dimensions.
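A minimal rejection-ABC loop in this spirit, specialized to 1D data so the Wasserstein distance reduces to a sorted-sample mean (all names and defaults hypothetical, not the paper's algorithm):

```python
import numpy as np

def abc_wasserstein_1d(y_obs, prior_sample, simulate,
                       n_sims=1000, quantile=0.05, rng=None):
    # Rejection ABC with a Wasserstein discrepancy: draw parameters from
    # the prior, simulate synthetic data of the same size, and keep the
    # draws whose empirical 1D W1 distance to the observed data is among
    # the smallest (below the given quantile of simulated distances).
    rng = np.random.default_rng(rng)
    y_sorted = np.sort(y_obs)
    thetas, dists = [], []
    for _ in range(n_sims):
        theta = prior_sample(rng)
        z = np.sort(simulate(theta, y_obs.size, rng))
        dists.append(np.mean(np.abs(z - y_sorted)))  # 1D W1, equal sizes
        thetas.append(theta)
    cutoff = np.quantile(dists, quantile)
    return np.array([t for t, d in zip(thetas, dists) if d <= cutoff])
```

Because the distance is computed between full empirical distributions, no hand-crafted summary statistics are needed, which is the point of the paper's proposal.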

On parameter estimation with the Wasserstein distance

These results cover the misspecified setting, in which the data-generating process is not assumed to be part of the family of distributions described by the model, and some difficulties arising in the numerical approximation of these estimators are discussed.

Near-optimal estimation of smooth transport maps with kernel sums-of-squares

This paper proposes the first tractable algorithm for which the statistical L² error on the estimated maps nearly matches the existing minimax lower bounds for smooth map estimation. The approach leads to an algorithm with dimension-free polynomial rates in the number of samples, albeit with potentially exponentially dimension-dependent constants.