# Approximate Bayesian Computation with the Sliced-Wasserstein Distance

```bibtex
@article{Nadjahi2019ApproximateBC,
  title={Approximate Bayesian Computation with the Sliced-Wasserstein Distance},
  author={Kimia Nadjahi and Valentin De Bortoli and Alain Durmus and Roland Badeau and Umut Simsekli},
  journal={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2019},
  pages={5470-5474}
}
```
• Published 28 October 2019
• Computer Science
• ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Approximate Bayesian Computation (ABC) is a popular method for approximate inference in generative models with intractable but easy-to-sample likelihood. It constructs an approximate posterior distribution by finding parameters for which the simulated data are close to the observations in terms of summary statistics. These statistics are defined beforehand and might induce a loss of information, which has been shown to deteriorate the quality of the approximation. To overcome this problem…
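The rejection-sampling flavor of ABC described in the abstract, with a sliced-Wasserstein discrepancy in place of summary statistics, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function names, the number of projections, the tolerance `eps`, and the toy Gaussian setup are all assumptions.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=50, p=2, rng=None):
    """Monte Carlo estimate of the order-p sliced-Wasserstein distance
    between two equal-size empirical samples x, y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    # Draw random directions uniformly on the unit sphere in R^d.
    theta = rng.normal(size=(n_projections, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction; in one dimension, the
    # Wasserstein distance is the L^p distance between sorted samples.
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    return np.mean(np.abs(px - py) ** p) ** (1.0 / p)

def abc_rejection_sw(y_obs, prior_sampler, simulator, n_draws=1000, eps=0.5):
    """Plain ABC rejection: keep parameter draws whose simulated data lie
    within eps of the observations in sliced-Wasserstein distance."""
    accepted = [t for t in (prior_sampler() for _ in range(n_draws))
                if sliced_wasserstein(y_obs, simulator(t)) <= eps]
    return np.array(accepted)

# Toy example: infer the mean of a 2-D Gaussian with known unit covariance.
rng = np.random.default_rng(0)
y_obs = rng.normal(loc=0.0, size=(100, 2))
posterior = abc_rejection_sw(
    y_obs,
    prior_sampler=lambda: rng.normal(0.0, 2.0),
    simulator=lambda t: rng.normal(t, 1.0, size=(100, 2)),
    n_draws=200,
)
```

Because each projection reduces the comparison to sorting one-dimensional samples, the per-evaluation cost scales as O(Ln log n) for L projections, which is what makes the sliced distance attractive inside a simulation-heavy ABC loop.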

## Figures and Tables from this paper

(Figures and tables are not reproduced in this text extraction.)

## Citations

• NeurIPS 2021 (Computer Science): This work adopts a new perspective to approximate SW by exploiting the concentration-of-measure phenomenon, and develops a simple deterministic approximation that is both accurate and easier to use than the usual Monte Carlo approximation.
• ArXiv 2022 (Computer Science): The PAC-Bayesian theory and the central observation that SW actually hinges on a slice-distribution-dependent Gibbs risk are leveraged to bring new contributions to this line of research.
• ArXiv 2022 (Computer Science): This work quantifies the scalability of sliced Wasserstein distances from three key aspects: empirical convergence rates, robustness to data contamination, and efficient computational methods. It characterizes minimax-optimal, dimension-free robust estimation risks and shows an equivalence between robust 1-Wasserstein estimation and robust mean estimation.
• Electronic Journal of Statistics 2022 (Computer Science): Confidence intervals for the sliced Wasserstein distance are constructed which have finite-sample validity under no assumptions or under mild moment assumptions, and which are adaptive in length to the regularity of the underlying distributions.
• 2021 (Computer Science): Results are sample-complexity bounds demonstrating that, under smoothness conditions on the generator, QMC can significantly reduce the number of samples required to reach a given accuracy when using three of the most common discrepancies: the maximum mean discrepancy, the Wasserstein distance, and the Sinkhorn divergence.
• ArXiv 2022 (Computer Science): This work proposes fast, deterministic approximations of the generalized sliced Wasserstein distance that exploit the concentration of random projections when the defining functions are polynomial, circular, or neural-network-type functions.
• ArXiv 2020 (Computer Science): A broad family of probability metrics, coined Generalized Sliced Probability Metrics (GSPMs) and deeply rooted in the generalized Radon transform, is introduced, and it is shown that, under mild assumptions, the associated gradient flow converges to the global optimum.
• ArXiv 2022 (Computer Science): The metricity of the Hierarchical Sliced Wasserstein (HSW) distance is derived by proving the injectivity of the hierarchical Radon transform (HRT), and the theoretical properties of HSW are investigated, including its connection to SW variants and its computational and sample complexities.
• ArXiv 2022 (Computer Science): Convolution sliced Wasserstein (CSW) is derived by incorporating stride, dilation, and non-linear activation functions into the convolution operators, and is demonstrated to perform favorably when comparing probability measures over images and when training deep generative models on images.
• ICML 2022 (Computer Science): This work proposes a novel mini-batch scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), that satisfies the optimal coupling between mini-batches and can be seen as an approximation of a well-defined distance on the space of probability measures.

## References

Showing 1-10 of 30 references.

• Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2019 (Computer Science): This work proposes to avoid the use of summaries and the ensuing loss of information by instead using the Wasserstein distance between the empirical distributions of the observed and synthetic data, and generalizes the well-known approach of using order statistics within approximate Bayesian computation to arbitrary dimensions.
• AISTATS 2016 (Computer Science): This paper proposes a fully nonparametric ABC paradigm which circumvents the need for manually selecting summary statistics, using the maximum mean discrepancy (MMD) as a dissimilarity measure between the distributions of observed and simulated data.
• This work adopts a Kullback-Leibler divergence estimator to assess the data discrepancy and achieves comparable or higher quasi-posterior quality than existing methods using other discrepancy measures.
• NeurIPS 2019 (Computer Science, Mathematics): A central limit theorem is proved which characterizes the asymptotic distribution of the estimators and establishes a convergence rate of $\sqrt{n}$, where $n$ denotes the number of observed data points.
• ICML 2019 (Computer Science): This study proposes a novel parameter-free algorithm for learning the underlying distributions of complicated datasets and sampling from them, based on a functional optimization problem that seeks a measure as close as possible to the data distribution while remaining expressive enough for generative modeling.
• NeurIPS 2019 (Computer Science): The generalized Radon transform is used to define a new family of distances for probability measures, called generalized sliced-Wasserstein (GSW) distances, and it is shown that, like the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance.
• 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (Computer Science): This work considers an alternative formulation for generative modeling based on random projections which, in its simplest form, yields a single objective rather than a saddle-point formulation, and is found to be significantly more stable than even the improved Wasserstein GAN.
• 2012 (Computer Science): This work shows how to construct appropriate summary statistics for ABC in a semi-automatic manner, and shows that the optimal summary statistics are the posterior means of the parameters.
• Genetics 2002 (Computer Science): A key advantage of the method is that the nuisance parameters are automatically integrated out in the simulation step, so that the large numbers of nuisance parameters arising in population-genetics problems can be handled without difficulty.
• Bernoulli 2019 (Mathematics, Computer Science): This work considers the fundamental question of how quickly the empirical measure obtained from $n$ independent samples from $\mu$ approaches $\mu$ in the Wasserstein distance of any order, and proves sharp asymptotic and finite-sample results for this rate of convergence for general measures on general compact metric spaces.
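The last reference above concerns how fast the empirical measure approaches $\mu$ in Wasserstein distance. In one dimension, for equal-size samples, the order-$p$ Wasserstein distance has a closed form as the $L^p$ distance between sorted samples, which is also the primitive that makes the sliced-Wasserstein distance cheap to evaluate. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """Order-p Wasserstein distance between two equal-size 1-D samples:
    the L^p distance between the sorted samples (empirical quantiles)."""
    qx, qy = np.sort(x), np.sort(y)
    return np.mean(np.abs(qx - qy) ** p) ** (1.0 / p)

# Translating a sample by a constant c moves it exactly c in Wasserstein
# distance, since sorting commutes with the shift.
x = np.random.default_rng(1).normal(size=1000)
print(wasserstein_1d(x, x + 2.0))  # 2.0 up to floating point
```

Each evaluation costs only O(n log n) for the sorts, in contrast to the cubic-in-n cost of solving a general optimal transport problem exactly.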