• Corpus ID: 211146563

# Distributional Sliced-Wasserstein and Applications to Generative Modeling

@article{Nguyen2020DistributionalSA,
  title={Distributional Sliced-Wasserstein and Applications to Generative Modeling},
  author={Khai Nguyen and Nhat Ho and Tung Pham and Hung Hai Bui},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.07367}
}
• Published 18 February 2020
• Computer Science
• ArXiv
Sliced-Wasserstein distance (SWD) and its variant, Max Sliced-Wasserstein distance (Max-SWD), have been widely used in recent years due to their fast computation and scalability when the probability measures lie in very high dimension. However, these distances still have weaknesses: SWD requires many projection samples because it draws projecting directions from the uniform distribution, while Max-SWD uses only one projection, causing it to lose a large amount of information. In…
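The abstract contrasts SWD, which averages 1D Wasserstein distances over many uniformly sampled projection directions, with Max-SWD, which keeps only the single worst-case direction. A minimal NumPy sketch of both estimators (function names and parameters are illustrative; the candidate-search used for Max-SWD below is a crude stand-in for the actual optimization over the sphere, not the paper's method):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, seed=0):
    """Monte Carlo SWD estimate between two equal-size empirical
    measures given as (n, d) arrays of samples."""
    rng = np.random.default_rng(seed)
    # Sample projection directions uniformly on the unit sphere.
    theta = rng.standard_normal((n_projections, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto every direction at once.
    Xp, Yp = X @ theta.T, Y @ theta.T        # shape (n, n_projections)
    # In 1D, Wasserstein-p between equal-size empirical measures
    # reduces to comparing sorted samples (order statistics).
    diffs = np.abs(np.sort(Xp, axis=0) - np.sort(Yp, axis=0))
    return np.mean(diffs ** p) ** (1.0 / p)

def max_sliced_wasserstein(X, Y, n_candidates=500, p=2, seed=0):
    """Crude Max-SWD lower bound: take the best direction among many
    random candidates instead of truly optimizing over the sphere."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((n_candidates, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    Xp, Yp = X @ theta.T, Y @ theta.T
    diffs = np.abs(np.sort(Xp, axis=0) - np.sort(Yp, axis=0))
    # One 1D distance per candidate direction; keep the largest.
    return np.max(np.mean(diffs ** p, axis=0) ** (1.0 / p))
```

In practice Max-SWD is computed by optimizing the direction (e.g. gradient ascent); the finite candidate search above only lower-bounds the supremum, but it makes the trade-off the abstract describes concrete: SWD averages over many directions, Max-SWD commits to one.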

## Citations

• Computer Science
ICLR
• 2022
This work proposes a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs), constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks, and provides the condition under which the ASWD is a valid metric and shows it can be obtained by an injective neural network architecture.
• Computer Science
Electronic Journal of Statistics
• 2022
Confidence intervals for the Sliced Wasserstein distance are constructed which have finite-sample validity under no assumptions or under mild moment assumptions and are adaptive in length to the regularity of the underlying distributions.
• Computer Science
NeurIPS
• 2020
A first step toward a computational theory of the PRW distance is provided, along with links between optimal transport and Riemannian optimization.
• Computer Science
AISTATS
• 2021
The viewpoint of projection robust (PR) OT is adopted, which seeks to maximize the OT cost between two measures by choosing a $k$-dimensional subspace onto which they can be projected; asymptotic guarantees for two types of minimum PRW estimators and a central limit theorem for the max-sliced Wasserstein estimator under model misspecification are formulated.
• Computer Science
ArXiv
• 2020
A novel non-adversarial framework called Tessellated Wasserstein Auto-encoders (TWAE) is developed to tessellate the support of the target distribution into a given number of regions by the centroidal Voronoi tessellation (CVT) technique, and to design batches of data according to the tessellation instead of random shuffling for accurate computation of the discrepancy.
• Computer Science
ArXiv
• 2020
This work introduces an Encoded Prior Sliced Wasserstein AutoEncoder (EPSWAE), wherein an additional prior-encoder network learns an unconstrained prior to match the encoded data manifold, and applies it to 3D-spiral, MNIST, and CelebA datasets, showing that its latent representations and interpolations are comparable to the state of the art on equivalent architectures.
• Computer Science
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
• 2021
Experiments show that the sliced Wasserstein distance allows the neural network to learn a more efficient representation compared to the Chamfer discrepancy, which is demonstrated on several tasks in 3D computer vision including training a point cloud autoencoder, generative modeling, transfer learning, and point cloud registration.
• Computer Science
ArXiv
• 2021
The proposed Batch of Mini-batches Optimal Transport (BoMb-OT) is a novel mini-batching scheme for optimal transport that can be formulated as a well-defined distance on the space of probability measures and provides a better objective loss than m-OT for doing approximate Bayesian computation, estimating parameters of interest in parametric generative models, and learning non-parametric generative models with gradient flow.
• Computer Science
• 2023
The interpretation of DDMs in terms of image restoration (IR) is established, and a multi-scale training scheme is proposed that improves performance over the standard diffusion process by taking advantage of the flexibility of the forward process.
• Computer Science
ArXiv
• 2022
The PAC-Bayesian theory and the central observation that SW actually hinges on a slice-distribution-dependent Gibbs risk are leveraged to bring new contributions to this line of research.

## References

SHOWING 1-10 OF 53 REFERENCES

• Computer Science
NeurIPS
• 2019
The generalized Radon transform is utilized to define a new family of distances for probability measures, called generalized sliced-Wasserstein (GSW) distances, and it is shown that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance.
• Computer Science
ICLR
• 2022
This work proposes a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs), constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks, and provides the condition under which the ASWD is a valid metric and shows it can be obtained by an injective neural network architecture.
• Computer Science
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2019
This paper proposes to approximate SWDs with a small number of parameterized orthogonal projections in an end-to-end deep learning fashion and designs two types of differentiable SWD blocks to equip modern generative frameworks---Auto-Encoders and Generative Adversarial Networks.
• Computer Science, Mathematics
NeurIPS
• 2019
A central limit theorem is proved, which characterizes the asymptotic distribution of the estimators and establishes a convergence rate of $\sqrt{n}$, where $n$ denotes the number of observed data points.
• Computer Science
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2019
This work demonstrates that the recently proposed sliced Wasserstein distance readily trains GANs on high-dimensional images up to a resolution of 256x256, and develops the max-sliced Wasserstein distance, which enjoys compelling sample complexity while reducing projection complexity, albeit necessitating a max estimation.
• Computer Science
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
• 2018
This work considers an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation and finds its approach to be significantly more stable compared to even the improved Wasserstein GAN.
• Physics
• 2019
The sliced Wasserstein and, more recently, max-sliced Wasserstein metrics $W_p$ have attracted abundant attention in data science and machine learning due to their advantages in tackling the curse of dimensionality…
• Computer Science
NeurIPS
• 2020
A first step toward a computational theory of the PRW distance is provided, along with links between optimal transport and Riemannian optimization.
• Computer Science
ICML
• 2019
This work proposes a "max-min" robust variant of the Wasserstein distance by considering the maximal possible distance that can be realized between two measures, assuming they can be projected orthogonally on a lower $k$-dimensional subspace.
• Computer Science
ArXiv
• 2018
Sliced-Wasserstein Autoencoders (SWAE) are introduced, which are generative models that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or defining a closed-form for the distribution.