Hierarchical Sliced Wasserstein Distance

@article{Nguyen2022HierarchicalSW,
  title={Hierarchical Sliced Wasserstein Distance},
  author={Khai Nguyen and Tongzheng Ren and Huy Nguyen and Litu Rout and Tan Minh Nguyen and Nhat Ho},
  journal={ArXiv},
  year={2022},
  volume={abs/2209.13570}
}
Sliced Wasserstein (SW) distance has been widely used in different application scenarios since it can be scaled to a large number of supports without suffering from the curse of dimensionality. The value of the sliced Wasserstein distance is the average transportation cost between one-dimensional representations (projections) of the original measures, which are obtained by the Radon Transform (RT). Despite its efficiency in the number of supports, estimating the sliced Wasserstein requires a relatively large…
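
To make the projection-and-average computation the abstract describes concrete, here is a minimal Monte Carlo sketch of the SW distance between two empirical measures with the same number of uniformly weighted supports; the function name, arguments, and defaults are illustrative choices, not the paper's implementation.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the p-sliced Wasserstein distance between two
    empirical measures with equal numbers of uniformly weighted supports."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Sample random directions uniformly on the unit sphere.
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both point clouds onto each direction (the Radon-transform slices).
    X_proj = X @ theta.T          # shape (n, n_projections)
    Y_proj = Y @ theta.T
    # One-dimensional optimal transport = matching sorted projections.
    X_proj = np.sort(X_proj, axis=0)
    Y_proj = np.sort(Y_proj, axis=0)
    return np.mean(np.abs(X_proj - Y_proj) ** p) ** (1.0 / p)

# Example usage with two random point clouds in 50 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 50))
Y = rng.standard_normal((256, 50)) + 1.0
print(sliced_wasserstein(X, Y, n_projections=500))
```

Each random direction yields a one-dimensional optimal transport problem that is solved exactly by matching sorted projections, which is what keeps the estimator cheap in the number of supports.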

Citations

Improving Generative Flow Networks with Path Regularization

A novel path regularization method based on optimal transport theory that places prior constraints on the underlying structure of GFlowNets to help them better discover the latent structure of the target distribution or enhance their ability to explore the environment in the context of active learning.

References

Showing 1-10 of 72 references

Augmented Sliced Wasserstein Distances

This work proposes a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs), constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks, and provides the condition under which the ASWD is a valid metric and shows it can be obtained by an injective neural network architecture.
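
As a rough illustration of the augmentation idea, the sketch below lifts samples with the trivially injective map x -> [x, g(x)] and computes SW in the augmented space. In the actual ASWD, g is a neural network trained to maximize this quantity; here a fixed random-feature map stands in for it, and all names are illustrative.

```python
import numpy as np

def augmented_sw(X, Y, lift, n_projections=100, p=2, rng=None):
    """Sketch of the ASWD idea: lift samples to a higher-dimensional space
    with an injective map x -> [x, g(x)], then compute SW there."""
    rng = np.random.default_rng(rng)
    # Concatenating x with g(x) is injective for any map g.
    X_aug = np.hstack([X, lift(X)])
    Y_aug = np.hstack([Y, lift(Y)])
    d = X_aug.shape[1]
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    xs = np.sort(X_aug @ theta.T, axis=0)
    ys = np.sort(Y_aug @ theta.T, axis=0)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

# A fixed random-feature map stands in for the learned neural network g.
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 16))
g = lambda X: np.tanh(X @ W)

X = rng.standard_normal((256, 50))
Y = rng.standard_normal((256, 50)) + 1.0
print(augmented_sw(X, Y, lift=g, n_projections=500))
```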

Distributional Sliced-Wasserstein and Applications to Generative Modeling

This paper proposes a novel distance that finds an optimal penalized probability measure over the slices, named the Distributional Sliced-Wasserstein distance (DSWD), shows that the DSWD is a generalization of both SWD and Max-SWD, and shows that the proposed distance can be computed by searching for the push-forward measure over a set of probability measures satisfying certain constraints.

Generalized Sliced Wasserstein Distances

The generalized Radon transform is utilized to define a new family of distances for probability measures, called generalized sliced-Wasserstein (GSW) distances, and it is shown that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance.
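
A hedged sketch of the GSW construction: the linear projection <x, theta> is replaced by a nonlinear defining function g(x, theta), here the circular one g(x, theta) = ||x - r*theta|| as one example from the GSW family; the function names and the choice of radius are illustrative assumptions.

```python
import numpy as np

def generalized_sw(X, Y, defining_fn, n_projections=100, p=2, rng=None):
    """Sketch of GSW: replace the linear projection <x, theta> with a
    nonlinear defining function g(x, theta), then match sorted 1-D pushforwards."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    xs = np.sort(defining_fn(X, theta), axis=0)   # shape (n, n_projections)
    ys = np.sort(defining_fn(Y, theta), axis=0)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

# Circular defining function g(x, theta) = ||x - r*theta|| for a fixed radius r.
def circular(X, theta, r=5.0):
    # One column of 1-D values per projection direction theta.
    return np.linalg.norm(X[:, None, :] - r * theta[None, :, :], axis=-1)

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 50))
Y = rng.standard_normal((256, 50)) + 1.0
print(generalized_sw(X, Y, circular, n_projections=500))
```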

Sliced Wasserstein Generative Models

Jiqing Wu, Zhiwu Huang, Luc Van Gool · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
This paper proposes to approximate SWDs with a small number of parameterized orthogonal projections in an end-to-end deep learning fashion and designs two types of differentiable SWD blocks to equip modern generative frameworks: Auto-Encoders and Generative Adversarial Networks.

Max-Sliced Wasserstein Distance and Its Use for GANs

This work demonstrates that the recently proposed sliced Wasserstein distance easily trains GANs on high-dimensional images up to a resolution of 256x256, and develops the max-sliced Wasserstein distance, which enjoys compelling sample complexity while reducing projection complexity, albeit necessitating a max estimation.
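
The max-sliced idea replaces the average over random directions with the single worst-case direction. Below is a minimal sketch that searches for that direction by projected gradient ascent on the unit sphere; the optimizer, step counts, and function name are illustrative assumptions, not the authors' training recipe.

```python
import torch

def max_sliced_wasserstein(X, Y, n_steps=200, lr=0.05, p=2):
    """Sketch of max-SW: find the single direction whose 1-D Wasserstein
    distance between the projected samples is largest."""
    d = X.shape[1]
    theta = torch.randn(d)
    theta = theta / theta.norm()
    theta.requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        xs, _ = torch.sort(X @ theta)
        ys, _ = torch.sort(Y @ theta)
        w_p = torch.mean(torch.abs(xs - ys) ** p)
        (-w_p).backward()          # ascend on the 1-D transport cost
        opt.step()
        with torch.no_grad():      # re-project onto the unit sphere
            theta /= theta.norm()
    with torch.no_grad():
        xs, _ = torch.sort(X @ theta)
        ys, _ = torch.sort(Y @ theta)
        return torch.mean(torch.abs(xs - ys) ** p) ** (1.0 / p)

torch.manual_seed(0)
X = torch.randn(512, 50)
Y = torch.randn(512, 50) + 1.0
print(max_sliced_wasserstein(X, Y))
```

Because only one direction is kept, the projection complexity at evaluation time drops to a single slice, at the price of the inner maximization.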

Sliced Gromov-Wasserstein

A novel OT discrepancy is defined that can deal with large-scale distributions via a slicing approach and is demonstrated to be able to tackle similar problems as GW while being several orders of magnitude faster to compute.

Revisiting Sliced Wasserstein on Images: From Vectorization to Convolution

Convolution sliced Wasserstein (CSW) is derived by incorporating stride, dilation, and a non-linear activation function into the convolution operators, and is demonstrated to have favorable performance in comparing probability measures over images and in training deep generative models on images.

Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections

This work adopts a new perspective on approximating SW by making use of the concentration of measure phenomenon and develops a simple deterministic approximation that is both accurate and easy to use compared to the usual Monte Carlo approximation.

Sliced-Wasserstein Gradient Flows

It is argued that this method is more flexible than JKO-ICNN, since SW enjoys a closed-form differentiable approximation, and the density at each step can be parameterized by any generative model, which alleviates the computational burden and makes it tractable in higher dimensions.
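
To illustrate the differentiable approximation that makes SW convenient for flows, here is a simplified particle descent on a Monte Carlo SW_2^2 objective. This is only a sketch under that simplification, not the JKO scheme or the generative-model parameterization studied in the paper, and every name below is illustrative.

```python
import torch

def sw2_squared(X, Y, n_projections=128):
    """Monte Carlo estimate of SW_2^2 between equal-size empirical measures."""
    theta = torch.randn(n_projections, X.shape[1])
    theta = theta / theta.norm(dim=1, keepdim=True)
    xs, _ = torch.sort(X @ theta.T, dim=0)   # sorted 1-D projections
    ys, _ = torch.sort(Y @ theta.T, dim=0)
    return torch.mean((xs - ys) ** 2)

# Particles flow toward a fixed target point cloud by descending SW_2^2,
# which stays differentiable because sorting is differentiable under autograd.
torch.manual_seed(0)
target = 0.5 * torch.randn(512, 2) + torch.tensor([3.0, 0.0])
particles = torch.randn(512, 2, requires_grad=True)
opt = torch.optim.SGD([particles], lr=5.0)

for _ in range(500):
    opt.zero_grad()
    loss = sw2_squared(particles, target)
    loss.backward()
    opt.step()

print(float(sw2_squared(particles.detach(), target)))  # should be close to 0
```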

Sliced Wasserstein Variational Inference

This work proposes a new variational inference method by minimizing the sliced Wasserstein distance, a valid metric arising from optimal transport, which does not require a tractable density function for the variational distributions, so that the approximating families can be amortized by generators such as neural networks.
...