Corpus ID: 165164016

Subspace Detours: Building Transport Plans that are Optimal on Subspace Projections

@inproceedings{Muzellec2019SubspaceDB,
  title={Subspace Detours: Building Transport Plans that are Optimal on Subspace Projections},
  author={Boris Muzellec and Marco Cuturi},
  booktitle={NeurIPS},
  year={2019}
}
Computing optimal transport (OT) between measures in high dimensions is doomed by the curse of dimensionality. A popular approach to avoid this curse is to project input measures on lower-dimensional subspaces (1D lines in the case of sliced Wasserstein distances), solve the OT problem between these reduced measures, and settle for the Wasserstein distance between these reductions, rather than that between the original measures. This approach is however difficult to extend to the case in which…
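The sliced approach summarized above exploits the fact that 1D optimal transport between equally weighted point clouds reduces to sorting. A minimal NumPy sketch of a Monte-Carlo sliced Wasserstein estimate (the function name and parameters are illustrative assumptions, not code from the paper):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte-Carlo sliced 2-Wasserstein distance between two point clouds.

    X, Y: (n, d) arrays of samples (equal sizes, uniform weights).
    Each random 1D projection reduces OT to matching sorted samples.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)   # random unit direction
        x_proj = np.sort(X @ theta)      # 1D OT = sorting both projections
        y_proj = np.sort(Y @ theta)
        total += np.mean((x_proj - y_proj) ** 2)
    return np.sqrt(total / n_projections)
```

More projections reduce the Monte-Carlo variance of the estimate at a linear cost in runtime.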
Efficient estimates of optimal transport via low-dimensional embeddings
  • Conference Submissions
  • 2020
Optimal transport distances (OT) have been widely used in recent work in Machine Learning as ways to compare probability distributions. These are costly to compute when the data lives in high…
Optimal transport mapping via input convex neural networks
This approach ensures that the transport mapping the authors find is optimal independently of how they initialize the neural networks, as the gradient of a convex function naturally models a discontinuous transport mapping.
Knothe-Rosenblatt transport for Unsupervised Domain Adaptation
This paper presents Knothe-Rosenblatt Domain Adaptation, an approach tailored to moderate-dimensional tabular problems which are hugely important in industrial applications and less well-served by the plethora of methods designed for image and language data.
Autoencoders with Spherical Sliced Fused
Relational regularized autoencoder (RAE) is a framework to learn the distribution of data by minimizing a reconstruction loss together with a relational regularization on the latent space. A recent…
Deep Diffusion-Invariant Wasserstein Distributional Classification
DeepWDC can substantially enhance the accuracy of several baseline deterministic classification methods and outperforms state-of-the-art methods on 2D and 3D data containing various types of perturbations.
Optimal Transport and Barycenters for Dendritic Measures
  • 2019
We introduce and study a variant of the Wasserstein distance on the space of probability measures, specially designed to deal with measures whose support has a dendritic, or treelike structure with a…
On Transportation of Mini-batches: A Hierarchical Approach
  • Khai Nguyen, Dang Nguyen, +5 authors Nhat Ho
  • Mathematics, Computer Science
  • 2021
This paper proposes a novel mini-batching scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), which achieves favorable performance in deep learning models such as deep generative models and deep domain adaptation, and yields either a lower quantitative result or a better qualitative result than m-OT.
Sliced $\mathcal{L}_2$ Distance for Colour Grading
A new method based on the $\mathcal{L}_2$ distance is proposed that maps one $N$-dimensional distribution to another, taking into account available information about correspondences, and is applied to colour transfer between two images that exhibit overlapped scenes.
A Review on Modern Computational Optimal Transport Methods with Applications in Biomedical Research
This review presents some cutting-edge computational optimal transport methods with a focus on the regularization-based methods and the projection-based methods, and discusses their real-world applications in biomedical research.
A Data Dependent Algorithm for Querying Earth Mover's Distance with Low Doubling Dimension
This paper proposes a novel "data-dependent" algorithm to avoid directly computing the EMD between A and B so as to solve this query problem more efficiently, and can save a large amount of running time compared with existing EMD algorithms.

References

Showing 1–10 of 41 references
Subspace Robust Wasserstein distances
This work proposes a "max-min" robust variant of the Wasserstein distance by considering the maximal possible distance that can be realized between two measures, assuming they can be projected orthogonally on a lower $k$-dimensional subspace.
Regularity as Regularization: Smooth and Strongly Convex Brenier Potentials in Optimal Transport
This work gives algorithms operating on two discrete measures that can recover nearly optimal transport maps with small distortion, or equivalently, nearly optimal Brenier potentials that are strongly convex and smooth.
Generalizing Point Embeddings using the Wasserstein Space of Elliptical Distributions
Wasserstein elliptical embeddings are presented, which consist of embedding objects as elliptical probability distributions, namely distributions whose densities have elliptical level sets, and are shown to be more intuitive and better behaved numerically than the alternative choice of Gaussian embeddings with the Kullback-Leibler divergence.
Large Scale Optimal Transport and Mapping Estimation
This paper proposes a stochastic dual approach to regularized OT, shows empirically that it scales better than a recent related approach when the number of samples is very large, and estimates a Monge map as a deep neural network learned by approximating the barycentric projection of the previously obtained OT plan.
Sinkhorn Distances: Lightspeed Computation of Optimal Transport
This work smooths the classic optimal transport problem with an entropic regularization term, and shows that the resulting optimum is also a distance which can be computed through Sinkhorn's matrix scaling algorithm at a speed that is several orders of magnitude faster than that of transport solvers.
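The matrix-scaling algorithm described in this summary is short enough to sketch directly. A minimal, illustrative NumPy implementation of entropy-regularized OT (function name, defaults, and return values are assumptions, not the paper's code):

```python
import numpy as np

def sinkhorn(a, b, C, reg=1.0, n_iters=200):
    """Entropy-regularized OT via Sinkhorn's matrix scaling.

    a, b: source/target probability vectors; C: cost matrix.
    Returns the transport plan P and the regularized cost <P, C>.
    """
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)         # scale to match the target marginal
        u = a / (K @ v)           # scale to match the source marginal
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)
```

Smaller `reg` approximates unregularized OT more closely but slows convergence, so more iterations are needed.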
Learning Generative Models with Sinkhorn Divergences
This paper presents the first tractable computational method to train large scale generative models using an optimal transport loss, and tackles three issues by relying on two key ideas: entropic smoothing, which turns the original OT loss into one that can be computed using Sinkhorn fixed point iterations; and algorithmic (automatic) differentiation of these iterations.
On the Bures–Wasserstein distance between positive definite matrices
The metric $d(A,B)=\left[\operatorname{tr} A+\operatorname{tr} B-2\operatorname{tr}\,(A^{1/2}BA^{1/2})^{1/2}\right]^{1/2}$ on the manifold of $n\times n$ positive definite matrices arises in various optimisation problems, in quantum…
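The displayed metric can be evaluated numerically by taking symmetric matrix square roots through an eigendecomposition. A minimal NumPy sketch (helper and function names are assumptions for illustration):

```python
import numpy as np

def sym_sqrt(M):
    """Square root of a symmetric positive semi-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def bures_wasserstein(A, B):
    """d(A,B) = [tr A + tr B - 2 tr (A^{1/2} B A^{1/2})^{1/2}]^{1/2}."""
    Ah = sym_sqrt(A)
    cross = sym_sqrt(Ah @ B @ Ah)
    val = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    return np.sqrt(max(val, 0.0))   # clip tiny negative round-off
```

This is also the 2-Wasserstein distance between centered Gaussians with covariances A and B, which is why it appears in optimal transport contexts.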
Regularized Discrete Optimal Transport
A generalization of discrete Optimal Transport that includes a regularity penalty and a relaxation of the bijectivity constraint is introduced, and an illustrative application of this discrete regularized transport to color transfer between images is shown.
Sliced Wasserstein Generative Models
  • Jiqing Wu, Z. Huang, +4 authors L. Gool
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
This paper proposes to approximate SWDs with a small number of parameterized orthogonal projections in an end-to-end deep learning fashion and designs two types of differentiable SWD blocks to equip modern generative frameworks---Auto-Encoders and Generative Adversarial Networks.
Generative Modeling Using the Sliced Wasserstein Distance
This work considers an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation, and finds its approach to be significantly more stable compared to even the improved Wasserstein GAN.