# Potential Flow Generator with $L_2$ Optimal Transport Regularity for Generative Models

@article{Yang2020PotentialFG,
  title={Potential Flow Generator with $L_2$ Optimal Transport Regularity for Generative Models},
  author={Liu Yang and George Em Karniadakis},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2020},
  volume={PP}
}

We propose a potential flow generator with L₂ optimal transport regularity, which can be easily integrated into a wide range of generative models, including different versions of generative adversarial networks (GANs) and normalizing flow models. With only a slight augmentation to the original generator loss functions, our generator not only tries to transport the input distribution to the target one but also aims to find the transport map with minimum L₂ transport cost. We show the effectiveness of our…
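The loss augmentation the abstract describes can be sketched as a simple penalty added to any generator loss. This is a minimal illustration, not the paper's implementation; the names `l2_transport_cost`, `regularized_generator_loss`, and the weight `lam` are hypothetical:

```python
import numpy as np

def l2_transport_cost(z, g_z):
    """Monte Carlo estimate of the squared-L2 transport cost E||G(z) - z||^2,
    where z are input samples and g_z = G(z) are the generator outputs."""
    return float(np.mean(np.sum((g_z - z) ** 2, axis=1)))

def regularized_generator_loss(base_loss, z, g_z, lam=0.1):
    """Original generator loss plus the L2 transport penalty
    (lam is an illustrative regularization weight)."""
    return base_loss + lam * l2_transport_cost(z, g_z)
```

Note that the identity map incurs zero transport cost, so among all maps that match the target distribution the penalty biases training toward the one that moves mass the least.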

## 19 Citations

Optimal transport mapping via input convex neural networks

- Computer Science, Mathematics · ICML
- 2020

This approach ensures that the transport mapping found is optimal regardless of how the neural networks are initialized, since the gradient of a convex function naturally models a discontinuous transport mapping.

Sparse Flows: Pruning Continuous-depth Models

- Computer Science · arXiv
- 2021

This work designs a framework to decipher the internal dynamics of these continuous depth models by pruning their network architectures, and empirical results suggest that pruning improves generalization for neural ODEs in generative modeling.

Jacobian Determinant of Normalizing Flows

- Computer Science, Mathematics · arXiv
- 2021

It is shown that the Jacobian determinant mapping is unique for the given distributions, hence the likelihood objective of flows has a unique global optimum.

How to train your neural ODE

- Mathematics, Computer Science · ICML
- 2020

This paper introduces a theoretically-grounded combination of both optimal transport and stability regularizations which encourage neural ODEs to prefer simpler dynamics out of all the dynamics that solve a problem well, resulting in considerably decreasing wall-clock time without loss in performance.
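The preference for "simpler dynamics" above is typically expressed as an optimal-transport-style kinetic-energy penalty on the ODE vector field. The sketch below (function name and sampling scheme are illustrative, not from the cited paper) estimates such a penalty from sampled trajectory states:

```python
import numpy as np

def kinetic_energy_penalty(f, states, times):
    """Monte Carlo estimate of the kinetic energy E||f(z, t)||^2 along
    sampled trajectory states: an OT-style regularizer that favors
    vector fields moving samples along short, straight paths."""
    vals = [np.sum(f(z, t) ** 2) for z, t in zip(states, times)]
    return float(np.mean(vals))

# Illustrative use with a linear vector field f(z, t) = -z:
f = lambda z, t: -z
penalty = kinetic_energy_penalty(f, [np.array([1.0, 2.0])], [0.0])
```

Adding this term to the training objective leaves any dynamics that solve the task attainable, but among them the optimizer prefers the one with the least kinetic energy, which in practice reduces the number of ODE solver steps needed.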

Augmented KRnet for density estimation and approximation

- Mathematics, Computer Science · arXiv
- 2021

The augmented KRnet is proposed, including both discrete and continuous models; it can be reformulated as the discretization of a neural ODE in which exact invertibility is preserved, so that the adjoint method can be formulated with respect to the discretized ODE to obtain the exact gradient.

A Neural Network Approach for High-Dimensional Optimal Control

- 2021

We propose a neural network approach for solving high-dimensional optimal control problems arising in real-time applications. Our approach yields controls in a feedback form and can therefore handle…

A machine learning framework for solving high-dimensional mean field game and mean field control problems

- Computer Science, Medicine · Proceedings of the National Academy of Sciences
- 2020

This paper provides a flexible machine learning framework for the numerical solution of potential MFG and MFC models by combining Lagrangian and Eulerian viewpoints and leveraging recent advances from machine learning.

A Neural Network Approach for Real-Time High-Dimensional Optimal Control

- Mathematics
- 2021

We propose a neural network approach for solving high-dimensional optimal control problems arising in real-time applications. Our approach yields controls in a feedback form, where the policy…

Scalable Computation of Monge Maps with General Costs

- Computer Science, Mathematics · arXiv
- 2021

This paper presents a scalable algorithm based on a weak form of the optimal transport problem, thus it only requires samples from the marginals instead of their analytic expressions, and can accommodate optimal transport between two distributions with different dimensions.

An Introduction to Deep Generative Modeling

- Computer Science · arXiv
- 2021

An introduction to DGMs is provided, along with a concise mathematical framework for modeling the three most popular approaches: normalizing flows (NF), variational autoencoders (VAE), and generative adversarial networks (GAN); their advantages and disadvantages are illustrated using numerical experiments.

## References

Showing 1-10 of 29 references.

Adversarial Computation of Optimal Transport Maps

- Computer Science, Mathematics · arXiv
- 2019

This work proposes a generative adversarial model in which the discriminator's objective is the 2-Wasserstein metric $W_2$, and shows that during training the generator follows the $W_2$-geodesic between the initial and target distributions, reproducing an optimal map at the end of training.

Scalable Unbalanced Optimal Transport using Generative Adversarial Networks

- Computer Science, Mathematics · ICLR
- 2019

This paper presents a scalable method for unbalanced optimal transport (OT) based on the generative-adversarial framework, and proposes an algorithm for solving this problem based on stochastic alternating gradient updates, similar in practice to GANs.

Glow: Generative Flow with Invertible 1x1 Convolutions

- Computer Science, Mathematics · NeurIPS
- 2018

Glow, a simple type of generative flow using an invertible 1x1 convolution, is proposed, demonstrating that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images.

Improved Training of Wasserstein GANs

- Computer Science, Mathematics · NIPS
- 2017

This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.

Improving GANs Using Optimal Transport

- Computer Science, Mathematics · ICLR
- 2018

Optimal Transport GAN (OT-GAN) is presented: a variant of generative adversarial nets that minimizes a new metric measuring the distance between the generator distribution and the data distribution, resulting in a highly discriminative distance function with unbiased mini-batch gradients.

Generative Modeling Using the Sliced Wasserstein Distance

- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018

This work considers an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation, and finds this approach to be significantly more stable than even the improved Wasserstein GAN.

Generative Adversarial Nets

- Computer Science · NIPS
- 2014

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…

Image-to-Image Translation with Conditional Adversarial Networks

- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

DualGAN: Unsupervised Dual Learning for Image-to-Image Translation

- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017

A novel dual-GAN mechanism is developed, which enables image translators to be trained from two sets of unlabeled images from two domains, and can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.

Wasserstein Generative Adversarial Networks

- Computer Science · ICML
- 2017

This work introduces a new algorithm named WGAN, an alternative to traditional GAN training that can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches.