Corpus ID: 234336396

Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels

@article{Zhou2021LearningHD,
  title={Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels},
  author={Yufan Zhou and Changyou Chen and Jinhui Xu},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.04538}
}
Learning high-dimensional distributions is an important yet challenging problem in machine learning with applications in various domains. In this paper, we introduce new techniques to formulate the problem as solving a Fokker-Planck equation in a lower-dimensional latent space, aiming to mitigate challenges in the high-dimensional data space. Our proposed model consists of latent-distribution morphing, a generator, and a parameterized Fokker-Planck kernel function. One fascinating property of our model… 
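The abstract above names three components: latent-distribution morphing, a generator, and a parameterized Fokker-Planck kernel. As a rough, hypothetical sketch of that general recipe (not the paper's actual parameterization), latent samples can be morphed with a Langevin-type discretization of Fokker-Planck dynamics and then decoded by a generator; `score_net`, `G`, the step size, and the plain Langevin update below are all illustrative assumptions.

```python
import torch

def morph_latents(z, score_net, n_steps=50, step_size=1e-2):
    """Evolve latent samples with unadjusted Langevin dynamics, a standard
    discretization of Fokker-Planck dynamics toward a target latent law.
    Hypothetical sketch; not the paper's exact latent-morphing procedure."""
    for _ in range(n_steps):
        noise = torch.randn_like(z)
        z = z + step_size * score_net(z) + (2 * step_size) ** 0.5 * noise
    return z

# Illustrative usage: morph Gaussian noise in the latent space, then decode.
# z0 = torch.randn(64, latent_dim)
# x  = G(morph_latents(z0, score_net))
```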
Wavelet Transform-assisted Adaptive Generative Modeling for Colorization
TLDR
A novel scheme that exploits a score-based generative model in the wavelet domain to address colorization robustness and diversity, taking advantage of the multi-scale and multi-channel representation provided by the wavelet transform.
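For context on the "multi-scale and multi-channel representation" mentioned in the summary, a one-level 2-D discrete wavelet transform splits an image into an approximation band and three detail bands. The snippet below is a minimal illustration using PyWavelets; the Haar wavelet and the random stand-in image are assumptions, not details from the cited paper.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)                 # stand-in for a grayscale image
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')      # approximation + 3 detail (channel) bands
print(cA.shape, cH.shape, cV.shape, cD.shape)  # each band is (128, 128)
```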

References

SHOWING 1-10 OF 53 REFERENCES
KernelNet: A Data-Dependent Kernel Parameterization for Deep Generative Modeling
TLDR
This paper proposes a framework to construct and learn a data-dependent kernel based on random features and implicit spectral distributions parameterized by deep neural networks, which can be applied to deep generative modeling in various scenarios.
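As a simplified sketch of a random-feature kernel whose spectral distribution is an implicit distribution parameterized by a neural network: the network sizes, the cosine feature map, and the omission of the data-dependent component of the paper's construction are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ImplicitSpectralKernel(nn.Module):
    """Random-feature kernel whose frequencies are the pushforward of Gaussian
    noise through a small network (an implicit spectral distribution).
    Simplified sketch; the data-dependent part of KernelNet is omitted."""

    def __init__(self, dim, n_features=256):
        super().__init__()
        self.dim = dim
        self.n_features = n_features
        self.sampler = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x, y):
        w = self.sampler(torch.randn(self.n_features, self.dim))  # learned frequencies
        b = 2 * torch.pi * torch.rand(self.n_features)            # random phases
        feat = lambda t: (2.0 / self.n_features) ** 0.5 * torch.cos(t @ w.T + b)
        return feat(x) @ feat(y).T                                # K[i, j] ~= k(x_i, y_j)
```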
Consistency Regularization for Generative Adversarial Networks
TLDR
This work proposes a simple, effective training stabilizer based on the notion of consistency regularization, which improves state-of-the-art FID scores for conditional generation and achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA.
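A minimal sketch of such a consistency regularizer: the discriminator is penalized for changing its output under a semantics-preserving augmentation of real images. The augmentation function and the weighting below are assumptions, not the paper's exact settings.

```python
import torch

def consistency_penalty(D, x_real, augment, weight=10.0):
    """Penalize discrepancy between the discriminator's outputs on an image
    and on an augmented copy of it (e.g., a random flip or small translation)."""
    d_orig = D(x_real)
    d_aug = D(augment(x_real))
    return weight * ((d_orig - d_aug) ** 2).mean()
```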
Learning Generative Models with Sinkhorn Divergences
TLDR
This paper presents the first tractable computational method to train large-scale generative models using an optimal transport loss, and tackles three issues by relying on two key ideas: entropic smoothing, which turns the original OT loss into one that can be computed using Sinkhorn fixed-point iterations; and algorithmic (automatic) differentiation of these iterations.
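A minimal NumPy sketch of the Sinkhorn fixed-point iteration behind an entropically smoothed OT loss; the uniform marginals, fixed iteration count, and regularization strength are assumptions, and in the generative-modeling setting the cost matrix is built from generated and real samples and the loop is differentiated through.

```python
import numpy as np

def sinkhorn(C, eps=0.1, n_iters=100):
    """Entropically regularized OT cost between uniform marginals, given cost matrix C."""
    n, m = C.shape
    K = np.exp(-C / eps)                      # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m     # uniform marginal weights
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                  # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = np.diag(u) @ K @ np.diag(v)           # approximate transport plan
    return np.sum(P * C)                      # smoothed OT cost
```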
Generative Modeling by Estimating Gradients of the Data Distribution
TLDR
A new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching, which allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons.
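A sketch of the sampling procedure described above, in its annealed form with a sequence of decreasing noise levels; the noise-conditional interface `score_net(x, sigma)`, the step-size rule, and the initialization follow the general recipe but are assumptions rather than the paper's exact configuration.

```python
import torch

def annealed_langevin(score_net, shape, sigmas, eps=2e-5, steps_per_level=100):
    """Annealed Langevin dynamics: run Langevin updates at decreasing noise
    levels, using a noise-conditional score network score_net(x, sigma)."""
    x = torch.rand(shape)                              # arbitrary initialization
    for sigma in sigmas:                               # sigmas sorted high -> low
        alpha = eps * (sigma / sigmas[-1]) ** 2        # step size per noise level
        for _ in range(steps_per_level):
            noise = torch.randn_like(x)
            x = x + 0.5 * alpha * score_net(x, sigma) + alpha ** 0.5 * noise
    return x
```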
Refining Deep Generative Models via Wasserstein Gradient Flows
TLDR
Empirical results demonstrate that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.
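A rough sketch of refining generator samples by following a discretized gradient flow driven by a critic; using the discriminator output directly and the plain gradient-plus-noise update are simplifying assumptions, not the DGflow objective itself.

```python
import torch

def refine_samples(x, D, n_steps=25, step_size=0.01, noise_scale=0.01):
    """Push generated samples toward higher critic scores via a noisy,
    discretized gradient flow (simplified stand-in for sample refinement)."""
    x = x.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(D(x).sum(), x)[0]   # direction of the flow
        with torch.no_grad():
            x = x + step_size * grad + noise_scale * torch.randn_like(x)
        x.requires_grad_(True)
    return x.detach()
```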
KALE: When Energy-Based Learning Meets Adversarial Training
TLDR
This work uses Legendre duality to provide a variational lower bound for the Kullback-Leibler divergence, shows that the resulting estimator, the KL Approximate Lower-bound Estimate (KALE), provides a maximum likelihood estimate (MLE), and extends the procedure to adversarial training.
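For background, the Legendre/Fenchel-dual representation of the KL divergence that such a variational lower bound builds on can be written as (a standard identity, not the paper's exact regularized objective):

```latex
\mathrm{KL}(P \,\|\, Q) \;=\; \sup_{f} \; \mathbb{E}_{x \sim P}\big[f(x)\big] \;-\; \mathbb{E}_{x \sim Q}\big[e^{f(x)}\big] \;+\; 1 ,
```

so restricting f to a neural-network class yields a tractable lower bound that can be estimated from samples and maximized.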
Implicit Kernel Learning
TLDR
This paper explores learning the spectral distribution of a kernel via implicit generative models parameterized by deep neural networks, termed Implicit Kernel Learning (IKL), which is simple to train, with inference performed by sampling random Fourier features.
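As background for "sampling random Fourier features": with frequencies drawn from a kernel's spectral distribution, the kernel is approximated by an inner product of cosine features. Below is a minimal NumPy sketch for the Gaussian-kernel special case; IKL replaces the fixed Gaussian sampler with a learned implicit one (the function name and defaults here are assumptions).

```python
import numpy as np

def rff_kernel(X, Y, n_features=1000, bandwidth=1.0, seed=0):
    """Random Fourier feature approximation of the Gaussian (RBF) kernel
    k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / bandwidth, size=(n_features, d))  # spectral samples
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)             # random phases
    phi = lambda Z: np.sqrt(2.0 / n_features) * np.cos(Z @ W.T + b)
    return phi(X) @ phi(Y).T                                      # K[i, j] ~= k(X[i], Y[j])
```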
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
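The penalty described above is typically enforced at random interpolations between real and generated samples, pushing the critic's input-gradient norm toward 1. A minimal PyTorch sketch follows; the coefficient and interpolation scheme mirror the commonly used WGAN-GP recipe and are shown here as an illustration.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """Penalize deviations of the critic's input-gradient norm from 1,
    evaluated at random interpolations of real and generated samples."""
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(scores.sum(), interp, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```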
Training Deep Energy-Based Models with f-Divergence Minimization
TLDR
This paper proposes a general variational framework termed f-EBM to train EBMs using any desired f-divergence, introduces a corresponding optimization algorithm, and proves its local convergence property using non-linear dynamical systems theory.
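The variational framework referred to above builds on the standard Fenchel-conjugate lower bound for f-divergences, of which the KL form shown earlier is the special case f(t) = t log t (stated as general background; the paper's specific objective and optimization algorithm for energy-based models involve further ingredients):

```latex
D_f(P \,\|\, Q) \;\ge\; \sup_{T} \; \mathbb{E}_{x \sim P}\big[T(x)\big] \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}\big(T(x)\big)\big],
```

where f* is the convex conjugate of f.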
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G…
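The adversarial game set up between the two models is the well-known minimax objective, reproduced here for reference:

```latex
\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big].
```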