Denoising Likelihood Score Matching for Conditional Score-based Data Generation

@article{Chao2022DenoisingLS,
  title={Denoising Likelihood Score Matching for Conditional Score-based Data Generation},
  author={Chen-Hao Chao and Wei-Fang Sun and Bo Wun Cheng and Yi-Chen Lo and Chia-Che Chang and Yu-Lun Liu and Yu-Lin Chang and Chia-Ping Chen and Chun-Yi Lee},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.14206}
}
Many existing conditional score-based data generation methods utilize Bayes’ theorem to decompose the gradients of a log posterior density into a mixture of scores. These methods facilitate the training procedure of conditional score models, as a mixture of scores can be separately estimated using a score model and a classifier. However, our analysis indicates that the training objectives for the classifier in these methods may lead to a serious score mismatch issue, which corresponds to the… 
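
The decomposition referenced in the abstract is the standard Bayes' theorem split of the posterior score: since log p(y) does not depend on x, its gradient vanishes, leaving an unconditional score plus a classifier gradient, each estimable by a separate network. A sketch of the identity in standard notation (assumed here, not quoted from the paper):

\nabla_x \log p(x \mid y) = \nabla_x \log \frac{p(x)\, p(y \mid x)}{p(y)} = \nabla_x \log p(x) + \nabla_x \log p(y \mid x)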

Citations

Enhancing Diffusion-Based Image Synthesis with Robust Classifier Guidance

TLDR: In experiments on the highly challenging and diverse ImageNet dataset, the scheme yields significantly more intelligible intermediate gradients, better alignment with theoretical findings, and improved generation results under several evaluation metrics.

Diffusion Models in Vision: A Survey

TLDR: Introduces a multi-perspective categorization of diffusion models applied in computer vision, relating them to other families of generative models, including variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows.

Quasi-Conservative Score-based Generative Models

Existing Score-based Generative Models (SGMs) can be categorized into constrained SGMs (CSGMs) or unconstrained SGMs (USGMs) according to their parameterization approaches. CSGMs model the…
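
For context, the constrained parameterization this snippet begins to describe is usually written as the gradient of a scalar energy network, which makes the modeled score field conservative by construction, while unconstrained SGMs output the score directly through a vector-valued network. A sketch, where E_\theta and \mathrm{NN}_\theta are assumed placeholder names:

s_\theta(x) = -\nabla_x E_\theta(x) \quad \text{(CSGM)} \qquad s_\theta(x) = \mathrm{NN}_\theta(x) \quad \text{(USGM)}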

References

Showing 1-10 of 23 references

Sliced Score Matching: A Scalable Approach to Density and Score Estimation

TLDR: Demonstrates that sliced score matching can learn deep energy-based models effectively and can produce accurate score estimates for applications such as variational inference with implicit distributions and training Wasserstein Auto-Encoders.
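
For reference, the sliced score matching objective projects the score onto random directions v, so the expensive Jacobian trace of exact score matching is replaced by a cheap directional derivative. A sketch of the standard form, which may differ cosmetically from the paper's notation:

J(\theta) = \mathbb{E}_{v \sim p_v}\, \mathbb{E}_{x \sim p_{\text{data}}} \left[ v^\top \nabla_x s_\theta(x)\, v + \tfrac{1}{2} \big( v^\top s_\theta(x) \big)^2 \right]

Here s_\theta is the score network and p_v is a simple projection distribution such as a standard Gaussian.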

Classification Accuracy Score for Conditional Generative Models

TLDR: Uses class-conditional generative models from a number of model classes, including variational autoencoders, autoregressive models, and generative adversarial networks (GANs), to infer the class labels of real data, revealing surprising results not identified by traditional metrics.
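
As a rough formalization of the procedure (our notation, not the paper's): fit a classifier f on labeled samples drawn from the conditional generative model, then report its accuracy on the real test set:

\mathrm{CAS} = \mathbb{E}_{(x, y) \sim p_{\text{test}}} \big[ \mathbf{1}\{ f_{\hat{\theta}}(x) = y \} \big], \qquad \hat{\theta} \text{ fit on labeled samples } x \sim p_{\text{model}}(x \mid y)

A model that drops or distorts class-relevant structure yields a classifier that transfers poorly to real data, which is what the score exposes.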

Generative Modeling by Estimating Gradients of the Data Distribution

TLDR: Proposes a new generative model in which samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching; the approach allows flexible model architectures, requires neither sampling during training nor adversarial methods, and provides a learning objective that can be used for principled model comparisons.
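
A minimal sketch of the Langevin sampler this summary describes, assuming a pretrained score_fn(x) that approximates \nabla_x \log p(x). The function name, step size, and step count are illustrative, and the paper's full method additionally anneals the noise scale, which is omitted here:

import numpy as np

def langevin_sample(score_fn, x_init, step_size=1e-2, n_steps=1000, seed=0):
    # Plain Langevin dynamics: drift along the estimated score,
    # plus Gaussian noise scaled to match the step size.
    rng = np.random.default_rng(seed)
    x = np.array(x_init, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step_size * score_fn(x) + np.sqrt(step_size) * noise
    return x

# Toy usage: a standard normal target has the exact score -x.
sample = langevin_sample(score_fn=lambda x: -x, x_init=np.zeros(2))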

Estimation of Non-Normalized Statistical Models by Score Matching

TLDR: While estimating the gradient of the log-density is, in principle, a very difficult non-parametric problem, this work proves a surprising result: the objective simplifies to a sample average of a sum of derivatives of the log-density given by the model.
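
The "simple formula" alluded to is the score matching objective itself; integration by parts removes the dependence on the unknown data score, leaving only derivatives of the model's log-density averaged over samples:

J(\theta) = \mathbb{E}_{x \sim p_{\text{data}}} \left[ \operatorname{tr}\!\big( \nabla_x s_\theta(x) \big) + \tfrac{1}{2} \lVert s_\theta(x) \rVert_2^2 \right], \qquad s_\theta(x) = \nabla_x \log p_\theta(x)

Crucially, the normalizing constant of p_\theta cancels under the gradient, which is what makes the estimator usable for non-normalized models.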

Improved Precision and Recall Metric for Assessing Generative Models

TLDR: Presents an evaluation metric that can separately and reliably measure both the quality and coverage of the samples produced by a generative model, and extends it to estimate the perceptual quality of individual samples and to study latent space interpolations.
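
For reference, the improved precision metric approximates the real manifold as a union of hyperspheres around real feature vectors, with radii set by the k-th nearest neighbor distance. A sketch in our notation (R real features, G generated features, \mathrm{NND}_k(r) the distance from r to its k-th nearest neighbor in R):

\text{precision} = \frac{1}{|G|} \sum_{g \in G} \mathbf{1}\big[\, \exists\, r \in R : \lVert g - r \rVert_2 \le \mathrm{NND}_k(r) \,\big]

Recall is defined symmetrically, with the roles of R and G exchanged.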

A Connection Between Score Matching and Denoising Autoencoders

TLDR: Defines a proper probabilistic model for the denoising autoencoder technique, which makes it possible in principle to sample from such models or rank examples by their energy, and suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives.
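
The denoising variant suggested here replaces the intractable data score with the score of a known corruption kernel q_\sigma; in standard notation (assumed, not quoted):

J_{\mathrm{DSM}}(\theta) = \mathbb{E}_{x \sim p_{\text{data}},\; \tilde{x} \sim q_\sigma(\tilde{x} \mid x)} \left[ \tfrac{1}{2} \big\lVert s_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \big\rVert_2^2 \right]

For Gaussian corruption \tilde{x} = x + \sigma \epsilon, the target score is simply (x - \tilde{x}) / \sigma^2, so the model learns to point from noisy samples back toward the data, and no second derivatives are required.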

Score-Based Generative Modeling through Stochastic Differential Equations

TLDR: Presents a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise.
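
The forward and reverse processes described in this summary are usually written as the following SDE pair (standard notation from the score-SDE literature):

dx = f(x, t)\, dt + g(t)\, dw \qquad \text{(forward)}
dx = \big[ f(x, t) - g(t)^2\, \nabla_x \log p_t(x) \big]\, dt + g(t)\, d\bar{w} \qquad \text{(reverse-time)}

Here p_t is the marginal density at time t and \bar{w} is a reverse-time Wiener process; a time-dependent score model approximating \nabla_x \log p_t(x) is all that is needed to simulate the reverse SDE.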

Improved Techniques for Training Score-Based Generative Models

TLDR: Provides a new theoretical analysis of learning and sampling from score models in high-dimensional spaces, explaining existing failure modes and motivating new solutions that generalize across datasets.

Reliable Fidelity and Diversity Metrics for Generative Models

TLDR: Shows that even the latest versions of the precision and recall metrics are not yet reliable, and that the proposed density and coverage metrics provide more interpretable and reliable signals for practitioners than the existing metrics.
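
For reference, the density and coverage metrics proposed in this work can be sketched as follows (our notation, with B(r, \mathrm{NND}_k(r)) the hypersphere around real feature r whose radius is the distance to its k-th nearest real neighbor):

\text{density} = \frac{1}{k\, |G|} \sum_{g \in G} \sum_{r \in R} \mathbf{1}\big[\, g \in B(r, \mathrm{NND}_k(r)) \,\big]
\text{coverage} = \frac{1}{|R|} \sum_{r \in R} \mathbf{1}\big[\, \exists\, g \in G : g \in B(r, \mathrm{NND}_k(r)) \,\big]

Unlike precision, density counts how many real-sample neighborhoods contain each generated sample, so a few outlier neighborhoods cannot inflate the score.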

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

TLDR: Proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions, and introduces the Fréchet Inception Distance (FID), which captures the similarity of generated images to real ones better than the Inception Score.
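
FID compares Gaussian fits to the Inception features of real and generated images; with (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) the feature means and covariances, the standard formula is:

\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\big( \Sigma_r + \Sigma_g - 2\, (\Sigma_r \Sigma_g)^{1/2} \big)

Lower is better; the expression is the squared 2-Wasserstein (Fréchet) distance between the two Gaussian fits.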