Corpus ID: 235390957

Score Matching Model for Unbounded Data Score

@article{Kim2021ScoreMM,
  title={Score Matching Model for Unbounded Data Score},
  author={Dongjun Kim and Seungjae Shin and Kyungwoo Song and Wanmo Kang and Il-Chul Moon},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.05527}
}
Recent advances in diffusion models incorporate the Stochastic Differential Equation (SDE), which brings state-of-the-art performance on image generation tasks. This paper improves such diffusion models by analyzing the model at zero diffusion time. In real datasets, the score function diverges as the diffusion time (t) decreases to zero, and this observation leads to the argument that score estimation fails at t = 0 with any neural network structure. Subsequently, we introduce…
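To make the divergence concrete: for the Gaussian perturbation kernels used in score-based SDEs, the perturbed density around a single data point x_0 is p_t(x) = N(x; x_0, σ_t² I) (a minimal illustration with assumed notation σ_t for the perturbation scale, not an excerpt from the paper), so

  \nabla_x \log p_t(x) = -\frac{x - x_0}{\sigma_t^2},

whose magnitude grows like 1/σ_t² as σ_t → 0 with t. A score network s_θ(x, t) with bounded outputs therefore cannot match its regression target near t = 0, which is the failure mode the paper analyzes.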
Score-based diffusion models for accelerated MRI
  • Hyungjin Chung, Jong-Chul Ye
  • Computer Science, Engineering
  • ArXiv
  • 2021
TLDR
This work introduces a way to sample data from a conditional distribution given the measurements, such that the model can be readily used for solving inverse problems in imaging, especially for accelerated MRI.
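Conditional sampling of this kind is commonly built on Bayes' rule applied to scores (a generic decomposition given here for context, not necessarily the paper's exact formulation):

  \nabla_x \log p_t(x \mid y) = \nabla_x \log p_t(x) + \nabla_x \log p_t(y \mid x),

so an unconditionally trained score network supplies the first term, and the measurement term comes from the known imaging forward model (e.g., the subsampled Fourier operator in accelerated MRI).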
Score-based Generative Modeling in Latent Space
TLDR
The Latent Score-based Generative Model (LSGM) is proposed, a novel approach that trains SGMs in a latent space, relying on the variational autoencoder framework, and achieves state-of-the-art likelihood on the binarized OMNIGLOT dataset.
Densely connected normalizing flows
TLDR
This work incrementally pads intermediate representations with noise in order to express intra-unit affine coupling as a fusion of a densely connected block and Nyström self-attention, and reveals state-of-the-art density estimation among all generative models under moderate computing budgets.
Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training
  • Minguk Kang, Woohyeon Shim, Minsu Cho, Jaesik Park
  • Computer Science
  • ArXiv
  • 2021
TLDR
This paper identifies that gradient exploding in the classifier can cause an undesirable collapse in early training, shows that projecting input vectors onto a unit hypersphere can resolve the problem, and proposes the Data-to-Data Cross-Entropy loss (D2D-CE) to exploit relational information in the class-labeled dataset.
Hierarchical Transformers Are More Efficient Language Models
TLDR
This work creates Hourglass, a hierarchical Transformer language model that improves language-modeling efficiency on the widely studied enwik8 benchmark and sets a new state of the art for Transformer models on the ImageNet32 generation task.
STransGAN: An Empirical Study on Transformer in GANs
  • Rui Xu, Xiangyu Xu, Kai Chen, Bolei Zhou, Chen Change Loy
  • Computer Science
  • ArXiv
  • 2021
TLDR
This study leads to a new design of Transformers in GANs: a convolutional neural network (CNN)-free generator termed STrans-G, which achieves competitive results in both unconditional and conditional image generation.

References

SHOWING 1-10 OF 81 REFERENCES
A Variational Perspective on Diffusion-Based Generative Models and Score Matching
TLDR
This work approaches the (continuous-time) generative diffusion directly and derives a variational framework for likelihood estimation, which includes continuous-time normalizing flows as a special case and can be seen as an infinitely deep variational autoencoder.
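For background, the continuous-time diffusion being analyzed is conventionally written as a forward SDE together with its reverse-time counterpart (standard results from the score-based SDE literature, not a summary of this paper's derivation):

  dx = f(x, t)\,dt + g(t)\,dw,
  dx = \bigl[f(x, t) - g(t)^2\,\nabla_x \log p_t(x)\bigr]\,dt + g(t)\,d\bar{w},

where the second equation runs backward in time and sampling replaces the intractable score \nabla_x \log p_t with a learned approximation.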
Gotta Go Fast When Generating Data with Score-Based Models
TLDR
This work carefully devises an SDE solver with adaptive step sizes tailored, piece by piece, to score-based generative models; it generates data 2 to 10 times faster than Euler-Maruyama (EM) while achieving better or equal sample quality.
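A generic way to obtain adaptive step sizes for such a solver is step doubling against an error tolerance, sketched below in Python. This is an illustration of the idea only; the function names (drift, diffusion) and the error control are assumptions, not the solver proposed in the paper.

  import numpy as np

  def adaptive_em_step(x, t, dt, drift, diffusion, rng, atol=1e-2, rtol=1e-2):
      """One adaptive Euler-Maruyama step for dx = drift dt + diffusion dW.

      Compares a full step against two half steps driven by the same
      Brownian increments and shrinks dt when they disagree too much.
      Returns the accepted state, the new time, and a suggested next dt.
      (Illustrative sketch only; not the solver from the paper.)
      """
      while True:
          dw1 = rng.normal(size=x.shape) * np.sqrt(dt / 2)
          dw2 = rng.normal(size=x.shape) * np.sqrt(dt / 2)
          # One full EM step using the combined Brownian increment.
          x_full = x + drift(x, t) * dt + diffusion(t) * (dw1 + dw2)
          # Two half steps with the same noise: a higher-accuracy reference.
          x_half = x + drift(x, t) * (dt / 2) + diffusion(t) * dw1
          x_half = (x_half + drift(x_half, t + dt / 2) * (dt / 2)
                    + diffusion(t + dt / 2) * dw2)
          # Mixed absolute/relative local error estimate.
          tol = atol + rtol * np.maximum(np.abs(x_full), np.abs(x_half))
          err = np.sqrt(np.mean(((x_full - x_half) / tol) ** 2))
          if err <= 1.0:
              return x_half, t + dt, dt * min(2.0, 0.9 / max(err, 1e-8))
          dt *= 0.5  # reject and retry with a smaller step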
Variational Diffusion Models
TLDR
A family of diffusion-based generative models that obtains state-of-the-art likelihoods on standard image density estimation benchmarks is introduced; it is also shown how to turn the model into a bits-back compression scheme, demonstrating lossless compression rates close to the theoretical optimum.
Improved Denoising Diffusion Probabilistic Models
TLDR
This work shows that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods while maintaining high sample quality, and finds that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes with a negligible difference in sample quality.
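The learned variances mentioned here are parameterized in that paper as a log-space interpolation between the two analytic extremes of the reverse-process variance (stated from the paper's description; \tilde{\beta}_t denotes the posterior variance bound):

  \Sigma_\theta(x_t, t) = \exp\bigl(v \log \beta_t + (1 - v) \log \tilde{\beta}_t\bigr),

with v a per-dimension network output, which keeps the variance inside a range where training stays stable.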
Generative Modeling by Estimating Gradients of the Data Distribution
TLDR
A new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching, which allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons.
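The Langevin sampling referred to here follows the standard unadjusted update; a minimal Python sketch is given below, where score_fn is an assumed name standing in for the trained score network (annealing over noise levels, as used in the paper, is omitted for brevity).

  import numpy as np

  def langevin_sample(score_fn, x0, step_size=1e-4, n_steps=1000, rng=None):
      """Sample via x <- x + (eps/2) * score(x) + sqrt(eps) * z, z ~ N(0, I).

      With a small step size and many iterations, the iterates approach
      the distribution whose score `score_fn` estimates.
      """
      rng = rng or np.random.default_rng()
      x = x0.copy()
      for _ in range(n_steps):
          z = rng.normal(size=x.shape)
          x = x + 0.5 * step_size * score_fn(x) + np.sqrt(step_size) * z
      return x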
Cascaded Diffusion Models for High Fidelity Image Generation
TLDR
It is shown that conditioning augmentation prevents compounding error during sampling in a cascaded model, helping to train cascading pipelines that achieve FID scores of 1.48 at 64×64, 3.52 at 128×128, and 4.88 at 256×256 resolution, outperforming BigGAN-deep.
Score-based Generative Modeling in Latent Space
TLDR
The Latent Score-based Generative Model (LSGM) is proposed, a novel approach that trains SGMs in a latent space, relying on the variational autoencoder framework, and achieves state-of-the-art likelihood on the binarized OMNIGLOT dataset.
Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design
TLDR
Flow++ is proposed, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks and has begun to close the significant performance gap that has so far existed between autoregressive and flow-based models.
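The variational dequantization in the title replaces uniform dequantization noise with a learned distribution q(u | x), yielding the standard bound (notation assumed)

  \log p_{\mathrm{model}}(x) \ge \mathbb{E}_{u \sim q(\cdot \mid x)}\bigl[\log p_{\mathrm{model}}(x + u) - \log q(u \mid x)\bigr],

which recovers uniform dequantization as the special case of a uniform q and tightens as q approaches the model's posterior over the noise.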
Sliced Score Matching: A Scalable Approach to Density and Score Estimation
TLDR
It is demonstrated that sliced score matching can learn deep energy-based models effectively, and can produce accurate score estimates for applications such as variational inference with implicit distributions and training Wasserstein Auto-Encoders.
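For reference, the sliced score matching objective projects scores onto random directions v, so only Jacobian-vector products of the model s_\theta are required (standard form of the objective; v is typically standard normal or Rademacher):

  J(\theta) = \mathbb{E}_{p(x)}\,\mathbb{E}_{p(v)}\Bigl[v^\top \nabla_x s_\theta(x)\,v + \tfrac{1}{2}\bigl(v^\top s_\theta(x)\bigr)^2\Bigr].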
Diffusion Models Beat GANs on Image Synthesis
TLDR
It is shown that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models, and classifier guidance combines well with upsampling diffusion models, further improving FID to 3.85 on ImageNet 512×512.
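The classifier guidance in question perturbs the score with a scaled gradient of a noise-aware classifier p_\phi (s below is the guidance scale):

  \nabla_x \log p_t(x \mid y) \approx \nabla_x \log p_t(x) + s\,\nabla_x \log p_\phi(y \mid x),

so the same diffusion sampler can be steered toward a class label y without retraining the generative model.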