Corpus ID: 246442182

Progressive Distillation for Fast Sampling of Diffusion Models

Tim Salimans and Jonathan Ho
Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second… 
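The core idea of progressive distillation, repeatedly halving the number of sampling steps by training a student to match two teacher steps with one, can be sketched on a toy 1-D problem. Everything below is illustrative: the linear `teacher_step` stands in for a DDIM step of a trained diffusion model, and the per-timestep least-squares fits stand in for training a neural student network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "teacher sampler": one deterministic update x_t -> x_{t-1}.
# (A stand-in for a DDIM step of a trained diffusion model.)
def teacher_step(x, t, n_steps):
    # contract toward the data mean (0.5 here) as t decreases
    alpha = 1.0 - 1.0 / n_steps
    return alpha * x + (1 - alpha) * 0.5

def distill_round(teacher, n_steps, n_samples=1024):
    """One round of progressive distillation: fit a student that matches
    TWO teacher steps with a SINGLE step. Here the student is a linear
    model a*x + b per timestep, fit in closed form by least squares."""
    students = {}
    for t in range(n_steps, 0, -2):
        x = rng.normal(size=n_samples)                            # inputs at level t
        target = teacher(teacher(x, t, n_steps), t - 1, n_steps)  # two teacher steps
        A = np.stack([x, np.ones_like(x)], axis=1)
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        students[t] = coef                                        # (a, b) for timestep t

    def student_step(x, t, n_steps_unused):
        a, b = students[t]
        return a * x + b

    return student_step

# Distill an 8-step teacher into a 4-step student, then verify that one
# student step reproduces two teacher steps.
student = distill_round(teacher_step, n_steps=8)
x = rng.normal(size=5)
two_teacher = teacher_step(teacher_step(x, 8, 8), 7, 8)
one_student = student(x, 8, None)
print(np.max(np.abs(two_teacher - one_student)))  # ~0
```

Because the toy teacher is linear, the student matches it exactly; with a neural teacher the match is approximate, which is why the paper distills progressively (halving step counts round by round) rather than in one jump.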

Accelerating Diffusion Models via Early Stop of the Diffusion Process

This work proposes a principled acceleration strategy, referred to as Early-Stopped DDPM (ES-DDPM), which stops the diffusion process early where only the few initial diffusing steps are considered and the reverse denoising process starts from a non-Gaussian distribution.

Few-Shot Diffusion Models

This work presents Few-Shot Diffusion Models (FSDM), a framework for few-shot generation leveraging conditional DDPMs, and shows how conditioning the model on patch-based input set information improves training convergence.

A Survey on Generative Diffusion Model

A diverse range of advanced techniques to speed up diffusion models is presented, covering training schedules, training-free sampling, mixed modeling, and score & diffusion unification.

Diffusion Models: A Comprehensive Survey of Methods and Applications

A comprehensive review of existing variants of the diffusion models and a thorough investigation into the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multi-modal modeling, molecular graph generation, time series modeling, and adversarial purification.

How Much is Enough? A Study on Diffusion Times in Score-based Generative Models

This work shows how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process, and suggests a new method to improve the quality and efficiency of both training and sampling by adopting smaller diffusion times.

Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction

It is shown that starting from Gaussian noise is unnecessary, and starting from a single forward diffusion with better initialization significantly reduces the number of sampling steps in the reverse conditional diffusion.

ProDiff: Progressive Fast Diffusion Model For High-Quality Text-to-Speech

ProDiff parameterizes the denoising model by directly predicting clean data to avoid distinct quality degradation in accelerated sampling, and enables a sampling speed 24x faster than real-time on a single NVIDIA 2080Ti GPU, making diffusion models practically applicable to text-to-speech synthesis deployment for the first time.

Diffusion Models in Vision: A Survey

A multi-perspective categorization of diffusion models applied in computer vision is introduced, relating them to variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows.

Subspace Diffusion Generative Models

This framework restricts the diffusion via projections onto subspaces as the data distribution evolves toward noise, which improves sample quality and reduces the computational cost of inference for the same number of denoising steps.

gDDIM: Generalized denoising diffusion implicit models

An interpretation of the accelerating effect of DDIM is presented that also explains the advantage of a deterministic sampling scheme over a stochastic one for fast sampling, along with a small but delicate modification in parameterizing the score network.

Learning to Efficiently Sample from Diffusion Probabilistic Models

This paper introduces an exact dynamic programming algorithm that finds the optimal discrete time schedule for any pre-trained DDPM; it exploits the fact that the ELBO can be decomposed into separate KL terms and discovers the time schedule that maximizes the training ELBO exactly.
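The dynamic-programming idea can be sketched as a shortest-path problem. This is a simplified illustration, not the paper's exact algorithm: assume we can evaluate a cost `L[s][t]` for one denoising jump from timestep `t` down to `s < t` (in the paper, these would be the per-term KL/ELBO contributions of a pre-trained DDPM); DP then finds the K-jump schedule from T to 0 with minimum total cost.

```python
import numpy as np

def best_schedule(L, T, K):
    """Find timesteps T = t_K > ... > t_0 = 0 minimizing the summed
    per-jump costs L[s][t], using exactly K jumps."""
    INF = float("inf")
    # dp[k][t] = min cost to reach timestep t from T using exactly k jumps
    dp = [[INF] * (T + 1) for _ in range(K + 1)]
    parent = [[-1] * (T + 1) for _ in range(K + 1)]
    dp[0][T] = 0.0
    for k in range(1, K + 1):
        for t in range(T + 1):            # source timestep of the k-th jump
            if dp[k - 1][t] == INF:
                continue
            for s in range(t):            # jump t -> s
                cost = dp[k - 1][t] + L[s][t]
                if cost < dp[k][s]:
                    dp[k][s] = cost
                    parent[k][s] = t
    # backtrack from timestep 0 through the parent pointers
    path, t = [0], 0
    for k in range(K, 0, -1):
        t = parent[k][t]
        path.append(t)
    return list(reversed(path)), dp[K][0]

# Tiny example with an arbitrary synthetic cost matrix.
T, K = 6, 3
rng = np.random.default_rng(1)
L = rng.uniform(0.1, 1.0, size=(T + 1, T + 1))
schedule, total = best_schedule(L, T, K)
print(schedule)  # K+1 decreasing timesteps, from T down to 0
```

The DP runs in O(K * T^2) time, which is cheap relative to training since the costs are evaluated once from a fixed pre-trained model.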

Knowledge Distillation in Iterative Generative Models for Improved Sampling Speed

A novel connection between knowledge distillation and image generation is established with a technique that distills a multi-step denoising process into a single step, resulting in a sampling speed similar to other single-step generative models.

Improved Denoising Diffusion Probabilistic Models

This work shows that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods while maintaining high sample quality, and finds that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes with a negligible difference in sample quality.

Bilateral Denoising Diffusion Models

Novel bilateral denoising diffusion models (BDDMs) are proposed, which take significantly fewer steps to generate high-quality samples and are efficient, simple to train, and capable of further improving any pre-trained DDPM by optimizing the inference noise schedules.

Denoising Diffusion Implicit Models

Denoising diffusion implicit models (DDIMs) are presented, a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs that can produce high quality samples faster and perform semantically meaningful image interpolation directly in the latent space.
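The deterministic DDIM update (the eta = 0 case) is compact enough to state directly. The sketch below writes it for a toy 1-D case; `alpha_bar` is the cumulative noise schedule and `eps_model` stands in for a trained noise-prediction network (here replaced by a hypothetical oracle so the update can be checked against a known answer).

```python
import numpy as np

def ddim_step(x_t, t, t_prev, alpha_bar, eps_model):
    """One deterministic DDIM jump from timestep t to t_prev < t."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)
    x0_pred = (x_t - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)  # predicted clean sample
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1 - a_prev) * eps

# Sanity check with a known solution: if the model predicts the exact
# noise, a single DDIM jump recovers x_{t_prev} exactly, however large
# the jump. This is what allows DDIM to skip timesteps.
alpha_bar = np.linspace(0.99, 0.01, 11)   # toy schedule, indices 0..10
x0, noise = 2.0, 0.7
t, t_prev = 8, 4
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * noise
oracle_eps = lambda x, t_: noise          # hypothetical exact noise predictor
x_prev = ddim_step(x_t, t, t_prev, alpha_bar, oracle_eps)
expected = np.sqrt(alpha_bar[t_prev]) * x0 + np.sqrt(1 - alpha_bar[t_prev]) * noise
print(abs(x_prev - expected))  # ~0
```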

Gotta Go Fast When Generating Data with Score-Based Models

This work carefully devises an SDE solver with adaptive step sizes, tailored piece by piece to score-based generative models, which generates data 2 to 10 times faster than Euler–Maruyama while achieving better or equal sample quality.

Structured Denoising Diffusion Models in Discrete State-Spaces

D3PMs are diffusion-like generative models for discrete data that generalize the multinomial diffusion model of Hoogeboom et al. by going beyond corruption processes with uniform transition probabilities; the choice of transition matrix is shown to be an important design decision that leads to improved results in image and text domains.

Noise Estimation for Generative Diffusion Models

This work presents a simple and versatile learning scheme that can step-by-step adjust those noise parameters, for any given number of steps, while the previous work needs to retune for each number separately.

Variational Diffusion Models

A family of diffusion-based generative models that obtain state-of-the-art likelihoods on standard image density estimation benchmarks, outperforming autoregressive models that have dominated these benchmarks for many years, with often faster optimization.

Generative Modeling by Estimating Gradients of the Data Distribution

A new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching, which allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons.