Corpus ID: 236956494

BIGRoC: Boosting Image Generation via a Robust Classifier

Roy Ganz and Michael Elad
The interest of the machine learning community in image synthesis has grown significantly in recent years, with the introduction of a wide range of deep generative models and means for training them. In this work, we propose a general model-agnostic technique for improving the image quality and the distribution fidelity of generated images, obtained by any generative model. Our method, termed BIGRoC (Boosting Image Generation via a Robust Classifier), is based on a post-processing procedure via… 
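The post-processing idea described in the abstract, refining generated images by ascending the input gradients of an adversarially robust classifier, can be sketched as a PGD-style update. The linear classifier, step size, and L2 budget below are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

def boost_image(image, weight, target_class, step=0.1, eps=0.5, iters=10):
    """PGD-style refinement sketch: nudge `image` toward `target_class`
    under a toy linear classifier (logits = W @ x), staying within an L2
    ball of radius `eps` around the original generated image."""
    x0 = image.copy()
    x = image.copy()
    for _ in range(iters):
        # For a linear classifier, the gradient of the target-class
        # logit w.r.t. the input is simply the corresponding weight row.
        grad = weight[target_class]
        x = x + step * grad / (np.linalg.norm(grad) + 1e-12)
        # Project back onto the L2 ball around the original image.
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > eps:
            x = x0 + delta * (eps / norm)
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8-pixel "image"
img = rng.normal(size=8)
boosted = boost_image(img, W, target_class=1)
```

For a real robust network, the gradient would instead come from backpropagating the target-class logit through the classifier.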

Enhancing Diffusion-Based Image Synthesis with Robust Classifier Guidance

This work defines and trains a time-dependent adversarially robust classifier and uses it as guidance for a generative diffusion model, which introduces significantly more intelligible intermediate gradients, better alignment with theoretical findings, and improved generation results under several evaluation metrics.

Do Perceptually Aligned Gradients Imply Adversarial Robustness?

A novel objective is developed to directly promote Perceptually Aligned Gradients (PAG) in training classifiers, and models with such gradients are examined for increased robustness to adversarial attacks, exposing a surprising bidirectional connection between PAG and robustness.

Threat Model-Agnostic Adversarial Defense using Diffusion Models

The defense relies on adding i.i.d. Gaussian noise to the attacked image, followed by a pretrained diffusion process, an architecture that performs a stochastic iterative procedure over a denoising network and yields a denoised outcome of high perceptual quality.
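A minimal sketch of that noise-then-denoise pipeline, with a simple box filter standing in for the pretrained diffusion denoiser (the filter, noise level, and 1-D signal below are illustrative assumptions):

```python
import numpy as np

def purify(image, sigma=0.15, kernel=5, seed=0):
    """Noise-then-denoise defense (sketch): add i.i.d. Gaussian noise to
    the possibly-attacked image, then denoise. A real system would run a
    pretrained diffusion model here; a box filter stands in for it."""
    rng = np.random.default_rng(seed)
    noisy = image + sigma * rng.normal(size=image.shape)
    pad = kernel // 2
    padded = np.pad(noisy, pad, mode="edge")
    # naive moving-average "denoiser" (stand-in for the diffusion process)
    return np.array([padded[i:i + kernel].mean() for i in range(len(image))])

clean = np.linspace(0.0, 1.0, 64)                 # smooth 1-D "image"
attacked = clean + 0.3 * (-1.0) ** np.arange(64)  # high-frequency perturbation
purified = purify(attacked)
```

The added noise drowns out the adversarial perturbation, and the denoiser then recovers a signal closer to the clean one than the attacked input was.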

Accelerating Diffusion Sampling with Classifier-based Feature Distillation

This paper distills the teacher's sharpened feature distribution into the student with a dataset-independent classifier, making the student focus on those important features to improve performance, and introduces a dataset-oriented loss to further optimize the model.

Diffusion Models Beat GANs on Image Synthesis

It is shown that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models, and classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256 × 256 and 3.85 on ImageNet 512 × 512.
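The classifier-guidance rule referenced here shifts the reverse-diffusion mean by the scaled gradient of a classifier's log-probability; up to notation, with gradient scale $s$ and classifier $p_\phi$:

```latex
\hat{\mu}_\theta(x_t \mid y) \;=\; \mu_\theta(x_t, t) \;+\; s \,\Sigma_\theta(x_t, t)\, \nabla_{x_t} \log p_\phi(y \mid x_t)
```

Larger $s$ trades sample diversity for fidelity to the conditioning class.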

Refining Deep Generative Models via Discriminator Gradient Flow

Empirical results demonstrate that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
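The "truncation trick" mentioned above can be sketched directly: sample latents from N(0, I) and resample any coordinate that falls outside a threshold, shrinking the variance of the generator's input (the threshold and dimensions below are illustrative):

```python
import numpy as np

def truncated_noise(shape, threshold=0.7, rng=None):
    """BigGAN-style truncation trick (sketch): draw z ~ N(0, I) and
    resample every coordinate whose magnitude exceeds `threshold`.
    Smaller thresholds trade sample variety for fidelity."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = rng.normal(size=shape)
    mask = np.abs(z) > threshold
    while mask.any():
        # Redraw only the out-of-range coordinates until all are inside.
        z[mask] = rng.normal(size=mask.sum())
        mask = np.abs(z) > threshold
    return z

z = truncated_noise((4, 128))  # 4 latents for a hypothetical 128-D generator
```

The resulting latents have strictly bounded coordinates and lower variance than standard normal draws.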

Improved Techniques for Training GANs

This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

Analyzing and Improving the Image Quality of StyleGAN

This work redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.

Progressive Growing of GANs for Improved Quality, Stability, and Variation

A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

cGANs with Projection Discriminator

With this modification, the quality of class-conditional image generation on the ILSVRC2012 (ImageNet) 1000-class dataset is significantly improved, and the approach extends to super-resolution, producing highly discriminative super-resolution images.

Image Synthesis with a Single (Robust) Classifier

It turns out that adversarial robustness is precisely what is needed to directly manipulate salient features of the input, demonstrating the utility of robustness in the broader machine learning context.