LTT-GAN: Looking Through Turbulence by Inverting GANs

@article{Mei2021LTTGANLT,
  title={LTT-GAN: Looking Through Turbulence by Inverting GANs},
  author={Kangfu Mei and Vishal M. Patel},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.02379}
}
In many applications of long-range imaging, the appearance of a person in the captured imagery is often degraded by atmospheric turbulence. Restoring such degraded images for face verification is difficult, since the degradation leaves the images geometrically distorted and blurry. To mitigate the turbulence effect, in this paper, we propose the first turbulence mitigation method that makes use of visual priors encapsulated by a well-trained GAN. Based on…
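The abstract is truncated, but the underlying idea of using a pretrained GAN as a visual prior via inversion can be illustrated with a generic sketch. This is not the LTT-GAN procedure itself: the generator `G`, the differentiable degradation stand-in `degrade`, and all hyperparameters below are assumptions made only for illustration.

```python
# Minimal sketch of restoration by GAN inversion (illustrative only; not the
# LTT-GAN method). `G` is assumed to be a pretrained face GAN mapping a latent
# code to an image, and `degrade` is a differentiable stand-in for the
# turbulence degradation; both are placeholders.
import torch

def invert_gan(G, degrade, y, latent_dim=512, steps=500, lr=0.05):
    """Optimize a latent code z so that degrade(G(z)) matches the observation y."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z)                                   # clean image proposed by the GAN prior
        loss = torch.nn.functional.mse_loss(degrade(x_hat), y)
        loss.backward()
        opt.step()
    return G(z).detach()                               # restored image lies on the GAN manifold
```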

AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using Denoising Diffusion Probabilistic Models

This paper presents the first DDPM-based solution to the problem of atmospheric turbulence mitigation, along with a fast sampling technique that reduces inference times for conditional DDPMs.

Thermal to Visible Image Synthesis under Atmospheric Turbulence

In many practical applications of long-range imaging such as biometrics and surveillance, thermal imaging modalities are often used to capture images in low-light and nighttime conditions.

A comparison of different atmospheric turbulence simulation methods for image restoration

This paper evaluates the effectiveness of six turbulence simulation methods for image restoration on a real-world LRFID dataset consisting of face images degraded by turbulence, and provides guidance to researchers and practitioners in this area on choosing suitable data generation models.

DifFace: Blind Face Restoration with Diffused Error Contraction

This work proposes a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs and is superior to current state-of-the-art methods, especially in cases with severe degradation.

VIDM: Video Implicit Diffusion Models

A video generation method based on diffusion models, in which the effects of motion are modeled as an implicit condition; it outperforms state-of-the-art generative adversarial network-based methods by a significant margin in terms of FVD scores as well as perceptual visual quality.

References

Showing 1-10 of 51 references

Learning to Restore Images Degraded by Atmospheric Turbulence Using Uncertainty

Atmospheric turbulence can significantly degrade the quality of images acquired by long-range imaging systems by causing spatially and temporally random fluctuations in the index of refraction of the atmosphere.

Maintaining Natural Image Statistics with the Contextual Loss

This paper looks explicitly at the distribution of features in an image and trains the network to generate images with natural feature distributions, which reduces by orders of magnitude the number of images required for training and achieves state-of-the-art results on both single-image super-resolution and high-resolution surface normal estimation.
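As a rough illustration of matching feature distributions with a contextual-style loss, the sketch below computes such a loss between two sets of deep features. It follows the general form of the contextual loss but is only an assumed re-implementation, not the authors' code; the bandwidth `h` and epsilon are arbitrary choices.

```python
import torch

def contextual_loss(fx, fy, h=0.5, eps=1e-5):
    """Contextual-style loss between feature sets fx (N, C) and fy (M, C).

    Sketch of the feature-distribution matching idea: generated features are
    compared to all target features via normalized cosine distances, and the
    loss rewards the existence of a close match for every target feature.
    """
    mu = fy.mean(dim=0, keepdim=True)                          # center both sets on target statistics
    fx = torch.nn.functional.normalize(fx - mu, dim=1)
    fy = torch.nn.functional.normalize(fy - mu, dim=1)
    d = 1.0 - fx @ fy.t()                                      # cosine distances, shape (N, M)
    d_rel = d / (d.min(dim=1, keepdim=True).values + eps)      # distances relative to best match
    w = torch.exp((1.0 - d_rel) / h)                           # turn distances into affinities
    cx = w / w.sum(dim=1, keepdim=True)                        # normalized similarities
    return -torch.log(cx.max(dim=0).values.mean() + eps)       # maximize average best match
```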

ATFaceGAN: Single Face Semantic Aware Image Restoration and Recognition From Atmospheric Turbulence

A generative single-frame restoration algorithm is proposed that disentangles the blur and deformation caused by turbulence and reconstructs a restored image, achieving satisfactory performance on face restoration and face recognition tasks.

Image Processing Using Multi-Code GAN Prior

A novel approach, called mGANprior, is proposed to incorporate well-trained GANs as an effective prior for a variety of image processing tasks, by employing multiple latent codes to generate multiple feature maps at some intermediate layer of the generator and composing them with adaptive channel importance to recover the input image.
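The composition described above can be sketched as follows. The split of the generator into `g_front` (layers up to the chosen intermediate layer) and `g_back` (remaining layers), the softmax over codes, and all shapes and hyperparameters are assumptions for illustration, not the paper's implementation.

```python
# Sketch of the multi-code GAN prior idea: several latent codes produce
# intermediate feature maps that are blended with per-channel importance
# weights before the rest of the generator renders the image.
import torch

def multi_code_inversion(g_front, g_back, target, n_codes=20, latent_dim=512,
                         channels=512, steps=1000, lr=0.01):
    """Recover `target` by optimizing several latent codes and channel weights."""
    z = torch.randn(n_codes, latent_dim, requires_grad=True)
    alpha = torch.ones(n_codes, channels, requires_grad=True)   # adaptive channel importance
    opt = torch.optim.Adam([z, alpha], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = g_front(z)                                      # (n_codes, channels, H, W)
        weights = alpha.softmax(dim=0)[..., None, None]         # blend codes per channel
        x_hat = g_back((feats * weights).sum(dim=0, keepdim=True))
        loss = torch.nn.functional.mse_loss(x_hat, target)
        loss.backward()
        opt.step()
    with torch.no_grad():
        weights = alpha.softmax(dim=0)[..., None, None]
        return g_back((g_front(z) * weights).sum(dim=0, keepdim=True))
```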

GAN Prior Embedded Network for Blind Face Restoration in the Wild

This work proposes a new method that first learns a GAN for high-quality face image generation and embeds it into a U-shaped DNN as a prior decoder, then fine-tunes the GAN-prior-embedded DNN with a set of synthesized low-quality face images.

Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation

This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images, and allows the generator to be fine-tuned on-the-fly in a progressive manner, regularized by a feature distance computed with the discriminator of the GAN.
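The on-the-fly, progressive fine-tuning described above can be sketched roughly as below. The block-wise grouping of generator parameters, the `D_features` feature extractor, the degradation stand-in, and the hyperparameters are all assumptions for illustration, not the paper's actual code.

```python
# Sketch of progressive generator fine-tuning regularized by a discriminator
# feature distance (illustrative; names and schedules are assumed).
import torch

def progressive_finetune(G, D_features, degrade, y, latent_dim=128,
                         steps_per_stage=200, lr=1e-4):
    """Fine-tune generator blocks from shallow to deep while also optimizing z."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    blocks = list(G.children())                     # assume the generator is a stack of blocks
    for stage in range(1, len(blocks) + 1):
        params = [z] + [p for b in blocks[:stage] for p in b.parameters()]
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(steps_per_stage):
            opt.zero_grad()
            x_hat = G(z)
            # Distance measured in the discriminator's feature space, not pixel space.
            loss = torch.nn.functional.l1_loss(D_features(degrade(x_hat)), D_features(y))
            loss.backward()
            opt.step()
    return G(z).detach()
```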

Towards Real-World Blind Face Restoration with Generative Facial Prior

This work proposes GFP-GAN, which leverages the rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration and achieves superior performance to prior art on both synthetic and real-world datasets.

Learning Warped Guidance for Blind Face Restoration

Experiments show that the GFRNet not only performs favorably against the state-of-the-art image and face restoration methods, but also generates visually photo-realistic results on real degraded facial images.

Subsampled Turbulence Removal Network

A training strategy based on a new data augmentation method for modeling turbulence from a relatively small dataset, together with a subsampling method that enhances the restoration performance of the presented GAN model, is proposed.

ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

This work thoroughly studies three key components of SRGAN (network architecture, adversarial loss, and perceptual loss) and improves each of them to derive an Enhanced SRGAN (ESRGAN), which achieves consistently better visual quality with more realistic and natural textures than SRGAN.
...