Corpus ID: 220546134

Image De-Quantization Using Generative Models as Priors

@article{Basioti2020ImageDU,
  title={Image De-Quantization Using Generative Models as Priors},
  author={Kalliopi Basioti and George V. Moustakides},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.07923}
}
Image quantization is used in several applications aiming at reducing the number of available colors in an image and therefore its size. De-quantization is the task of reversing the quantization effect and recovering the original multi-chromatic-level image. Existing techniques achieve de-quantization by imposing suitable constraints on the ideal image in order to make the recovery problem feasible, since it is otherwise ill-posed. Our goal in this work is to develop a de-quantization mechanism… 
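As a rough, hedged illustration of the general recipe described above (a minimal sketch, not the paper's algorithm; the generator G, the uniform quantizer, and the weights below are assumptions), de-quantization under a generative prior can be posed as a latent-space search for a code whose generated image is consistent with the observed quantized image:

import torch

def quantize(x, levels=8):
    # Uniform quantizer: maps an image with values in [0, 1] to `levels` chromatic levels.
    return torch.round(x * (levels - 1)) / (levels - 1)

def dequantize(y_q, G, levels=8, latent_dim=128, steps=500, lam=0.1, lr=0.05):
    # Hypothetical latent-space optimization: find z such that G(z), once quantized,
    # agrees with the observation y_q, while a Gaussian prior keeps z plausible (MAP flavor).
    half_bin = 0.5 / (levels - 1)                       # half-width of one quantization bin
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z)
        # Differentiable surrogate for the hard constraint quantize(x_hat) == y_q:
        # penalize only the pixels that leave the bin implied by the observed value.
        data_term = torch.relu((x_hat - y_q).abs() - half_bin).pow(2).mean()
        prior_term = lam * z.pow(2).mean()              # negative log of a Gaussian prior on z
        (data_term + prior_term).backward()
        opt.step()
    return G(z).detach()

The hinge-style data term encodes the fact that an observed quantized value only pins the true pixel down to its quantization bin; any pretrained decoder (GAN generator or VAE decoder) mapping latent codes to images could stand in for G.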

References

Showing 1-10 of 28 references.
Image Restoration from Parametric Transformations using Generative Models
TLDR
This approach, by combining maximum a-posteriori probability with maximum likelihood estimation, is capable of restoring images that are distorted by transformations even when the latter contain unknown parameters.
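Read as a hedged sketch (the symbols T_θ and λ below are illustrative, not taken from the paper), combining MAP with maximum-likelihood estimation amounts to a joint search over the latent code z and the unknown transformation parameters θ:

\hat{z}, \hat{\theta} \;=\; \arg\min_{z,\theta}\; \|T_{\theta}(G(z)) - y\|_2^2 \;-\; \lambda \log p(z),

where G is the generative prior, T_θ the parametric distortion, y the observed image, and p(z) the latent prior; the data term plays the likelihood role while the prior term supplies the MAP regularization.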
Semantic Image Inpainting with Deep Generative Models
TLDR
A novel method for semantic image inpainting, which generates the missing content by conditioning on the available data, and successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.
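A hedged sketch of "conditioning on the available data": with a binary mask M marking the observed pixels (the ℓ1 weighting and the adversarial term below are illustrative choices, not necessarily the paper's exact losses), the completion is obtained by a latent-space search

\hat{z} \;=\; \arg\min_{z}\; \|M \odot (G(z) - y)\|_1 \;+\; \lambda \log\big(1 - D(G(z))\big), \qquad \hat{x} \;=\; M \odot y + (1 - M)\odot G(\hat{z}),

so the known pixels are kept verbatim and only the missing region is filled in by the generator.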
Image Restoration with Deep Generative Models
TLDR
This work proposes to design the image prior in a data-driven manner, learning it with deep generative models, and demonstrates that this learned prior can be applied to many image restoration problems within a unified framework.
Compressed Sensing using Generative Models
TLDR
This work shows how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all, and proves that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an ℓ2/ℓ2 recovery guarantee.
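Concretely, the recovery program behind that guarantee is a least-squares fit over the latent space (constants omitted; the bound is stated loosely here):

\hat{z} \;=\; \arg\min_{z}\; \|A\,G(z) - y\|_2, \qquad \hat{x} = G(\hat{z}), \qquad \|\hat{x} - x^{*}\|_2 \;\lesssim\; \min_{z}\|G(z) - x^{*}\|_2 \;+\; \|\eta\|_2,

where A is the m × n Gaussian measurement matrix, y = A x* + η, and m = O(k log L) for a k-dimensional, L-Lipschitz generator G.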
An Augmented Lagrangian Approach to the Constrained Optimization Formulation of Imaging Inverse Problems
TLDR
This paper proposes a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications and shows that the proposed algorithm is a strong contender for the state-of-the-art.
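For reference, the constrained formulation commonly called basis pursuit denoising is

\min_{x}\; \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \varepsilon,

where ε bounds the admissible measurement noise; handling this constraint directly, rather than a penalized surrogate, is what the augmented Lagrangian machinery is used for.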
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
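The two models are trained against each other through the standard two-player minimax objective

\min_{G}\,\max_{D}\; \mathbb{E}_{x\sim p_{\text{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z\sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big],

with D trained to tell real samples from generated ones and G trained to fool D.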
Maximal Correlation: An Alternative Criterion for Training Generative Networks
TLDR
Under ideal conditions this non-adversarial approach is shown to achieve the same goal as the existing adversarial methods and is developed for a general optimization problem involving nonlinear functions of expectations.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization
TLDR
This paper proposes to achieve a better approximation to the rank of a matrix via the truncated nuclear norm, given by the nuclear norm minus the sum of the largest few singular values, and develops a novel matrix completion algorithm by minimizing the truncated nuclear norm.
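In symbols, with σ_1(X) ≥ σ_2(X) ≥ … the singular values and Ω the set of observed entries, the truncated nuclear norm and the resulting completion problem read

\|X\|_{r} \;=\; \|X\|_{*} \;-\; \sum_{i=1}^{r}\sigma_i(X), \qquad \min_{X}\; \|X\|_{r} \quad \text{s.t.}\quad X_{ij} = M_{ij}, \;\; (i,j)\in\Omega,

so the r largest singular values, which carry most of the matrix's energy, are left unpenalized.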
Tutorial on Variational Autoencoders
TLDR
This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior.
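The central object of that mathematics is the evidence lower bound (ELBO) that a VAE maximizes,

\log p_{\theta}(x) \;\ge\; \mathbb{E}_{q_{\phi}(z\mid x)}\big[\log p_{\theta}(x\mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_{\phi}(z\mid x)\,\big\|\,p(z)\big),

with an encoder q_φ, a decoder p_θ, and a simple prior p(z), typically a standard Gaussian.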
...