Wavelet Transform-assisted Adaptive Generative Modeling for Colorization

@article{Li2022WaveletTA,
  title={Wavelet Transform-assisted Adaptive Generative Modeling for Colorization},
  author={Jin Li and Wanyun Li and Zichen Xu and Yuhao Wang and Qiegen Liu},
  journal={ArXiv},
  year={2022},
  volume={abs/2107.04261}
}
Unsupervised deep learning has recently demonstrated the promise of producing high-quality samples. While it has tremendous potential for the image colorization task, its performance is limited by the high dimensionality of the data manifold and by model capacity. This study presents a novel scheme that exploits a score-based generative model in the wavelet domain to address these issues. By taking advantage of the multi-scale and multi-channel representation provided by the wavelet transform, the…
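
As a rough illustration of the multi-scale, multi-channel representation the abstract refers to, the sketch below applies a single-level 2-D Haar wavelet transform to each color channel. This is a minimal sketch assuming NumPy and PyWavelets, not the authors' implementation.

    # Minimal sketch (not the paper's code): per-channel 2-D Haar DWT,
    # illustrating the multi-scale, multi-channel wavelet representation.
    import numpy as np
    import pywt

    rgb = np.random.rand(256, 256, 3)  # stand-in for an input color image

    subbands = []
    for c in range(rgb.shape[-1]):
        cA, (cH, cV, cD) = pywt.dwt2(rgb[..., c], 'haar')  # approximation + detail bands
        subbands.append(np.stack([cA, cH, cV, cD], axis=-1))

    # Shape (128, 128, 3, 4): half the spatial resolution, four subbands per channel.
    wavelet_tensor = np.stack(subbands, axis=-2)
    print(wavelet_tensor.shape)

The idea, per the abstract, is to run the score-based model in this lower-resolution, multi-band space rather than directly in pixel space.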

Wavelet Diffusion Models are fast and scalable Image Generators

Experimental results on the CelebA-HQ, CIFAR-10, LSUN-Church, and STL-10 datasets show that the proposed wavelet-based diffusion structure is a stepping stone toward real-time, high-fidelity diffusion models.

Generative Modeling in Structural-Hankel Domain for Color Image Inpainting

This study proposes a brand-new idea that requires only ten or even fewer samples to construct a low-rank structural-Hankel-matrix-assisted score-based generative model (SHGM) for the color image inpainting task.

References

Showing 1-10 of 74 references

LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop

This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.

ChromaGAN: Adversarial Picture Colorization with Semantic Class Distribution

Qualitative and quantitative results show that the proposed method can colorize images in a realistic way, achieving state-of-the-art results.

Generative Modeling by Estimating Gradients of the Data Distribution

A new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching, which allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons.
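
The Langevin-dynamics sampling procedure summarized above can be sketched in a few lines. The snippet below is a hedged illustration, not the paper's implementation: the analytic score of a standard normal (score(x) = -x) stands in for a learned score network.

    # Langevin dynamics: x <- x + (step/2) * score(x) + sqrt(step) * noise.
    import numpy as np

    def langevin_sample(score, x0, step_size=1e-2, n_steps=1000, seed=0):
        rng = np.random.default_rng(seed)
        x = x0.copy()
        for _ in range(n_steps):
            noise = rng.standard_normal(x.shape)
            x = x + 0.5 * step_size * score(x) + np.sqrt(step_size) * noise
        return x

    # Starting far from the target density, samples drift toward N(0, I).
    samples = langevin_sample(lambda x: -x, np.full((5000, 2), 3.0))
    print(samples.mean(axis=0), samples.std(axis=0))  # roughly 0 mean, unit std

In the actual method, score(x) would be a neural network trained with score matching on the data.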

Coloring With Limited Data: Few-Shot Colorization via Memory Augmented Networks

This work presents a novel memory-augmented colorization model MemoPainter that can produce high-quality colorization with limited data and proposes a novel threshold triplet loss that enables unsupervised training of memory networks without the need for class labels.

Extracting and composing robust features with denoising autoencoders

This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
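
A minimal sketch of this training principle follows: corrupt the input (here by randomly masking entries) and train the network to reconstruct the clean version, so the representation becomes robust to partial corruption. PyTorch and a toy MLP are assumptions here, not the paper's setup.

    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        def __init__(self, dim=784, hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            self.decoder = nn.Linear(hidden, dim)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = DenoisingAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.rand(64, 784)                    # stand-in batch of clean inputs
    mask = (torch.rand_like(x) > 0.3).float()  # drop roughly 30% of the entries
    recon = model(x * mask)                    # encode/decode the corrupted input
    loss = nn.functional.mse_loss(recon, x)    # reconstruct the *clean* input
    loss.backward()
    optimizer.step()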

Fast Mixing of Multi-Scale Langevin Dynamics under the Manifold Hypothesis

This work demonstrates how the manifold hypothesis allows for the considerable reduction of mixing time, from exponential in the ambient dimension to depending only on the (much smaller) intrinsic dimension of the data.

Colorful Image Colorization

This paper proposes a fully automatic approach to colorization that produces vibrant and realistic colorizations and shows that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder.
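
As a small illustration of the cross-channel encoding view, the sketch below splits an image in CIE Lab space into a lightness input and chrominance targets, the usual setup for automatic colorization. scikit-image is an assumption and this is not the paper's code.

    import numpy as np
    from skimage import color

    rgb = np.random.rand(64, 64, 3)     # stand-in for a training image
    lab = color.rgb2lab(rgb)            # convert to CIE Lab
    L, ab = lab[..., :1], lab[..., 1:]  # grayscale input vs. color channels to predict
    print(L.shape, ab.shape)            # (64, 64, 1) (64, 64, 2)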

Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images

Experimental results demonstrate that the proposed enhancement algorithm can not only enhance the details but also preserve the naturalness for non-uniform illumination images.

A Connection Between Score Matching and Denoising Autoencoders

A proper probabilistic model for the denoising autoencoder technique is defined, which makes it possible in principle to sample from such models or rank examples by their energy, and a different way to apply score matching is suggested that is related to learning to denoise and does not require computing second derivatives.
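
The "learning to denoise" form of score matching mentioned here can be written compactly: for Gaussian corruption x_noisy = x + sigma * eps, the regression target for the score at x_noisy is (x - x_noisy) / sigma^2, so no second derivatives are required. The sketch below is an illustration assuming PyTorch and a toy score network.

    import torch

    def denoising_score_matching_loss(score_net, x, sigma=0.1):
        noise = torch.randn_like(x)
        x_noisy = x + sigma * noise
        target = (x - x_noisy) / sigma ** 2     # score of q_sigma(x_noisy | x)
        return ((score_net(x_noisy) - target) ** 2).sum(dim=-1).mean()

    score_net = torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
    loss = denoising_score_matching_loss(score_net, torch.randn(128, 2))
    loss.backward()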
...