Semantic Image Inpainting with Deep Generative Models

@inproceedings{Yeh2017SemanticII,
  title={Semantic Image Inpainting with Deep Generative Models},
  author={Raymond A. Yeh and Chen Chen and Teck-Yian Lim and Alexander G. Schwing and Mark A. Hasegawa-Johnson and Minh N. Do},
  booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017},
  pages={6882--6890}
}
  • Published 26 July 2016
Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content.
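The latent-space search described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: a fixed linear "generator" G(z) = W @ z stands in for the trained DCGAN generator, and the adversarial prior loss is omitted, so only the context loss (matching known pixels) drives the optimization. The names `inpaint`, `W`, and `mask` are illustrative assumptions.

```python
import numpy as np

def inpaint(W, corrupted, mask, steps=500, lr=0.01):
    """Search the latent space for the encoding closest to the corrupted image.

    W         -- (pixels, latent_dim) matrix acting as a toy linear generator
    corrupted -- flat image vector with the hole zeroed out
    mask      -- 1.0 on known pixels, 0.0 inside the hole
    """
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        gen = W @ z
        # Context-loss gradient: only the known (masked-in) pixels pull on z.
        grad = 2.0 * W.T @ (mask * (gen - corrupted))
        z -= lr * grad
    gen = W @ z
    # Blend: keep the known pixels, fill the hole from the generator's output.
    return mask * corrupted + (1.0 - mask) * gen
```

In the paper the same idea is applied with a DCGAN generator, gradients flow through the network into z, and a weighted adversarial prior term keeps z on the learned image manifold; the final blend additionally uses Poisson blending to hide seams.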
Coherent Semantic Attention for Image Inpainting
TLDR
This work investigates how humans repair pictures and proposes a refined deep generative model-based approach with a novel coherent semantic attention (CSA) layer, which not only preserves contextual structure but also makes more effective predictions of missing parts by modeling the semantic relevance between the hole features.
Semantic Image Inpainting with Progressive Generative Networks
TLDR
This paper proposes an end-to-end framework named progressive generative networks (PGN), which regards the semantic image inpainting task as a curriculum learning problem, dividing the hole-filling process into several phases, each of which aims to finish one course of the entire curriculum.
A Method of Semantic Image Inpainting with Generative Adversarial Networks
  • Zhe Wang, H. Yin
  • Proceedings of 2018 Chinese Intelligent Systems Conference
  • 2018
TLDR
A new method of semantic image inpainting is presented, based on a generative model that learns the representation of the image database; the model generally performs well when completing images corrupted by masks covering less than 50% of their area.
Contextual-Based Image Inpainting: Infer, Match, and Translate
TLDR
This work proposes a learning-based approach to generate visually coherent completion given a high-resolution image with missing components and shows that it generates results of better visual quality than previous state-of-the-art methods.
Image Inpainting Based on Generative Adversarial Networks
TLDR
The proposed model can handle large-scale missing regions and generate realistic completion results, using skip-connections in the generator to improve its predictive power.
An Improved Method for Semantic Image Inpainting with GANs: Progressive Inpainting
TLDR
This paper proposes an improved method named progressive inpainting, which takes a pyramid strategy from a low-resolution image to a higher one, with the purpose of obtaining a clear completed image while reducing reliance on the training process.
Progressive Semantic Reasoning for Image Inpainting
TLDR
A novel Progressive Semantic Reasoning (PSR) network is proposed, composed of three superposed generation networks with shared parameters, together with a simple but effective Cross Feature Reconstruction (CFR) strategy to trade off semantic information from different levels.
Structural Knowledge-Guided Feature Inference Network for Image Inpainting
  • Yongqiang Du
  • International Journal of Circuits, Systems and Signal Processing
  • 2022
TLDR
A structural knowledge-guided framework for image inpainting is proposed, which predicts both the edge map and the corrupted content at the same time, capturing structural knowledge in the structure-estimation branch to guide content inference in the latent feature space.
Toward semantic image inpainting: where global context meets local geometry
TLDR
A deep semantic inpainting model built upon a generative adversarial network and a dense U-Net that achieves feature reuse while avoiding feature explosion along the upsampling path of the U-Net.
Generative Image Inpainting with Contextual Attention
TLDR
This work proposes a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.
...

References

SHOWING 1-10 OF 45 REFERENCES
Context Encoders: Feature Learning by Inpainting
TLDR
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
Image Denoising and Inpainting with Deep Neural Networks
TLDR
A novel approach to low-level vision problems is presented that combines sparse coding with deep networks pre-trained as denoising auto-encoders (DA), and can automatically remove complex patterns like superimposed text from an image, rather than only simple patterns like pixels missing at random.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high-quality samples of natural images is introduced, using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
TLDR
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, trained with a perceptual loss function that consists of an adversarial loss and a content loss.
Generative Visual Manipulation on the Natural Image Manifold
TLDR
This paper proposes to learn the natural image manifold directly from data using a generative adversarial network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
TLDR
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks, showing results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
Shepard Convolutional Neural Networks
TLDR
This paper draws on Shepard interpolation to design Shepard Convolutional Neural Networks (ShCNN), which efficiently realize end-to-end trainable TVI operators in the network, and shows that by adding only a few feature maps in the new Shepard layers, the network achieves stronger results than a much deeper architecture.
Image Style Transfer Using Convolutional Neural Networks
TLDR
A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images and provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.
Attribute2Image: Conditional Image Generation from Visual Attributes
TLDR
A layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder is developed and shows excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.
Understanding deep image representations by inverting them
Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of
...