Corpus ID: 15140030

Semantic Image Inpainting with Perceptual and Contextual Losses

@article{Yeh2016SemanticII,
  title={Semantic Image Inpainting with Perceptual and Contextual Losses},
  author={Raymond A. Yeh and Chen Chen and Teck-Yian Lim and Mark A. Hasegawa-Johnson and Minh N. Do},
  journal={ArXiv},
  year={2016},
  volume={abs/1607.07539}
}
In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). Given a corrupted image with missing values, we use back-propagation on a loss combining perceptual and contextual terms to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80% corruption and…
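The core idea of the abstract — hold the generator fixed and optimize the latent vector so the generated image matches the known pixels, then fill the holes from the generator's output — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a fixed linear map stands in for the pretrained DCGAN generator, the perceptual (discriminator) loss term is omitted, and all names, dimensions, and step sizes are illustrative.

```python
import numpy as np

# Toy stand-in for a pretrained DCGAN generator G(z): a fixed linear map
# from an 8-dim latent vector to a flat 64-pixel "image". In the paper this
# would be the trained generator network, with gradients via back-propagation.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 8))

def G(z):
    """Toy generator mapping a latent vector to a flat 64-pixel image."""
    return W @ z

def inpaint(y, mask, steps=2000, lr=5e-3):
    """Fill the holes of a corrupted image y (mask == 1 where pixels are
    known) by gradient descent on the contextual loss ||mask * (G(z) - y)||^2,
    then composite the known pixels with the generated content."""
    z = rng.normal(size=8)                 # random starting latent vector
    for _ in range(steps):
        residual = mask * (G(z) - y)       # error on known pixels only
        z -= lr * 2 * (W.T @ residual)     # exact gradient for the linear toy G
    x_hat = G(z)
    return mask * y + (1 - mask) * x_hat   # keep known pixels, fill holes
```

With the paper's trained DCGAN, `G` would be the generator network, the gradient with respect to `z` would come from back-propagation through it, and a weighted perceptual term based on the discriminator's output would be added to the loss.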

Citations

Image Inpainting: A Contextual Consistent and Deep Generative Adversarial Training Approach
TLDR
Experimental results on the Paris Street View dataset show that combining the context encoder with contextual information recovers more texture-consistent and higher-quality regions, demonstrating the advantage of the proposed algorithm.
Generative image inpainting with residual attention learning
TLDR
This paper proposes an efficient end-to-end two-stage network based on a channel and spatial attention block (CSAB) that adaptively weights both channel-wise and spatial-wise features, focusing on more meaningful information and generating a locally fine-detailed image.
Image Inpainting using Block-wise Procedural Training with Annealed Adversarial Counterpart
TLDR
This work presents a new approach to address the difficulty of training a very deep generative model to synthesize high-quality photo-realistic inpainting, and introduces a novel block-wise procedural training scheme to stabilize the training while the network depth is increased.
Context-Aware Semantic Inpainting
TLDR
An improved GAN-based framework is presented, consisting of a fully convolutional generator design, which helps better preserve spatial structures, and a joint loss function with a revised perceptual loss to capture high-level semantics in the context.
Coherent Semantic Attention for Image Inpainting
TLDR
This work investigates human behavior in repairing pictures and proposes a refined deep generative model-based approach with a novel coherent semantic attention (CSA) layer, which can not only preserve contextual structure but also make more effective predictions of missing parts by modeling the semantic relevance between the hole features.
A deep network architecture for image inpainting
TLDR
A deep network architecture, the Image Inpainting Conditional Generative Adversarial Network (II-CGAN), is built on a deep convolutional neural network (CNN) and directly learns the mapping relationship between the damaged and repaired image detail layers from data.
Generative image inpainting with neural features
TLDR
An image inpainting approach based on generative adversarial networks (GANs) is proposed that makes good use of the feature information of images and performs efficient and realistic inpainting.
Multi-scale Generative Model for Image Completion
TLDR
A multi-scale generative model is proposed that can gradually generate novel texture to avoid distorted details; its multi-scale losses also eliminate blurred edges between the inpainting results and the original region.
Adaptive Image Inpainting
TLDR
This work proposes a distillation-based approach for inpainting that provides direct feature-level supervision for the encoder layers in an adaptive manner, deploys cross- and self-distillation techniques, and discusses the need for a dedicated completion block in the encoder to achieve the distillation target.
Semantic Image Completion and Enhancement using Deep Learning
TLDR
Experimental outcomes show that the proposed approach improves the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values by 2.45% and 4%, respectively, compared to recently reported results.
…

References

SHOWING 1-10 OF 26 REFERENCES
Context Encoders: Feature Learning by Inpainting
TLDR
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
Image Denoising and Inpainting with Deep Neural Networks
TLDR
A novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA) is presented and can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis
  • Chuan Li, Michael Wand · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
TLDR
A combination of generative Markov random field models and discriminatively trained deep convolutional neural networks for synthesizing 2D images, yielding results far out of reach of classic generative MRF methods.
Understanding deep image representations by inverting them
Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited.
Image inpainting
TLDR
A novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators, and does not require the user to specify where the novel information comes from.
Sparse Representation for Color Image Restoration
TLDR
This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
Get Out of my Picture! Internet-based Inpainting
TLDR
This paper uses recent advances in viewpoint invariant image search to find other images of the same scene on the Internet to replace large occlusions in photographs, and uses a Markov random field formulation to combine the proposals into a single, occlusion-free result.
Inverting Visual Representations with Convolutional Networks
  • A. Dosovitskiy, T. Brox · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
TLDR
This work proposes a new approach to study image representations by inverting them with an up-convolutional neural network, and applies this method to shallow representations (HOG, SIFT, LBP), as well as to deep networks.
A Neural Algorithm of Artistic Style
TLDR
This work introduces an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality and offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
…