Corpus ID: 13691497

SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting

@inproceedings{Song2018SPGNetSP,
  title={SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting},
  author={Yuhang Song and Chao Yang and Yeji Shen and Peng Wang and Qin Huang and C.-C. Jay Kuo},
  booktitle={BMVC},
  year={2018}
}
In this paper, we focus on the image inpainting task, aiming at recovering the missing area of an incomplete image given the context information. Recent developments in deep generative models enable efficient end-to-end frameworks for image synthesis and inpainting tasks, but existing methods based on generative models do not exploit segmentation information to constrain the object shapes, which usually leads to blurry results on the boundary. To tackle this problem, we propose to introduce the…
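The two-stage idea in the abstract (first predict a segmentation map for the corrupted image, then inpaint the hole under its guidance) can be illustrated with a toy, non-neural stand-in. Everything below is hypothetical scaffolding, not the paper's actual SP-Net/SG-Net CNNs: the function names `sp_net`/`sg_net` and the quantize/class-mean rules are placeholders for learned networks.

```python
import numpy as np

def sp_net(masked_image, mask, n_classes=3):
    """Stand-in for the Segmentation Prediction network (SP-Net).

    masked_image: H x W floats in [0, 1], hole pixels set to 0
    mask:         H x W bool, True = known pixel

    Predicts a per-pixel class map for the whole image, hole included.
    Toy predictor: fill the hole with the mean of the known pixels,
    then quantize intensity into n_classes bins.
    """
    fill_value = masked_image[mask].mean()
    filled = np.where(mask, masked_image, fill_value)
    bins = np.linspace(0.0, 1.0, n_classes + 1)[1:-1]
    return np.digitize(filled, bins)

def sg_net(masked_image, mask, seg_map):
    """Stand-in for the Segmentation Guidance network (SG-Net).

    Inpaints the hole conditioned on the predicted segmentation.
    Toy version: each hole pixel gets the mean intensity of its
    predicted class, estimated from the known region.
    """
    out = masked_image.copy()
    overall = masked_image[mask].mean()
    for c in np.unique(seg_map):
        known = mask & (seg_map == c)
        hole = ~mask & (seg_map == c)
        out[hole] = masked_image[known].mean() if known.any() else overall
    return out

def inpaint(masked_image, mask):
    # Two-stage pipeline: segmentation prediction, then guided inpainting.
    seg_map = sp_net(masked_image, mask)
    return sg_net(masked_image, mask, seg_map)
```

The point of the decomposition is that the hole is filled per semantic region rather than by blending across region boundaries, which is where purely generative baselines tend to blur.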
Citations

Deep Generative Model for Image Inpainting with Local Binary Pattern Learning and Spatial Attention
This work proposes a new end-to-end, two-stage (coarse-to-fine) generative model combining a local binary pattern (LBP) learning network with an actual inpainting network, designed to accurately predict the structural information of the missing region.
Coherent Semantic Attention for Image Inpainting
This work investigates how humans repair pictures and proposes a refined deep generative model-based approach with a novel coherent semantic attention (CSA) layer, which not only preserves contextual structure but also makes more effective predictions of missing parts by modeling the semantic relevance between the hole features.
EdgeConnect: Structure Guided Image Inpainting using Edge Prediction
This work proposes a two-stage model that separates the inpainting problem into structure prediction and image completion, similar to sketch art, and demonstrates that this approach outperforms current state-of-the-art techniques quantitatively and qualitatively.
Position and Channel Attention for Image Inpainting by Semantic Structure
  • Jingjun Qiu, Yan Gao
  • 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), 2020
A two-stage adversarial model is proposed: unsupervised semantic structure guidance is introduced, and an attention model with location and channel information strengthens the model's long-range contextual information and multi-scale context fusion capabilities.
Semantic-SCA: Semantic Structure Image Inpainting With the Spatial-Channel Attention
A two-stage adversarial model that further improves the accuracy of the structure and details of image inpainting; it is evaluated on the publicly available datasets CelebA, Places2, and Paris StreetView.
Semantic-Guided Inpainting Network for Complex Urban Scenes Manipulation
A novel deep learning model is proposed to alter a complex urban scene by removing a user-specified portion of the image and coherently inserting a new object (e.g. a car or pedestrian) into that scene, leveraging semantic segmentation to model the content and structure of the image and to learn the best shape and location of the object to insert.
AHFF-Net: Adaptive Hierarchical Feature Fusion Network for Image Inpainting
An adaptive hierarchical feature fusion network (AHFF-Net) that consistently achieves the state of the art on the Paris StreetView and Places365-Standard datasets with three shapes of masks.
SECI-GAN: Semantic and Edge Completion for dynamic objects removal
The SECI-GAN architecture is proposed, which jointly exploits the high-level cues extracted by semantic segmentation and the fine-grained details captured by edge extraction to condition the image inpainting process; it is evaluated on the Cityscapes dataset.
Uncertainty-Aware Semantic Guidance and Estimation for Image Inpainting
A SEmantic GUidance and Estimation Network (SeGuE-Net) is proposed that iteratively evaluates the uncertainty of inpainted visual contents based on pixel-wise semantic inference and alternately optimizes structural priors and inpainted contents.
Image Editing via Segmentation Guided Self-Attention Network
A deep image editing method based on a self-attention network that copies information for each small patch from distant spatial locations; it achieves better performance, is flexible for different purposes, and is fast to implement.

References

Showing 1-10 of 46 references
Semantic Image Inpainting with Deep Generative Models
A novel method for semantic image inpainting that generates the missing content by conditioning on the available data; it successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming state-of-the-art methods.
Image Inpainting using Multi-Scale Feature Image Translation
This work proposes a learning-based approach to generate a visually coherent completion given a high-resolution image with missing components, and shows that it not only produces results of comparable or better visual quality but is also orders of magnitude faster than previous state-of-the-art methods.
Generative Image Inpainting with Contextual Attention
This work proposes a new deep generative model-based approach that can not only synthesize novel image structures but also explicitly use surrounding image features as references during network training to make better predictions.
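The borrow-from-context idea behind contextual attention can be approximated, very loosely, by exhaustive patch matching: for each hole pixel, find the known pixel whose surrounding patch best matches and copy it. This is a toy NumPy sketch, not the paper's method (the actual layer is a differentiable attention module using cosine similarity; the name `contextual_fill` and the SSD matching rule here are simplifications):

```python
import numpy as np

def contextual_fill(image, mask, patch=3):
    """Fill hole pixels by exhaustive patch matching against known pixels.

    image: H x W floats; hole pixel values are arbitrary
    mask:  H x W bool, True = known pixel
    For each hole pixel, score every known pixel by negative mean SSD
    between their patches, computed over positions valid in both, and
    copy the value of the best-scoring known pixel.
    """
    pad = patch // 2
    img_p = np.pad(image, pad)
    msk_p = np.pad(mask, pad)
    out = image.copy()
    for (i, j) in np.argwhere(~mask):
        q = img_p[i:i + patch, j:j + patch]          # query patch
        qm = msk_p[i:i + patch, j:j + patch]         # its validity mask
        best, best_score = None, -np.inf
        for (y, x) in np.argwhere(mask):
            k = img_p[y:y + patch, x:x + patch]      # candidate patch
            km = msk_p[y:y + patch, x:x + patch]
            both = qm & km                           # positions valid in both
            if both.sum() == 0:
                continue
            score = -((q - k)[both] ** 2).mean()     # negative mean SSD
            if score > best_score:
                best_score, best = score, (y, x)
        if best is not None:
            out[i, j] = image[best]
    return out
```

This brute-force search is O((HW)^2) and only practical for tiny images; the attention layer in the paper amortizes the same matching as a convolution over feature maps.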
High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis
This work proposes a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network.
Image Inpainting using Block-wise Procedural Training with Annealed Adversarial Counterpart
This work presents a new approach to the difficulty of training a very deep generative model to synthesize high-quality photo-realistic inpainting, introducing a novel block-wise procedural training scheme that stabilizes training as the network depth is increased.
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
This work extends DeepLabv3 by adding a simple yet effective decoder module to refine segmentation results, especially along object boundaries, and applies depthwise separable convolution to both the Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network.
Context Encoders: Feature Learning by Inpainting
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs
This work brings together methods from DCNNs and probabilistic graphical models to address pixel-level classification by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF).
RefineNet: Multi-path Refinement Networks for High-Resolution Semantic Segmentation
RefineNet is presented, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections, and introduces chained residual pooling, which captures rich background context efficiently.
Image Inpainting for Irregular Holes Using Partial Convolutions
This work proposes partial convolutions, in which the convolution is masked and renormalized to be conditioned on only valid pixels, and outperforms other methods for irregular masks.
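The masked-and-renormalized convolution summarized above can be sketched for a single channel in NumPy. This is a toy illustration under simplifying assumptions (stride 1, square kernel, zero padding, hard binary mask update), not the authors' implementation:

```python
import numpy as np

def partial_conv2d(image, mask, kernel):
    """Toy single-channel partial convolution.

    image:  H x W array; hole pixel values are ignored (zeroed by the mask)
    mask:   H x W array, 1 = valid pixel, 0 = hole
    kernel: k x k filter weights

    Each output is computed from valid pixels only and rescaled by
    (window size / number of valid pixels in the window); the mask is
    updated to 1 wherever the window contained at least one valid pixel.
    """
    k = kernel.shape[0]
    pad = k // 2
    img_p = np.pad(image * mask, pad)   # zero out holes, then pad
    msk_p = np.pad(mask, pad)
    H, W = image.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win_img = img_p[i:i + k, j:j + k]
            win_msk = msk_p[i:i + k, j:j + k]
            valid = win_msk.sum()
            if valid > 0:
                # renormalize so the response is unbiased by missing pixels
                out[i, j] = (kernel * win_img * win_msk).sum() * (k * k / valid)
                new_mask[i, j] = 1.0
    return out, new_mask
```

Stacking such layers shrinks the hole in the mask at every step, which is what lets the network handle irregular holes without special treatment of their shape.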