SaiNet: Stereo aware inpainting behind objects with generative networks

@article{Gonzalez2022SaiNetSA,
  title={SaiNet: Stereo aware inpainting behind objects with generative networks},
  author={Violeta Menéndez González and Andrew Gilbert and Graeme Phillipson and Stephen Jolly and Simon Hadfield},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.07014}
}
In this work, we present an end-to-end network for stereo-consistent image inpainting with the objective of inpainting large missing regions behind objects. The proposed model consists of an edge-guided UNet-like network using Partial Convolutions. We enforce multi-view stereo consistency by introducing a disparity loss. More importantly, we develop a training scheme where the model is learned from realistic stereo masks representing object occlusions, instead of the more common random masks…
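
The disparity loss itself is not specified in the abstract excerpt above. Purely as an illustration of how multi-view consistency could be enforced, the sketch below warps the inpainted right view into the left view using a known disparity map and penalizes left/right differences inside the occluded region. The function name, tensor shapes, and sign convention are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def stereo_consistency_loss(left_pred, right_pred, disparity, hole_mask):
    """Hypothetical sketch of a disparity-based consistency loss.

    left_pred, right_pred: inpainted views, shape (B, C, H, W)
    disparity: horizontal disparity of the left view in pixels, (B, 1, H, W)
    hole_mask: 1 inside the inpainted (previously occluded) region, (B, 1, H, W)
    """
    b, _, h, w = left_pred.shape
    # Build a normalized sampling grid that shifts each left-view pixel
    # by its disparity to fetch the corresponding right-view pixel.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=left_pred.device, dtype=left_pred.dtype),
        torch.arange(w, device=left_pred.device, dtype=left_pred.dtype),
        indexing="ij",
    )
    x_shifted = xs.unsqueeze(0) - disparity.squeeze(1)        # (B, H, W)
    grid_x = 2.0 * x_shifted / (w - 1) - 1.0                  # normalize to [-1, 1]
    grid_y = (2.0 * ys / (h - 1) - 1.0).unsqueeze(0).expand(b, -1, -1)
    grid = torch.stack((grid_x, grid_y), dim=-1)              # (B, H, W, 2)

    # Right view warped into the left camera's frame.
    right_warped = F.grid_sample(right_pred, grid, align_corners=True)

    # Penalize left/right disagreement only inside the inpainted region.
    return (hole_mask * (left_pred - right_warped).abs()).mean()
```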

References

Showing 1-10 of 37 references

StructureFlow: Image Inpainting via Structure-Aware Appearance Flow

A two-stage model is proposed that splits the inpainting task into two parts, structure reconstruction and texture generation, and shows superior performance on multiple publicly available datasets.

3D Photography Using Context-Aware Layered Depth Inpainting

A learning-based inpainting model is presented that iteratively synthesizes new local color-and-depth content into the occluded region in a spatial context-aware manner and can be efficiently rendered with motion parallax using standard graphics engines.

CNN-Based Stereoscopic Image Inpainting

This paper is the first to address stereoscopic inpainting within a CNN framework, presenting an end-to-end network composed of two encoders for independent feature extraction, a feature fusion module for stereo-coherent structure prediction, and two decoders that generate a pair of completed images.
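
As a structural illustration only (channel widths and the concatenation-based fusion are assumptions, not that paper's actual design), a two-encoder / fusion / two-decoder layout of the kind described above might look like this:

```python
import torch
import torch.nn as nn

class StereoInpaintingNet(nn.Module):
    """Minimal sketch of a two-encoder / fusion / two-decoder layout.
    Channel counts and the concatenation-based fusion are assumptions."""

    def __init__(self, ch=64):
        super().__init__()
        def enc():
            return nn.Sequential(
                nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        def dec():
            return nn.Sequential(
                nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
            )
        self.enc_left, self.enc_right = enc(), enc()          # independent feature extraction
        self.fuse = nn.Conv2d(ch * 4, ch * 2, 3, padding=1)   # stereo feature fusion
        self.dec_left, self.dec_right = dec(), dec()          # one decoder per view

    def forward(self, left, right):
        f_l, f_r = self.enc_left(left), self.enc_right(right)
        fused = torch.relu(self.fuse(torch.cat([f_l, f_r], dim=1)))
        return self.dec_left(fused), self.dec_right(fused)
```

Calling `StereoInpaintingNet()(left, right)` with inputs of shape (B, 3, H, W), H and W divisible by 4, returns one completed image per view.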

Mask-Specific Inpainting with Deep Neural Networks

This work directly learns a mapping from image patches corrupted by missing pixels onto complete image patches. The mapping is represented as a deep neural network that is automatically trained on a large image dataset to exploit the shape information of the missing regions.

Stereoscopic inpainting: Joint color and depth completion from stereo images

A novel algorithm takes stereo images and estimated disparity maps as input and fills in missing color and depth information introduced by occlusions or object removal; its effectiveness is demonstrated on several challenging datasets.

SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting

This paper proposes to introduce semantic segmentation information, which disentangles inter-class differences and intra-class variation for image inpainting, leading to a much clearer recovered boundary between semantically different regions and better texture within semantically consistent segments.

Inpainting of Wide-Baseline Multiple Viewpoint Video

This work describes a non-parametric algorithm for multiple-viewpoint video inpainting and demonstrates the removal of large objects on challenging indoor and outdoor MVV exhibiting cluttered, dynamic backgrounds and moving cameras.

Image Inpainting for Irregular Holes Using Partial Convolutions

This work proposes the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels, and outperforms other methods for irregular masks.
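
The masked-and-renormalized convolution can be sketched in a few lines. The module below is a simplified reading of the idea, not the reference implementation: the input is masked, each output window is rescaled by the fraction of valid pixels it covers, and the mask is propagated to the next layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Simplified partial convolution: mask the input, renormalize the
    output by the number of valid pixels per window, update the mask."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # Fixed all-ones kernel used to count valid pixels per window.
        self.register_buffer(
            "mask_kernel", torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding
        self.window_size = kernel_size * kernel_size

    def forward(self, x, mask):
        # mask: 1 for valid pixels, 0 for holes, shape (B, 1, H, W)
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.mask_kernel,
                                   stride=self.stride, padding=self.padding)
        new_mask = (valid_count > 0).float()
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = self.conv(x * mask)
        # Renormalize, add bias, and zero out windows with no valid pixels.
        out = (out * scale + self.bias.view(1, -1, 1, 1)) * new_mask
        return out, new_mask
```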

Generative Image Inpainting with Contextual Attention

This work proposes a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.
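
The "surrounding image features as references" idea (contextual attention) can be illustrated with a simplified, pixel-level sketch: each hole location borrows features from known locations weighted by cosine similarity. The original work matches 3x3 patches and adds refinements this sketch omits.

```python
import torch
import torch.nn.functional as F

def contextual_attention(features, hole_mask, temperature=10.0):
    """Simplified contextual attention over single pixels.

    features:  (B, C, H, W) feature map
    hole_mask: (B, 1, H, W), 1 inside the hole
    """
    b, c, h, w = features.shape
    flat = F.normalize(features.view(b, c, h * w), dim=1)   # unit-norm features
    attn = torch.bmm(flat.transpose(1, 2), flat)            # (B, N, N) cosine similarities
    # Only attend to background (known) positions.
    bg = (1.0 - hole_mask).view(b, 1, h * w)
    attn = attn.masked_fill(bg == 0, float("-inf"))
    attn = torch.softmax(attn * temperature, dim=-1)
    # Each position i is reconstructed as a weighted sum of background features.
    out = torch.bmm(features.view(b, c, h * w), attn.transpose(1, 2)).view(b, c, h, w)
    # Keep original features outside the hole; fill inside with attended ones.
    return features * (1 - hole_mask) + out * hole_mask
```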

Context Encoders: Feature Learning by Inpainting

It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.