Very Long Natural Scenery Image Prediction by Outpainting

@article{Yang2019VeryLN,
  title={Very Long Natural Scenery Image Prediction by Outpainting},
  author={Zongxin Yang and Jian Dong and Ping Liu and Yi Yang and Shuicheng Yan},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={10560-10569}
}
Compared with image inpainting, image outpainting has received less attention because of two challenges. The first is how to keep spatial and content consistency between the generated images and the original input. The second is how to maintain high quality in the generated results, especially for multi-step generation, in which the generated regions are spatially far away from the initial input. To solve these two problems, we devise some innovative modules, named Skip Horizontal Connection…
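
The multi-step generation described above can be pictured with a minimal PyTorch sketch: each step encodes the current image strip, fuses it with features carried horizontally from the previous step, and decodes the next strip, which is then fed back in. The module names, layer sizes, and concatenation-based fusion below are illustrative assumptions, not the authors' Skip Horizontal Connection implementation.

import torch
import torch.nn as nn

class StepPredictor(nn.Module):
    # One outpainting step: encode the current strip, fuse it with a
    # horizontal skip from the previous step, decode the next strip.
    def __init__(self, ch=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(2 * ch, 3, 4, 2, 1), nn.Tanh())

    def forward(self, strip, prev_feat):
        feat = self.enc(strip)
        fused = torch.cat([feat, prev_feat], dim=1)  # horizontal connection (assumed fusion)
        return self.dec(fused), feat

# Multi-step rollout: each generated strip becomes the next input, so
# quality must survive in regions far from the original image.
model = StepPredictor()
strip = torch.randn(1, 3, 128, 128)   # rightmost strip of the input
prev = torch.zeros(1, 64, 64, 64)     # no previous features at step 0
outputs = []
for _ in range(4):                    # predict four strips to the right
    strip, prev = model(strip, prev)
    outputs.append(strip)
panorama = torch.cat(outputs, dim=3)  # stitch along the width
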
Outpainting Natural Scenery Images by Fusing Forecasting Information
TLDR
A novel Multi-view Recurrent Content Transfer module is embedded into an encoder-decoder architecture for long-range, all-side image outpainting, and a multi-head attention mechanism is leveraged to fuse information from different representation subspaces at different positions, enhancing the consistency between the generated images and the original input.
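
A minimal sketch of the attention-based fusion this summary describes, using PyTorch's stock nn.MultiheadAttention. Treating generated-region features as queries over the input features, and the tensor shapes, are assumptions for illustration, not the paper's published code.

import torch
import torch.nn as nn

B, C, H, W = 1, 256, 16, 16         # assumed feature-map shape
inp_feat = torch.randn(B, C, H, W)  # features of the original input
gen_feat = torch.randn(B, C, H, W)  # features of the region being generated

attn = nn.MultiheadAttention(embed_dim=C, num_heads=8, batch_first=True)

# Flatten spatial dimensions into token sequences: B x (H*W) x C.
q = gen_feat.flatten(2).transpose(1, 2)   # queries from the generated region
kv = inp_feat.flatten(2).transpose(1, 2)  # keys/values from the input

# Each head attends in a different representation subspace, letting the
# generated region pull consistent content from the input.
fused, _ = attn(q, kv, kv)
fused = fused.transpose(1, 2).reshape(B, C, H, W)
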
Boosting Image Outpainting with Semantic Layout Prediction
TLDR
This work decomposes the outpainting task into two stages and trains a GAN to extend regions in the semantic segmentation domain instead of the image domain, which handles semantic clues more easily and hence works better in complex scenarios.
ReGO: Reference-Guided Outpainting for Scenery Image
TLDR
This work investigates a principled way to synthesize texture-rich results by borrowing pixels from neighboring images, named Reference-Guided Outpainting (ReGO), and designs an Adaptive Content Selection (ACS) module that transfers pixels from reference images to compensate for the texture of the target image.
Generalised Image Outpainting with U-Transformer
TLDR
A novel transformer-based generative adversarial network called U-Transformer is developed that can extend image borders with plausible structure and details, even for complicated scenery images; experiments demonstrate that the proposed method produces visually appealing results for generalized image outpainting compared with state-of-the-art image outpainting approaches.
Sketch-Guided Scenery Image Outpainting
TLDR
This work proposes an encoder-decoder network for sketch-guided outpainting, where two alignment modules are adopted to make the generated content realistic and consistent with the provided sketches.
Image-Adaptive Hint Generation via Vision Transformer for Outpainting
TLDR
Experiments show that the image-adaptive hint framework, when employed in representative inpainting networks, consistently improves their performance compared with other outpainting-to-inpainting conversion techniques on the SUN and Beach benchmark datasets.
Bridging the Visual Gap: Wide-Range Image Blending
TLDR
An effective deep-learning model is introduced for wide-range image blending, where a novel Bidirectional Content Transfer module performs conditional prediction of the feature representation of the intermediate region via recurrent neural networks.
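
A rough sketch of bidirectional prediction for the intermediate region, under stated assumptions: one LSTM summarizes the left image and rolls its own predictions rightward into the gap, a second does the same leftward from the right image, and the two passes are averaged. The feature-column abstraction, sizes, and averaging fusion are illustrative choices, not the paper's Bidirectional Content Transfer module.

import torch
import torch.nn as nn

C, steps = 256, 8                     # channels per feature column, gap width (assumed)
lstm_fwd = nn.LSTM(input_size=C, hidden_size=C, batch_first=True)
lstm_bwd = nn.LSTM(input_size=C, hidden_size=C, batch_first=True)

left = torch.randn(1, 16, C)          # feature columns of the left image
right = torch.randn(1, 16, C)         # feature columns of the right image

_, state_f = lstm_fwd(left)           # summarize left-to-right context
_, state_b = lstm_bwd(right.flip(1))  # summarize right-to-left context

def rollout(lstm, state, seed, n):
    # Autoregressively feed each predicted column back in as the next input.
    cols, x = [], seed
    for _ in range(n):
        x, state = lstm(x, state)
        cols.append(x)
    return torch.cat(cols, dim=1)

mid_f = rollout(lstm_fwd, state_f, left[:, -1:], steps)
mid_b = rollout(lstm_bwd, state_b, right[:, :1], steps).flip(1)
middle = 0.5 * (mid_f + mid_b)        # naive fusion of the two directions
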
SiENet: Siamese Expansion Network for Image Extrapolation
TLDR
A novel two-stage siamese adversarial model for image extrapolation, named Siamese Expansion Network (SiENet), is proposed; it is designed to let the encoder predict the unknown content, alleviating the burden on the decoder.
Painting Outside as Inside: Edge Guided Image Outpainting via Bidirectional Rearrangement with Progressive Step Learning
TLDR
A novel image outpainting method using bidirectional boundary region rearrangement is proposed that generates new images with 360° panoramic characteristics, and it is compared with other state-of-the-art outpainting and inpainting methods both qualitatively and quantitatively.
In-N-Out: Towards Good Initialization for Inpainting and Outpainting
TLDR
It is empirically shown that In-N-Out, which explores complementary information, takes advantage over traditional pipelines where only task-specific learning takes place during training, and achieves better results than an existing training approach for outpainting.
