Very Long Natural Scenery Image Prediction by Outpainting
@article{Yang2019VeryLN,
  title   = {Very Long Natural Scenery Image Prediction by Outpainting},
  author  = {Zongxin Yang and Jian Dong and Ping Liu and Yi Yang and Shuicheng Yan},
  journal = {2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year    = {2019},
  pages   = {10560-10569}
}
Compared to image inpainting, image outpainting has received less attention because of two challenges. The first is how to keep spatial and content consistency between the generated regions and the original input. The second is how to maintain high quality in the generated results, especially for multi-step generation, in which generated regions are spatially far from the initial input. To address these two problems, we devise several novel modules, named Skip Horizontal Connection…
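The abstract truncates the module description, so the sketch below is purely illustrative rather than the authors' design: it shows one plausible reading of a "skip horizontal connection", where encoder features of the known region are concatenated with same-resolution decoder features of the region currently being extrapolated. All names and shapes are assumptions.

```python
# Illustrative sketch only (not the paper's released code): fuse known-region
# encoder features with decoder features of the region being generated.
import torch
import torch.nn as nn

class SkipHorizontalBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # enc_feat, dec_feat: (N, C, H, W) feature maps at the same resolution.
        return self.act(self.fuse(torch.cat([enc_feat, dec_feat], dim=1)))

if __name__ == "__main__":
    block = SkipHorizontalBlock(channels=64)
    enc, dec = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
    print(block(enc, dec).shape)  # torch.Size([1, 64, 32, 32])
```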
44 Citations
Outpainting Natural Scenery Images by Fusing Forecasting Information
- Computer Science · Journal of Physics: Conference Series
- 2022
A novel Multi-view Recurrent Content Transfer module is embedded into an encoder-decoder architecture for long-range all-side image outpainting, and a multi-head attention mechanism is leveraged to fuse information from different representation sub-spaces at different positions to enhance the consistency between the generated images and the original input.
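A hedged sketch of the fusion mechanism this entry describes: multi-head attention lets features of the predicted region attend to features of the original input across sub-spaces and positions. It uses the generic torch.nn.MultiheadAttention; the module name MultiViewFusion and all shapes are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads,
                                          batch_first=True)

    def forward(self, generated: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # generated: (N, L_gen, dim) features of the region being predicted.
        # context:   (N, L_ctx, dim) features of the original input.
        fused, _ = self.attn(query=generated, key=context, value=context)
        return fused + generated  # residual keeps the generated content

if __name__ == "__main__":
    fusion = MultiViewFusion()
    out = fusion(torch.randn(2, 64, 256), torch.randn(2, 128, 256))
    print(out.shape)  # torch.Size([2, 64, 256])
```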
Boosting Image Outpainting with Semantic Layout Prediction
- Computer Science · ArXiv
- 2021
This work decomposes the outpainting task into two stages, and trains a GAN to extend regions in semantic segmentation domain instead of image domain, which can handle semantic clues more easily and hence works better in complex scenarios.
ReGO: Reference-Guided Outpainting for Scenery Image
- Computer Science · ArXiv
- 2021
This work investigates a principled way to synthesize texture-rich results by borrowing pixels from neighboring images, named Reference-Guided Outpainting (ReGO), which designs an Adaptive Content Selection (ACS) module to transfer pixels from reference images to compensate the texture of the target one.
Generalised Image Outpainting with U-Transformer
- Computer Science, Art · ArXiv
- 2022
A novel transformer-based generative adversarial network called U-Transformer, able to extend image borders with plausible structure and details even for complicated scenery images, is developed, and it is experimentally demonstrated that the proposed method produces visually appealing results for generalized image outpainting compared with state-of-the-art image outpainting approaches.
Sketch-Guided Scenery Image Outpainting
- Computer Science · IEEE Transactions on Image Processing
- 2021
This work proposes an encoder-decoder based network to conduct sketch-guided outpainting, where two alignment modules are adopted to impose the generated content to be realistic and consistent with the provided sketches.
Image-Adaptive Hint Generation via Vision Transformer for Outpainting
- Computer Science, Art · 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
- 2022
Experiments show that the image-adaptive hint framework, when employed in representative inpainting networks, consistently improves their performance compared to other techniques for converting outpainting into inpainting, on the SUN and Beach benchmark datasets.
Bridging the Visual Gap: Wide-Range Image Blending
- Computer Science · 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
An effective deep-learning model is introduced to realize wide-range image blending, where a novel Bidirectional Content Transfer module is proposed to perform the conditional prediction for the feature representation of the intermediate region via recurrent neural networks.
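A hedged sketch of the recurrent idea described in this entry: per-column features from the left and right images are passed through forward and backward recurrences, and the hidden states are used to predict feature columns for the intermediate region. The class name, placeholder initialization, and shapes are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class BidirectionalContentTransfer(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.rnn = nn.LSTM(input_size=dim, hidden_size=dim,
                           batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, left_cols: torch.Tensor, right_cols: torch.Tensor,
                mid_len: int) -> torch.Tensor:
        # left_cols / right_cols: (N, L, dim) per-column features of the two inputs.
        n, _, d = left_cols.shape
        # Placeholder columns stand in for the unknown intermediate region.
        mid = torch.zeros(n, mid_len, d, device=left_cols.device)
        seq = torch.cat([left_cols, mid, right_cols], dim=1)
        out, _ = self.rnn(seq)                  # (N, L_total, 2*dim)
        out = self.proj(out)                    # (N, L_total, dim)
        start = left_cols.size(1)
        return out[:, start:start + mid_len]    # predicted intermediate columns

if __name__ == "__main__":
    bct = BidirectionalContentTransfer()
    pred = bct(torch.randn(1, 16, 256), torch.randn(1, 16, 256), mid_len=8)
    print(pred.shape)  # torch.Size([1, 8, 256])
```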
SiENet: Siamese Expansion Network for Image Extrapolation
- Computer Science · IEEE Signal Processing Letters
- 2020
A novel two-stage siamese adversarial model for image extrapolation, named Siamese Expansion Network (SiENet), is proposed; it is designed to allow the encoder to predict the unknown content, alleviating the burden on the decoder.
Painting Outside as Inside: Edge Guided Image Outpainting via Bidirectional Rearrangement with Progressive Step Learning
- Computer Science · 2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
- 2021
A novel image outpainting method using bidirectional boundary region rearrangement that generates new images with 360° panoramic characteristics, compared with other state-of-the-art outpainting and inpainting methods both qualitatively and quantitatively.
In-N-Out: Towards Good Initialization for Inpainting and Outpainting
- Computer Science · ArXiv
- 2021
It is empirically shown that In-N-Out, which explores complementary information, takes advantage over traditional pipelines in which only task-specific learning takes place during training, and achieves better results than an existing training approach for outpainting.
References
Showing 1-10 of 34 references
Generative Image Inpainting with Contextual Attention
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.
High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis
- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
This work proposes a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network.
Context Encoders: Feature Learning by Inpainting
- Computer Science · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
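A hedged sketch of the training objective behind the context encoder described above: a masked reconstruction loss on the missing region plus an adversarial loss that pushes the completion toward realism. The helper name and loss weighting below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def context_encoder_loss(pred, target, mask, d_fake_logits, lambda_adv=0.001):
    # pred/target: (N, 3, H, W) images; mask: (N, 1, H, W), 1 on the missing region.
    rec = F.mse_loss(pred * mask, target * mask)
    # The generator tries to make the discriminator label the completion as real.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return rec + lambda_adv * adv  # lambda_adv is an assumed weighting
```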
Scribbler: Controlling Deep Image Synthesis with Sketch and Color
- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
A deep adversarial image synthesis architecture conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces is proposed, and a sketch-based image synthesis system is demonstrated which allows users to scribble over the sketch to indicate the preferred color for objects.
Deep multi-scale video prediction beyond mean square error
- Computer Science · ICLR
- 2016
This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.
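A hedged sketch of the image gradient difference loss mentioned in this entry: it penalizes the difference between spatial gradients of the predicted and target frames, which sharpens predictions compared to plain mean squared error. The choice alpha=1 and the reduction by mean are assumptions.

```python
import torch

def gradient_difference_loss(pred: torch.Tensor, target: torch.Tensor,
                             alpha: float = 1.0) -> torch.Tensor:
    # pred, target: (N, C, H, W)
    pred_dy = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs()
    pred_dx = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs()
    tgt_dy = (target[:, :, 1:, :] - target[:, :, :-1, :]).abs()
    tgt_dx = (target[:, :, :, 1:] - target[:, :, :, :-1]).abs()
    # Penalize mismatch between predicted and target gradient magnitudes.
    return ((pred_dy - tgt_dy).abs() ** alpha).mean() + \
           ((pred_dx - tgt_dx).abs() ** alpha).mean()
```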
Image-to-Image Translation with Conditional Adversarial Networks
- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
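A hedged sketch of the generator objective commonly used for such conditional image-to-image translation: an adversarial term conditioned on the input plus an L1 term toward the target. The weighting lambda_l1=100 mirrors a commonly reported setting but is an assumption here.

```python
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(d_fake_logits, fake, target, lambda_l1=100.0):
    # Adversarial term: fool the conditional discriminator.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # L1 term: stay close to the ground-truth translation target.
    l1 = F.l1_loss(fake, target)
    return adv + lambda_l1 * l1
```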
Globally and locally consistent image completion
- Computer Science, Mathematics · ACM Trans. Graph.
- 2017
We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary…
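A hedged sketch of the global/local consistency idea this entry refers to: one discriminator branch looks at the whole completed image, another at a crop around the filled region, and their features are fused into a single real/fake score. The architecture below is heavily simplified and the layer widths are assumptions.

```python
import torch
import torch.nn as nn

def conv_branch(in_ch: int = 3, width: int = 64) -> nn.Sequential:
    # Tiny stand-in for the paper's deeper convolutional branches.
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
        nn.Conv2d(width, width * 2, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class GlobalLocalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.global_branch = conv_branch()
        self.local_branch = conv_branch()
        self.head = nn.Linear(2 * 128, 1)

    def forward(self, full_image: torch.Tensor, local_patch: torch.Tensor):
        g = self.global_branch(full_image)            # (N, 128)
        l = self.local_branch(local_patch)            # (N, 128)
        return self.head(torch.cat([g, l], dim=1))    # real/fake logit

if __name__ == "__main__":
    d = GlobalLocalDiscriminator()
    print(d(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 128, 128)).shape)
```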
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
- Computer Science · NIPS
- 2015
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Very Deep Convolutional Networks for Large-Scale Image Recognition
- Computer Science · ICLR
- 2015
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Multi-Scale Context Aggregation by Dilated Convolutions
- Computer Science · ICLR
- 2016
This work develops a new convolutional network module that is specifically designed for dense prediction, and shows that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems.
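A hedged sketch of the context-aggregation pattern this entry describes: stacking 3x3 convolutions with exponentially increasing dilation grows the receptive field without reducing resolution. The channel width and depth below are assumptions.

```python
import torch
import torch.nn as nn

def dilated_context_module(channels: int = 64, depth: int = 4) -> nn.Sequential:
    layers = []
    for i in range(depth):
        dilation = 2 ** i  # 1, 2, 4, 8, ...
        layers += [nn.Conv2d(channels, channels, kernel_size=3,
                             padding=dilation, dilation=dilation),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

if __name__ == "__main__":
    module = dilated_context_module()
    print(module(torch.randn(1, 64, 64, 64)).shape)  # spatial size is preserved
```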