Corpus ID: 246063957

GeoFill: Reference-Based Image Inpainting of Scenes with Complex Geometry

@article{Zhao2022GeoFillRI,
  title={GeoFill: Reference-Based Image Inpainting of Scenes with Complex Geometry},
  author={Yunhan Zhao and Connelly Barnes and Yuqian Zhou and Eli Shechtman and Sohrab Amirghodsi and Charless C. Fowlkes},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.08131}
}
Reference-guided image inpainting restores missing pixels by leveraging content from another reference image. The previous state of the art, TransFill, warps the source image with multiple homographies and fuses the warps together for hole filling. Inspired by structure-from-motion pipelines and recent progress in monocular depth estimation, we propose a more principled approach that does not require heuristic planar assumptions. We leverage a monocular depth estimate and predict relative pose… 
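A minimal sketch of the depth-based reprojection the abstract describes, assuming a pinhole camera model: each target pixel is back-projected through its monocular depth estimate, transformed by the predicted relative pose, and projected into the reference image to pull content into the hole. The function name, the nearest-neighbor sampling, and the shared intrinsics K are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def warp_reference_to_target(reference, depth, K, R, t):
    """Warp `reference` (h, w, 3) into the target view by reprojecting each
    target pixel through its estimated depth. `depth` (h, w) is the target
    view's monocular depth; (R, t) maps target camera coords to reference
    camera coords; K holds the shared pinhole intrinsics."""
    h, w = depth.shape
    # Target pixel grid in homogeneous coordinates, shape (3, h*w).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    # Back-project target pixels to 3D points in the target camera frame.
    pts = np.linalg.inv(K) @ pix * depth.ravel()
    # Transform into the reference camera frame and project.
    pts_ref = R @ pts + t[:, None]
    proj = K @ pts_ref
    uv = proj[:2] / np.clip(proj[2], 1e-6, None)
    # Sample reference colors where the projection lands inside the image
    # (nearest-neighbor sampling for brevity).
    x = np.round(uv[0]).astype(int)
    y = np.round(uv[1]).astype(int)
    valid = (x >= 0) & (x < w) & (y >= 0) & (y < h)
    warped = np.zeros_like(reference)
    flat = warped.reshape(-1, reference.shape[-1])
    flat[valid] = reference[y[valid], x[valid]]
    return warped
```

In this line of work the warp would then be masked to the hole region and refined for color and geometric consistency.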
Towards Unified Keyframe Propagation Models
TLDR
This work presents a two-stream approach in which high-frequency features interact locally and low-frequency features interact globally, and evaluates it on inpainting tasks; experiments show that it improves both the propagation of features within a single frame, as required for image inpainting, and their propagation from keyframes to target frames.
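A hedged sketch of that two-stream idea, assuming PyTorch: a pooled low-frequency stream interacts globally via attention while the full-resolution high-frequency residual interacts locally via convolution. The module names, pooling factor, and head count are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoStreamBlock(nn.Module):
    """Toy two-stream block: global attention on a pooled low-frequency
    stream, local convolution on the high-frequency residual."""

    def __init__(self, channels: int = 64, heads: int = 4, pool: int = 4):
        super().__init__()
        self.p = pool
        self.down = nn.AvgPool2d(pool)          # low-pass / downsample
        self.up = nn.Upsample(scale_factor=pool, mode="bilinear",
                              align_corners=False)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.local = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (b, c, h, w) with h, w divisible by the pooling factor.
        b, c, h, w = x.shape
        low = self.down(x)
        tokens = low.flatten(2).transpose(1, 2)        # (b, hw/p^2, c)
        tokens, _ = self.attn(tokens, tokens, tokens)  # global interaction
        low = self.up(tokens.transpose(1, 2)
                      .reshape(b, c, h // self.p, w // self.p))
        high = self.local(x - self.up(self.down(x)))   # local interaction
        return low + high

# Usage: y = TwoStreamBlock(64)(torch.randn(1, 64, 32, 32))
```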

References

Showing 1-10 of 64 references
TransFill: Reference-guided Image Inpainting by Merging Multiple Color and Spatial Transformations
TLDR
This paper proposes TransFill, a multi-homography transformed fusion method that fills the hole by referring to another source image sharing scene content with the target image, and generalizes to user-provided image pairs.
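For contrast with the depth-based sketch above, a hedged sketch of the multi-homography idea this TLDR describes. The uniform averaging used for fusion here is a placeholder assumption; TransFill instead learns per-pixel fusion weights and color/spatial refinements.

```python
import cv2
import numpy as np

def multi_homography_fill(target, reference, mask, homographies):
    """Fill `mask` pixels of `target` by fusing several homography warps of
    `reference`. `homographies` is a list of 3x3 matrices mapping reference
    coordinates to target coordinates."""
    h, w = target.shape[:2]
    warps = [cv2.warpPerspective(reference, H, (w, h)) for H in homographies]
    # Naive fusion: average the candidate warps.
    fused = np.mean(warps, axis=0).astype(target.dtype)
    out = target.copy()
    out[mask > 0] = fused[mask > 0]
    return out
```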
Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed Scenes
TLDR
This paper proposes a Semantic Guidance and Evaluation Network (SGE-Net) that iteratively updates the structural priors and the inpainted image in an interplay framework of semantics extraction and image inpainting, using a semantic segmentation map as guidance at each scale of inpainting.
Guiding Monocular Depth Estimation Using Depth-Attention Volume
TLDR
This paper proposes guiding depth estimation to favor planar structures, which are ubiquitous especially in indoor environments, by incorporating a non-local coplanarity constraint into the network with a novel attention mechanism called the depth-attention volume (DAV).
Depth Map Prediction from a Single Image using a Multi-Scale Deep Network
TLDR
This paper employs two deep network stacks, one that makes a coarse global prediction from the entire image and another that refines this prediction locally, and applies a scale-invariant error to measure depth relations rather than absolute scale.
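For reference, the scale-invariant error mentioned in this TLDR has a standard closed form; with d_i the per-pixel difference of log depths between prediction y and ground truth y*:

```latex
d_i = \log y_i - \log y_i^*, \qquad
D(y, y^*) = \frac{1}{n}\sum_{i=1}^{n} d_i^2
          - \frac{1}{n^2}\Bigl(\sum_{i=1}^{n} d_i\Bigr)^2
```

The second term credits any prediction that is correct up to a global log-scale shift, so the error measures relative depth relations rather than absolute scale; the paper trains with a lambda-weighted variant of this expression.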
High Quality Monocular Depth Estimation via Transfer Learning
TLDR
A convolutional neural network for computing a high-resolution depth map from a single RGB image with the help of transfer learning; it outperforms the state of the art on two datasets and produces qualitatively better results that capture object boundaries more faithfully.
From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation
TLDR
This paper proposes a network architecture that utilizes novel local planar guidance layers located at multiple stages of the decoding phase, outperforming state-of-the-art works by a significant margin on challenging benchmarks.
Foreground-Aware Image Inpainting
Wei Xiong, Jiahui Yu, Jiebo Luo. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
TLDR
This work proposes a foreground-aware image inpainting system that explicitly disentangles structure inference from content completion, and shows that with such disentanglement the contour completion model predicts reasonable object contours and further substantially improves inpainting performance.
Deeper Depth Prediction with Fully Convolutional Residual Networks
TLDR
A fully convolutional architecture encompassing residual learning is proposed to model the ambiguous mapping between monocular images and depth maps, along with a novel way to efficiently learn feature-map up-sampling within the network.
Towards Better Generalization: Joint Depth-Pose Learning Without PoseNet
TLDR
This paper proposes a novel system that explicitly disentangles scale from the network estimation, achieves state-of-the-art results among self-supervised learning-based methods on the KITTI Odometry and NYUv2 datasets, and presents interesting findings on the limited generalization ability of PoseNet-based relative pose estimation methods.
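Illustrative only: one common way to disentangle scale in joint depth-pose learning is to normalize predicted depth to unit mean before computing losses, so scale drift cannot hide inside the depth head. The names here are assumptions for the sketch, not the paper's code.

```python
import torch

def normalize_depth_scale(depth: torch.Tensor, eps: float = 1e-7):
    """Divide each (B, 1, H, W) depth map by its mean, returning the
    scale-free depth and the per-image scale factor."""
    scale = depth.mean(dim=(1, 2, 3), keepdim=True).clamp(min=eps)
    return depth / scale, scale
```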
High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis
TLDR
This work proposes a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network.