Corpus ID: 209202376

DeepMeshFlow: Content Adaptive Mesh Deformation for Robust Image Registration

@article{Ye2019DeepMeshFlowCA,
  title={DeepMeshFlow: Content Adaptive Mesh Deformation for Robust Image Registration},
  author={Nianjin Ye and Chuan Wang and Shuaicheng Liu and Lanpeng Jia and Jue Wang and Yongqing Cui},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.05131}
}
Image alignment by mesh warps, such as MeshFlow, is a fundamental task that is widely applied in vision applications (e.g., multi-frame HDR/denoising, video stabilization). Traditional mesh warp methods detect and match image features, so the quality of the alignment depends heavily on the quality of those features. However, image features are not robust in low-texture and low-light scenes. Deep homography methods, on the other hand, are free from such problems by… 
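As a rough illustration of the mesh-based motion model the abstract refers to, the sketch below assigns each mesh vertex the median motion of nearby feature matches, in the spirit of MeshFlow's per-vertex motion accumulation. All function and parameter names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def meshflow_vertex_motions(feat_pts, feat_motions, mesh_w, mesh_h,
                            img_w, img_h, radius):
    """Toy MeshFlow-style sketch: each mesh vertex takes the median
    motion of the feature matches within `radius` pixels of it.
    Names and the radius heuristic are illustrative, not from the paper."""
    xs = np.linspace(0.0, img_w, mesh_w)   # vertex x-coordinates
    ys = np.linspace(0.0, img_h, mesh_h)   # vertex y-coordinates
    motions = np.zeros((mesh_h, mesh_w, 2))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            # distances from this vertex to all matched feature points
            d = np.linalg.norm(feat_pts - np.array([x, y]), axis=1)
            near = feat_motions[d < radius]
            if len(near):
                # median is robust to outlier matches
                motions[i, j] = np.median(near, axis=0)
    return motions
```

With a dense, uniform set of matches this reduces to the underlying global motion; the median makes each vertex robust to a few bad correspondences, which is the appeal of mesh warps over a single global homography.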

Figures and Tables from this paper

Citations
Depth-Aware Multi-Grid Deep Homography Estimation With Contextual Correlation

TLDR
A contextual correlation layer (CCL) is designed that can efficiently capture the long-range correlation within feature maps and can be flexibly used in a learning framework to predict multi-grid homography from global to local.

TransFill: Reference-guided Image Inpainting by Merging Multiple Color and Spatial Transformations

TLDR
This paper proposes TransFill, a multi-homography transformed fusion method to fill the hole by referring to another source image that shares scene contents with the target image, and generalizes to user-provided image pairs.

VR content creation and exploration with deep learning: A survey

TLDR
Recent research that uses fully convolutional networks and generative adversarial networks for VR content creation and exploration is surveyed, and possible future directions in this active and emerging research area are discussed.

References

Showing 1-10 of 33 references

Content-Aware Unsupervised Deep Homography Estimation

TLDR
This work proposes an unsupervised deep homography method with a new architecture design that outperforms the state-of-the-art including deep solutions and feature-based solutions.

Dual-Feature Warping-Based Motion Model Estimation

TLDR
This paper proposes a simple and effective approach that uses both keypoint and line-segment correspondences as data terms, which not only helps guide the estimation to a correct warp in low-texture conditions, but also prevents the undesired distortion induced by warping.

MeshFlow: Minimum Latency Online Video Stabilization

TLDR
Quantitative and qualitative evaluations show that the proposed online video stabilization technique, which uses a novel MeshFlow motion model with only one frame of latency, produces results comparable to state-of-the-art offline methods.

Content-preserving warps for 3D video stabilization

TLDR
A technique is presented that transforms a video from a hand-held camera so that it appears as if it were taken with a directed camera motion, together with algorithms that can effectively recreate dynamic scenes from a single source video.

Deep Image Homography Estimation

TLDR
Two convolutional neural network architectures are presented for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies.
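The 4-point parameterization used by HomographyNet-style regressors (corner displacements rather than raw matrix entries) can be converted back into a 3x3 homography by solving a small linear system (DLT). The function below is a generic sketch of that conversion, not code from the paper.

```python
import numpy as np

def homography_from_4pt(src, dst):
    """Solve the 3x3 homography mapping 4 source corners to 4
    destination corners via the Direct Linear Transform (DLT).
    Generic textbook construction, shown here to illustrate the
    4-point parameterization; not the paper's code."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear constraints on h
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(A, dtype=np.float64)
    # the homography is the null-space vector of A (last right singular vector)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1
```

Regressing the four corner offsets and recovering H this way avoids the badly scaled mix of rotation, translation, and perspective terms in the raw matrix, which is why the 4-point form is the usual training target.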

Seamless Video Stitching from Hand‐held Camera Inputs

TLDR
This paper presents the first system to stitch videos captured by hand‐held cameras; it uses the CoSLAM system and generates a smooth virtual camera path that stays in the middle of the original paths.

Fast burst images denoising

TLDR
A fast denoising method that produces a clean image from a burst of noisy images by introducing a lightweight camera motion representation called homography flow and a mechanism of selecting consistent pixels for temporal fusion to handle scene motion during the capture.

Constructing image panoramas using dual-homography warping

This paper describes a method to construct seamless image mosaics of a panoramic scene containing two predominant planes: a distant back plane and a ground plane that sweeps out from the camera's… 

FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks

TLDR
The concept of end-to-end learning of optical flow is advanced and shown to work well, and faster variants are presented that allow optical flow computation at up to 140 fps with accuracy matching the original FlowNet.

As-Projective-As-Possible Image Stitching with Moving DLT

TLDR
This work investigates projective estimation under model inadequacies, i.e., when the underpinning assumptions of the projective model are not fully satisfied by the data, and proposes as-projective-as-possible warps that aim to be globally projective, yet allow local non-projective deviations to account for violations of the assumed imaging conditions.