Deep Iterative Frame Interpolation for Full-frame Video Stabilization

@article{Choi2020DeepIF,
  title={Deep Iterative Frame Interpolation for Full-frame Video Stabilization},
  author={Jinsoo Choi and In So Kweon},
  journal={ACM Transactions on Graphics (TOG)},
  year={2020},
  volume={39},
  pages={1--9}
}
Video stabilization is a fundamental technique for producing higher-quality videos. Prior works have explored video stabilization extensively, but most involve cropping the frame boundaries and introduce moderate levels of distortion. We present a novel deep approach to video stabilization that can generate video frames without cropping and with low distortion. The proposed framework utilizes frame interpolation techniques to generate in-between frames, leading to reduced inter-frame…
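The core idea, repeatedly synthesizing each frame as the interpolated midpoint of its temporal neighbours so that the implied camera path converges toward a smooth trajectory, can be illustrated at the trajectory level. The following is a minimal numerical sketch, not the paper's network; the 1D camera path and iteration count are hypothetical:

```python
import numpy as np

# Hypothetical jittery 1D camera trajectory (one coordinate per frame).
path = np.array([0.0, 2.5, 1.0, 4.0, 2.0, 5.5, 4.0, 7.0])

def iterative_midpoint_smooth(path, iterations=3):
    """Each pass replaces every interior frame position with the midpoint
    of its neighbours: the trajectory-level analogue of synthesizing
    frame t by interpolating halfway between frames t-1 and t+1."""
    p = path.copy()
    for _ in range(iterations):
        p[1:-1] = 0.5 * (p[:-2] + p[2:])
    return p

smoothed = iterative_midpoint_smooth(path)
```

Each iteration damps high-frequency jitter (measured, e.g., by the second differences of the path) while leaving the endpoints fixed, mirroring how repeated interpolation progressively stabilizes the sequence.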
Deep Homography-Based Video Stabilization
TLDR: This approach aims at combining the strengths of both deep learning and traditional methods: the ability of STNs to estimate motion parameters between two frames and the effectiveness of moving averages to smooth camera paths.
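The moving-average smoothing that these hybrid methods pair with learned motion estimation is straightforward to sketch. Below is a minimal exponentially weighted moving-average filter over a per-frame camera parameter; the trajectory values and `alpha` are hypothetical:

```python
import numpy as np

def ewma_smooth(path, alpha=0.2):
    """Exponentially weighted moving average over a sequence of per-frame
    camera parameters; smaller alpha gives a smoother (more stabilized)
    path at the cost of lagging behind intentional camera motion."""
    smoothed = np.empty_like(path, dtype=float)
    smoothed[0] = path[0]
    for t in range(1, len(path)):
        smoothed[t] = alpha * path[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed

shaky = np.array([0.0, 1.0, -0.5, 1.5, 0.0, 2.0])
smooth = ewma_smooth(shaky)
```

The per-frame stabilizing warp is then the transform that moves each frame from its estimated position on `shaky` to the corresponding position on `smooth`.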
Neural Re-rendering for Full-frame Video Stabilization
TLDR: The approach significantly outperforms representative state-of-the-art video stabilization algorithms on these challenging scenarios, does not suffer from aggressive cropping of frame borders in the stabilized video, and can even expand the field of view of the original video.
3D Video Stabilization With Depth Estimation by CNN-Based Optimization
TLDR: The proposed Deep3D Stabilizer takes advantage of a recent self-supervised framework for jointly learning depth and camera ego-motion estimation on raw videos, and consistently outperforms state-of-the-art methods on almost all motion categories.
Distortion-Free Video Stabilization
TLDR: Experimental results show that combining the strengths of deep learning and traditional methods, via Spatial Transformer Networks and Exponentially Weighted Moving Averages, outperforms state-of-the-art proposals and one commercial solution across a wide variety of scene contents and video categories.
Adaptively Meshed Video Stabilization
  • M. Zhao, Q. Ling
  • Computer Science
  • IEEE Transactions on Circuits and Systems for Video Technology
  • 2021
TLDR: This paper proposes an adaptively meshed method that stabilizes a shaky video based on all of its feature trajectories and an adaptive blocking strategy, yielding better estimation performance than previous works, particularly on challenging videos with large foreground objects or strong parallax.
Deep Sketch-guided Cartoon Video Synthesis
TLDR: A novel framework that produces cartoon videos by fetching color information from two input keyframes while following animated motion guided by a user sketch; it can handle frames with relatively large motion and lets users control the generated video sequences by editing the sketch guidance.
Video stabilization: Overview, challenges and perspectives
TLDR: Surveys the main challenges, practical aspects, and mathematical core concepts of video stabilization techniques, and discusses new research directions to overcome the limitations of existing methods.
Out-of-boundary View Synthesis Towards Full-Frame Video Stabilization
TLDR: This paper proposes a new Out-of-boundary View Synthesis (OVS) method, which can be integrated into existing warping-based stabilizers as a plug-and-play module to significantly improve the cropping ratio of the stabilized results.
DUT: Learning Video Stabilization by Simply Watching Unstable Videos
TLDR: To construct a controllable and robust stabilizer, DUT makes the first attempt to stabilize unstable videos by explicitly estimating and smoothing trajectories in an unsupervised deep-learning manner; it is composed of a DNN-based keypoint detector and motion estimator that generate grid-based trajectories, and a DNN-based trajectory smoother that stabilizes videos.
Hybrid Neural Fusion for Full-frame Video Stabilization
TLDR: This work analyzes the temporal coherence of the approach, demonstrates additional applications in video completion and FOV expansion, and provides an interactive HTML interface for comparing the video results with state-of-the-art methods.

References

Showing 1-10 of 59 references
Full-frame video stabilization with motion inpainting
TLDR: This work proposes a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality, and develops a complete video stabilizer that naturally preserves the original image quality in the stabilized videos.
Deep Online Video Stabilization With Multi-Grid Warping Transformation Learning
TLDR: This paper presents a video stabilization technique using a convolutional neural network that does not explicitly represent the camera path and does not use future frames, and can handle low-quality videos such as night-scene, watermarked, blurry, and noisy videos.
Video Frame Interpolation via Adaptive Convolution
TLDR: This paper presents a robust video frame interpolation method that treats pixel synthesis for the interpolated frame as local convolution over the two input frames, and employs a deep fully convolutional neural network to estimate a spatially-adaptive convolution kernel for each pixel.
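The kernel-based formulation can be sketched in a few lines: each output pixel is a local convolution of co-located patches from the two input frames. In the paper a CNN predicts a distinct kernel per pixel; in this sketch, fixed uniform kernels stand in for the learned ones:

```python
import numpy as np

def interpolate_pixel(frame1, frame2, y, x, k1, k2):
    """Synthesize one pixel of the in-between frame as a local convolution
    over co-located patches of the two input frames. k1 and k2 stand in
    for the spatially-adaptive kernels a CNN would predict per pixel."""
    r = k1.shape[0] // 2
    patch1 = frame1[y - r:y + r + 1, x - r:x + r + 1]
    patch2 = frame2[y - r:y + r + 1, x - r:x + r + 1]
    return float((patch1 * k1).sum() + (patch2 * k2).sum())

# Uniform kernels that each contribute half the output weight: with
# constant input frames the interpolated pixel is simply their average.
f1 = np.full((5, 5), 2.0)
f2 = np.full((5, 5), 4.0)
k = np.full((3, 3), 0.5 / 9)
mid = interpolate_pixel(f1, f2, 2, 2, k, k)
```

Because the kernels jointly encode both motion and resampling weights, the learned version can shift as well as blend the two patches, which is what makes the per-pixel kernels spatially adaptive.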
Context-Aware Synthesis for Video Frame Interpolation
  • Simon Niklaus, Feng Liu
  • Computer Science
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
TLDR: A context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information, using both to interpolate a high-quality intermediate frame; it outperforms representative state-of-the-art approaches.
Deep Video Stabilization Using Adversarial Networks
TLDR: A novel online deep-learning framework that learns the stabilization transformation for each unsteady frame given historical steady frames; it is composed of a generative network with spatial transformer networks embedded in different layers, and generates a stable frame for each incoming unstable frame by computing an appropriate affine transformation.
Subspace video stabilization
TLDR: This article focuses on transforming a set of input 2D motion trajectories so that they are both smooth and resemble visually plausible views of the imaged scene, offering the first method that both achieves high-quality video stabilization and is practical enough for consumer applications.
Video Frame Interpolation via Adaptive Separable Convolution
TLDR: This paper develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously, which allows incorporating a perceptual loss to train the network to produce visually pleasing frames.
Video stabilization with a depth camera
TLDR: Although the depth image is noisy, incomplete, and low-resolution, it facilitates both camera motion estimation and frame warping, making video stabilization a much better-posed problem.
Plane-Based Content Preserving Warps for Video Stabilization
  • Zihan Zhou, Hailin Jin, Y. Ma
  • Mathematics, Computer Science
  • 2013 IEEE Conference on Computer Vision and Pattern Recognition
  • 2013
TLDR: A hybrid approach to novel view synthesis is presented, observing that textureless regions often correspond to large planar surfaces in the scene, and it is demonstrated how the segmentation information can be efficiently obtained and seamlessly integrated into the stabilization framework.
Encoding Shaky Videos by Integrating Efficient Video Stabilization
This paper presents a novel video coding method that integrates video stabilization for shaky videos. By reusing the stabilized motion of feature points and geometric transformations, a better…