Corpus ID: 233033860

Optical Flow Dataset Synthesis from Unpaired Images

@article{Wlchli2021OpticalFD,
  title={Optical Flow Dataset Synthesis from Unpaired Images},
  author={Adrian W{\"a}lchli and Paolo Favaro},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.02615}
}
The estimation of optical flow is an ambiguous task because correspondence breaks down at occlusions, shadows, and reflections, in texture-less regions, and under changes in illumination over time. Unsupervised methods therefore face major challenges, as they need to tune complex cost functions with several terms designed to handle each of these sources of ambiguity. In contrast, supervised methods avoid these challenges altogether by relying on explicit ground-truth optical flow obtained directly from synthetic or…
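The "complex cost functions" the abstract refers to typically combine a photometric (brightness-constancy) term with a flow-smoothness term. The sketch below is a minimal, illustrative PyTorch example of such an unsupervised loss, not the method proposed in this paper; the function names, the L1 penalties, and the lambda_smooth weight are assumptions.

```python
# Illustrative sketch of a basic unsupervised optical-flow loss:
# brightness constancy on a backward-warped second frame plus first-order smoothness.
import torch
import torch.nn.functional as F

def warp(img2, flow):
    """Backward-warp img2 into the frame of img1 using the predicted flow.
    img2: (B, C, H, W), flow: (B, 2, H, W) in pixels."""
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(flow.device)   # (2, H, W) pixel grid
    coords = grid.unsqueeze(0) + flow                              # sampling locations
    # Normalize to [-1, 1] for grid_sample (x first, then y).
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)          # (B, H, W, 2)
    return F.grid_sample(img2, grid_norm, mode="bilinear", align_corners=True)

def unsupervised_flow_loss(img1, img2, flow, lambda_smooth=0.1):
    photometric = (img1 - warp(img2, flow)).abs().mean()           # brightness constancy
    dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()     # horizontal flow gradient
    dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()     # vertical flow gradient
    return photometric + lambda_smooth * (dx + dy)
```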


References

Showing 1-10 of 37 references
UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss
TLDR
This work designs an unsupervised loss based on occlusion-aware bidirectional flow estimation and the robust census transform to circumvent the need for ground truth flow, enabling generic pre-training of supervised networks for datasets with limited amounts of ground truth.
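As an illustration of the occlusion-aware bidirectional idea, the hedged sketch below implements a forward-backward consistency check of the kind UnFlow uses to mask occluded pixels: a pixel is treated as occluded when the forward flow and the backward flow warped into the first frame fail to cancel. The threshold values (alpha1, alpha2) are placeholders, and the census-based data term is not reproduced; flow_bw_warped would be obtained by warping the backward flow with the forward flow (e.g., with a warping function like the one sketched above).

```python
# Sketch of a forward-backward consistency occlusion mask (UnFlow-style criterion).
import torch

def occlusion_mask(flow_fw, flow_bw_warped, alpha1=0.01, alpha2=0.5):
    """flow_fw, flow_bw_warped: (B, 2, H, W). Returns a (B, 1, H, W) mask
    that is 1 for pixels treated as non-occluded."""
    flow_diff_sq = (flow_fw + flow_bw_warped).pow(2).sum(dim=1, keepdim=True)
    flow_mag_sq = flow_fw.pow(2).sum(dim=1, keepdim=True) + \
                  flow_bw_warped.pow(2).sum(dim=1, keepdim=True)
    # Occluded where the two flows disagree by more than a magnitude-dependent margin.
    occluded = flow_diff_sq > alpha1 * flow_mag_sq + alpha2
    return (~occluded).float()
```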
Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness
TLDR
An unsupervised approach to train a convnet end-to-end for predicting optical flow between two images using a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image.
Unsupervised Learning of Multi-Frame Optical Flow with Occlusions
TLDR
This paper exploits the minimal configuration of three frames to strengthen the photometric loss and explicitly reason about occlusions, and demonstrates that its multi-frame, occlusion-sensitive formulation outperforms existing unsupervised two-frame methods and even produces results on par with some fully supervised methods.
A Lightweight Optical Flow CNN —Revisiting Data Fidelity and Regularization
TLDR
LiteFlowNet2 is built on the foundation laid by conventional methods, with components that play roles analogous to data fidelity and regularization in variational methods, and it achieves high flow estimation accuracy through early flow correction and the seamless incorporation of descriptor matching.
Learning by Analogy: Reliable Supervision From Transformations for Unsupervised Optical Flow Estimation
TLDR
This work twists the general unsupervised learning pipeline by running another forward pass on transformed (augmented) data and using the correspondingly transformed predictions on the original data as the self-supervision signal, and it introduces a lightweight multi-frame network with a highly shared flow decoder.
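A rough sketch of the "supervision from transformations" idea described in this entry, assuming a hypothetical model(img1, img2) callable that returns a (B, 2, H, W) flow field: the prediction on the original pair is transformed by the same augmentation applied to the images (here a horizontal flip) and used as a pseudo label for a second forward pass.

```python
# Sketch of transformation-based self-supervision; `model` is a hypothetical flow network.
import torch

def transformation_consistency_loss(model, img1, img2):
    with torch.no_grad():
        flow_teacher = model(img1, img2)             # first pass: original images
    # Example augmentation: horizontal flip. Flipping the images mirrors the flow
    # field spatially and negates its horizontal component.
    img1_aug = torch.flip(img1, dims=[-1])
    img2_aug = torch.flip(img2, dims=[-1])
    flow_label = torch.flip(flow_teacher, dims=[-1])
    flow_label = torch.cat([-flow_label[:, :1], flow_label[:, 1:]], dim=1)
    flow_student = model(img1_aug, img2_aug)         # second pass: transformed images
    return (flow_student - flow_label).abs().mean()  # self-supervision signal
```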
Volumetric Correspondence Networks for Optical Flow
TLDR
Several simple modifications that dramatically simplify the use of volumetric layers are introduced; they significantly improve accuracy over the state of the art on standard benchmarks while being much easier to work with: training converges in 10x fewer iterations and, most importantly, the networks generalize across correspondence tasks.
FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
TLDR
The concept of end-to-end learning of optical flow is advanced and shown to work well, and faster variants are presented that allow optical flow computation at up to 140 fps with accuracy matching the original FlowNet.
A Naturalistic Open Source Movie for Optical Flow Evaluation
TLDR
A new optical flow data set derived from the open source 3D animated short film Sintel is introduced, which has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects.
SelFlow: Self-Supervised Learning of Optical Flow
We present a self-supervised learning approach for optical flow. Our method distills reliable flow estimations from non-occluded pixels, and uses these predictions as ground truth to learn optical flow.
Bridging Stereo Matching and Optical Flow via Spatiotemporal Correspondence
TLDR
This paper proposes a single and principled network to jointly learn spatiotemporal correspondence for stereo matching and flow estimation, with a newly designed geometric connection as the unsupervised signal for temporally adjacent stereo pairs.
...