Corpus ID: 221738910

GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network

@article{Truong2020GOCorBG,
  title={GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network},
  author={Prune Truong and Martin Danelljan and Luc Van Gool and Radu Timofte},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.07823}
}
The feature correlation layer serves as a key neural network module in numerous computer vision problems that involve dense correspondences between image pairs. It predicts a correspondence volume by evaluating dense scalar products between feature vectors extracted from pairs of locations in two images. However, this point-to-point feature comparison is insufficient when disambiguating multiple similar regions in an image, severely affecting the performance of the end task. We propose GOCor, a…
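The abstract describes the correlation layer as computing dense scalar products between feature vectors at all pairs of locations in two images. A minimal NumPy sketch of such a correspondence volume follows; the shapes and variable names are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

# Illustrative sizes: two H x W feature maps with D-dimensional descriptors,
# flattened over their spatial locations.
H, W, D = 4, 4, 8
rng = np.random.default_rng(0)
feat_ref = rng.standard_normal((H * W, D))  # features from the reference image
feat_qry = rng.standard_normal((H * W, D))  # features from the query image

# Dense scalar products between every pair of locations: one score per
# (reference location, query location) pair.
corr = feat_ref @ feat_qry.T                # shape (H*W, H*W)

# Viewed as a 4D correspondence volume over (h, w, h', w').
corr_volume = corr.reshape(H, W, H, W)
print(corr_volume.shape)                    # (4, 4, 4, 4)
```

Each slice `corr_volume[h, w]` is the map of matching scores for one reference location against every query location; GOCor's contribution, per the abstract, is to replace this purely point-to-point comparison with a globally optimized volume.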
Learning Accurate Dense Correspondences and When to Trust Them
This work aims to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map indicating the reliability and accuracy of the prediction, and develops a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.

LoFTR: Detector-Free Local Feature Matching with Transformers
The proposed method, LoFTR, uses self- and cross-attention layers in a Transformer to obtain feature descriptors conditioned on both images, enabling it to produce dense matches in low-texture areas, where feature detectors usually struggle to produce repeatable interest points.

COTR: Correspondence Transformer for Matching Across Images
A novel framework for finding correspondences in images based on a deep neural network that, given two images and a query point in one of them, finds its correspondence in the other, yielding a multiscale pipeline able to provide highly accurate correspondences.

Deep Matching Prior: Test-Time Optimization for Dense Correspondence
It is shown that an image-pair-specific prior can be captured by solely optimizing the untrained matching networks on an input pair of images; this framework, dubbed Deep Matching Prior (DMP), is competitive with, or even outperforms, the latest learning-based methods on several benchmarks, even though it requires neither large training data nor intensive learning.

CATs: Cost Aggregation Transformers for Visual Correspondence
A novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images under the additional challenges posed by large intra-class appearance and geometric variations; multi-level aggregation is proposed to efficiently capture different semantics from hierarchical feature representations.

LIFE: Lighting Invariant Flow Estimation
This work proposes a novel weakly supervised framework, LIFE, to train a neural network for estimating accurate lighting-invariant flows between image pairs, and shows that LIFE outperforms previous flow learning frameworks by large margins in challenging scenarios, consistently improves feature matching, and benefits downstream tasks.

Multi-scale Matching Networks for Semantic Correspondence
Dongyang Zhao, Ziyang Song, Zhenghao Ji, Gangming Zhao, Weifeng Ge, Yizhou Yu. ArXiv, 2021.
This paper proposes a multi-scale matching network that is sensitive to tiny semantic differences between neighboring pixels and builds a top-down feature and matching enhancement scheme coupled with the multi-scale hierarchy of deep convolutional neural networks.

PDC-Net+: Enhanced Probabilistic Dense Correspondence Network
The Enhanced Probabilistic Dense Correspondence Network, PDC-Net+, is proposed, capable of estimating accurate dense correspondences along with a reliable confidence map; a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty is developed.

Semantic Correspondence with Transformers
A novel cost aggregation network, called Cost Aggregation with Transformers (CATs), to find dense correspondences between semantically similar images under the additional challenges posed by large intra-class appearance and geometric variations; it includes appearance affinity modelling to disambiguate the initial correlation maps, together with multi-level aggregation.
Warp Consistency for Unsupervised Learning of Dense Correspondences
A warp consistency loss is proposed: an unsupervised learning objective for dense correspondence regression that is effective even in settings with large appearance and viewpoint changes, and that sets a new state of the art on several challenging benchmarks.

References

(Showing 1-10 of 72 references)
Neighbourhood Consensus Networks
An end-to-end trainable convolutional neural network architecture is developed that identifies sets of spatially consistent matches by analyzing neighbourhood consensus patterns in the 4D space of all possible correspondences between a pair of images, without the need for a global geometric model.

SuperGlue: Learning Feature Matching With Graph Neural Networks
SuperGlue is introduced, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points; it includes a flexible context aggregation mechanism based on attention, enabling SuperGlue to reason jointly about the underlying 3D scene and the feature assignments.

Universal Correspondence Network
A convolutional spatial transformer is proposed to mimic patch normalization in traditional features like SIFT, and is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations.

Volumetric Correspondence Networks for Optical Flow
Several simple modifications that dramatically simplify the use of volumetric layers are introduced; they significantly improve accuracy over the state of the art on standard benchmarks while being much easier to work with: training converges in 10x fewer iterations and, most importantly, the networks generalize across correspondence tasks.

A Deep Visual Correspondence Embedding Model for Stereo Matching Costs
A novel deep visual correspondence embedding model is trained via a convolutional neural network on a large set of stereo images with ground-truth disparities, and the new measure of pixel dissimilarity is shown to outperform traditional matching costs.

Convolutional Neural Network Architecture for Geometric Matching
This work proposes a convolutional neural network architecture for geometric matching based on three main components that mimic the standard steps of feature extraction, matching, and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end.

Correspondence Networks With Adaptive Neighbourhood Consensus
This paper proposes a convolutional neural network architecture, called the adaptive neighbourhood consensus network (ANC-Net), that can be trained end-to-end with sparse key-point annotations to establish dense visual correspondences between images containing objects of the same category.

PARN: Pyramidal Affine Regression Networks for Dense Semantic Correspondence
A deep architecture for dense semantic correspondence, called pyramidal affine regression networks (PARN), estimates locally-varying affine transformation fields across images; a novel weakly-supervised training scheme generates progressive supervisions by leveraging correspondence consistency across image pairs.

Recurrent Transformer Networks for Semantic Correspondence
This work presents recurrent transformer networks (RTNs) for obtaining dense correspondences between semantically similar images through an iterative process of estimating spatial transformations between the input images and using these transformations to generate aligned convolutional activations.

Arbicon-Net: Arbitrary Continuous Geometric Transformation Networks for Image Registration
An end-to-end trainable deep neural network, named Arbitrary Continuous Geometric Transformation Networks (Arbicon-Net), directly predicts the dense displacement field for pairwise image alignment and outperforms previous image alignment techniques in identifying image correspondences.