Neural best-buddies

@article{Aberman2018NeuralB,
  title={Neural best-buddies},
  author={Kfir Aberman and Jing Liao and Mingyi Shi and Dani Lischinski and Baoquan Chen and Daniel Cohen-Or},
  journal={ACM Transactions on Graphics (TOG)},
  year={2018},
  volume={37},
  pages={1--14}
}
Correspondence between images is a fundamental problem in computer vision, with a variety of graphics applications. Key method: starting from the coarsest layer in both hierarchies, we search for Neural Best Buddies (NBBs): pairs of neurons that are mutual nearest neighbors. The key idea is then to percolate NBBs through the hierarchy, while narrowing down the search regions at each level and retaining only NBBs with significant activations.
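The core of the NBB criterion is the mutual-nearest-neighbor test between two sets of neuron activations. A minimal sketch of that test, assuming activations have already been flattened into per-location feature vectors (the function name `best_buddies` and the use of cosine similarity as the metric are illustrative choices, not the paper's exact implementation):

```python
import numpy as np

def best_buddies(feats_a, feats_b):
    """Return mutual nearest-neighbor pairs (best buddies) between two
    sets of feature vectors, using cosine similarity.

    feats_a: (n, d) array, feats_b: (m, d) array.
    Returns a list of (i, j) index pairs.
    """
    # Normalize rows so the dot product equals cosine similarity.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                # (n, m) similarity matrix
    nn_ab = sim.argmax(axis=1)   # best match in B for each a_i
    nn_ba = sim.argmax(axis=0)   # best match in A for each b_j
    # Keep only pairs that pick each other.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

In the full method this test would be applied per level of the feature hierarchy, restricted to the search regions percolated down from the coarser level, with low-activation pairs filtered out.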

SketchZooms: Deep Multi-view Descriptors for Matching Line Drawings

TLDR
This paper presents the first attempt to obtain a learned descriptor for dense registration in line drawings, designing descriptors that locally match image pairs where the object of interest belongs to the same semantic category yet differs drastically in shape, form, and projection angle.

Cross-Domain Correspondence Learning for Exemplar-Based Image Translation

TLDR
This work proposes to jointly learn the cross-domain correspondence and the image translation, where both tasks facilitate each other and thus can be learned with weak supervision, and demonstrates the effectiveness of the approach in several image translation tasks.

Deep Semantic Feature Matching Using Confidential Correspondence Consistency

TLDR
A novel approach for semantic correspondence is proposed, which is based on deep feature representation, geometric and semantic associations between intra-class objects, and hierarchical matching selection according to the convolutional feature pyramid.

PuppetGAN: Transferring Disentangled Properties from Synthetic to Real Images

TLDR
A model that enables controlled manipulation of visual attributes of real "target" images using only implicit supervision with synthetic "source" exemplars is proposed, which learns a shared low-dimensional representation of input images from both domains in which a property of interest is isolated from other content features of the input.

Image Style Transfer via Multi-Style Geometry Warping

TLDR
This paper combines previous works in a framework that can perform geometric deformation on images using different styles from multiple artists by building an architecture that uses multiple style images and one content image as input.

Multispectral Matching using Conditional Generative Appearance Modeling

TLDR
This work focuses on the problem of finding point correspondences in a multispectral imaging setup and proposes an image transformation, which maps one image modality to the respective target image, conditioned on the data of the original spectral band.

Cross-Domain Image Manipulation by Demonstration

In this work we propose a model that can manipulate individual visual attributes of objects in a real scene using examples of how respective attribute manipulations affect the output of a simulation.

Image Morphing With Perceptual Constraints and STN Alignment

TLDR
A conditional generative adversarial network (GAN) morphing framework operating on a pair of input images is proposed, trained to synthesize frames corresponding to temporal samples along the transformation, and learns a proper shape prior that enhances the plausibility of intermediate frames.

Cross-Domain Cascaded Deep Translation

TLDR
This work applies translation between the deepest layers of a pre-trained network, where the deep features contain more semantics, and proceeds in a cascaded, deep-to-shallow fashion along the deep feature hierarchy.

References

SHOWING 1-10 OF 69 REFERENCES

Universal Correspondence Network

TLDR
A convolutional spatial transformer to mimic patch normalization in traditional features like SIFT is proposed, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations.

Visual attribute transfer through deep image analogy

TLDR
The technique finds semantically-meaningful dense correspondences between two input images by adapting the notion of "image analogy" with features extracted from a Deep Convolutional Neural Network for matching, and is called deep image analogy.

Discriminative Learning of Deep Convolutional Feature Point Descriptors

TLDR
This paper uses Convolutional Neural Networks to learn discriminative patch representations, training a Siamese network with pairs of (non-)corresponding patches to produce 128-D descriptors whose Euclidean distances reflect patch similarity and which can be used as a drop-in replacement for any task involving SIFT.

Understanding deep image representations by inverting them

Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited.

Learning Dense Correspondence via 3D-Guided Cycle Consistency

TLDR
It is demonstrated that the end-to-end trained ConvNet supervised by cycle-consistency outperforms state-of-the-art pairwise matching methods in correspondence-related tasks.

Non-rigid dense correspondence with applications for image enhancement

TLDR
The usefulness of the method is demonstrated using three applications for automatic example-based photograph enhancement: adjusting the tonal characteristics of a source image to match a reference, transferring a known mask to a new image, and kernel estimation for image deblurring.

Do Convnets Learn Correspondence?

TLDR
Evidence is presented that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intra-class alignment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011.

SIFT Flow: Dense Correspondence across Scenes and Its Applications

TLDR
SIFT flow is proposed, a method to align an image to its nearest neighbors in a large image corpus containing a variety of scenes, where image information is transferred from the nearest neighbors to a query image according to the dense scene correspondence.

Deep Semantic Feature Matching

  • Nikolai Ufer, B. Ommer
  • Computer Science
    2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
A novel method for semantic matching with pre-trained CNN features which is based on convolutional feature pyramids and activation guided feature selection and can be transformed into a dense correspondence field.

DAISY Filter Flow: A Generalized Discrete Approach to Dense Correspondences

TLDR
A novel approach called DAISY filter flow (DFF) is presented. Inspired by the recent PatchMatch Filter technique, it efficiently performs dense descriptor-based correspondence field estimation in a generalized high-dimensional label space augmented by scales and rotations.
...