Domain-Adversarial Training of Neural Networks
TLDR: A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; it can be applied to almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
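The gradient reversal layer mentioned above can be sketched in a few lines: it acts as the identity on the forward pass and multiplies gradients by a negative factor on the backward pass. A minimal NumPy illustration (not the authors' implementation; the class name and `lam` parameter are chosen here for clarity):

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; scales incoming gradients by
    -lam on the backward pass, so the upstream feature extractor is
    trained to *maximize* the domain classifier's loss."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Behaves exactly like the identity function
        return x

    def backward(self, grad_output):
        # Reverses (and scales) the gradient flowing back
        return -self.lam * grad_output
```

In an autograd framework this is typically implemented as a custom op with these two behaviors, slotted between the feature extractor and the domain classifier.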
Unsupervised Domain Adaptation by Backpropagation
TLDR: The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on the Office datasets.
Deep Image Prior
TLDR: It is shown that a randomly initialized neural network can be used as a handcrafted prior with excellent results on standard inverse problems such as denoising, super-resolution, and inpainting.
Learning To Count Objects in Images
TLDR: This work focuses on the practically attractive case where the training images are annotated with dots, and introduces a new loss function that is well suited to visual object counting tasks and can be computed efficiently via a maximum subarray algorithm.
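The maximum subarray computation referenced in the loss can be done in linear time. A one-dimensional sketch using Kadane's algorithm (the paper works with 2D density maps; this 1D function is only illustrative):

```python
def max_subarray_sum(values):
    """Kadane's algorithm: largest sum over all contiguous
    subarrays of `values`, computed in O(n) time."""
    best = current = values[0]
    for v in values[1:]:
        # Either extend the current run or start a new one at v
        current = max(v, current + v)
        best = max(best, current)
    return best
```

The 2D variant runs this as an inner loop over row-range prefix sums, which is what makes evaluating the counting loss efficient.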
Instance Normalization: The Missing Ingredient for Fast Stylization
TLDR: A small change in the stylization architecture results in a significant qualitative improvement in the generated images, and can be used to train high-performance architectures for real-time image generation.
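The change in question replaces batch normalization with instance normalization, which normalizes each (sample, channel) feature map by its own spatial statistics. A minimal NumPy sketch, assuming NCHW layout (this is an illustration, not the paper's code):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) feature map of an
    (N, C, H, W) array by its own spatial mean and variance.
    Unlike batch norm, statistics are per instance, so the result
    is independent of the other images in the batch."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

Because the contrast of each individual image is normalized away, the stylization network no longer has to learn contrast invariance itself.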
The devil is in the details: an evaluation of recent feature encoding methods
TLDR: A rigorous evaluation of novel encodings for bag-of-visual-words models that identifies which aspects of each method are particularly important for good performance and which are less critical, enabling a consistent comparative analysis of these encoding methods.
Aggregating Local Deep Features for Image Retrieval
TLDR: This paper shows that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, so existing aggregation methods have to be carefully re-evaluated. It reveals that, in contrast to shallow features, simple aggregation based on sum pooling provides the best performance for deep convolutional features.
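The sum-pooling aggregation described above reduces to summing convolutional activations over the spatial grid and L2-normalizing the result. A minimal sketch (function name and the post-hoc normalization constant are choices made here, not taken from the paper):

```python
import numpy as np

def sum_pooled_descriptor(feature_map):
    """Aggregate a (C, H, W) tensor of deep convolutional
    activations into a single C-dimensional image descriptor by
    sum pooling over the spatial dimensions, then L2-normalizing
    so descriptors are comparable via dot product."""
    d = feature_map.sum(axis=(1, 2))
    return d / (np.linalg.norm(d) + 1e-12)
```

Retrieval then amounts to ranking database images by the dot product (equivalently, cosine similarity) between such descriptors.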
Optimizing Binary MRFs via Extended Roof Duality
TLDR: An efficient implementation of the "probing" technique is discussed, which simplifies the MRF while preserving the global optimum, and a new technique is presented that takes an arbitrary input labeling and tries to improve its energy.
Class-specific Hough forests for object detection
TLDR: It is demonstrated that Hough forests significantly improve the results of Hough-transform object detection and achieve state-of-the-art performance for several classes and datasets.
Neural Codes for Image Retrieval
TLDR: It is established that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g., ImageNet), and that retrieval performance improves when the network is retrained on a dataset of images similar to those encountered at test time.