Deep neural networks are able to learn powerful representations from large quantities of labeled input data; however, they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target …
Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely degrade the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have …
Crowdsourced 3D CAD models are easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well …
To produce images that are suitable for display, tone-mapping is widely used in digital cameras to map linear color measurements into narrow gamuts with limited dynamic range. This introduces non-linear distortion that must be undone, through a radiometric calibration process, before computer vision systems can analyze such photographs radiometrically. This …
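The tone-mapping distortion and its inversion can be illustrated with a toy sketch. The power-law (gamma) curve below is an illustrative assumption, not the abstract's actual model; real camera response curves are more complex and often non-parametric, and recovering them is exactly what radiometric calibration must do.

```python
import numpy as np

GAMMA = 2.2  # assumed exponent of a toy power-law tone curve (illustrative only)

def tone_map(linear):
    """Toy camera tone curve: compress linear sensor values into [0, 1]."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)

def radiometric_calibrate(rendered):
    """Invert the toy tone curve to recover linear color measurements."""
    return rendered ** GAMMA

linear = np.linspace(0.0, 1.0, 11)           # ground-truth linear intensities
rendered = tone_map(linear)                  # what the camera stores
recovered = radiometric_calibrate(rendered)  # undo the non-linear distortion
print(np.max(np.abs(recovered - linear)))    # ≈ 0 for this invertible curve
```

In practice the curve is unknown and must be estimated from images before the inversion step; the round-trip above only works because the toy curve is known exactly.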
Figure 1: (a) Applying a linear classifier w learned by LDA to source data x is equivalent to (b) applying the classifier ŵ = S^{-1/2} w to the decorrelated points S^{-1/2} x. (c) However, target points u may still be correlated after applying S^{-1/2}, hurting performance. (d) Our method uses a target-specific covariance to obtain properly decorrelated û. Abstract. The most …
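The effect described in the caption can be checked numerically: whitening target points with the source covariance S leaves them correlated, while whitening with a target-specific covariance T decorrelates them. This is a sketch under assumed synthetic data; the variable names S, T, and u mirror the caption's notation.

```python
import numpy as np

def inv_sqrt(C, eps=1e-6):
    """Inverse matrix square root of a symmetric PSD matrix (whitening transform)."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T

rng = np.random.default_rng(0)
# Correlated target samples u, and source samples with a *different* covariance.
u = rng.normal(size=(5000, 3)) @ rng.normal(size=(3, 3))
x = rng.normal(size=(5000, 3)) @ rng.normal(size=(3, 3))

S = np.cov(x, rowvar=False)  # source covariance
T = np.cov(u, rowvar=False)  # target-specific covariance

u_src = u @ inv_sqrt(S)  # whitened with the source covariance: still correlated
u_tgt = u @ inv_sqrt(T)  # whitened with the target covariance: decorrelated

def max_off_diag(C):
    """Largest absolute off-diagonal entry, a simple correlation measure."""
    return np.abs(C - np.diag(np.diag(C))).max()

print(max_off_diag(np.cov(u_tgt, rowvar=False)))  # ≈ 0
print(max_off_diag(np.cov(u_src, rowvar=False)))  # generally far from 0
```

The covariance of `u_tgt` is T^{-1/2} T T^{-1/2} ≈ I, which is why the target-specific transform succeeds where the source one does not.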
Datasets power computer vision research and drive breakthroughs. Larger and larger datasets are needed to better utilize the exponentially increasing computing power. However, dataset generation is both time-consuming and expensive, because human beings are required for image labelling, and human labelling cannot scale well. How can we generate larger image datasets …
Deep convolutional neural networks learn extremely powerful image representations, yet most of that power is hidden in the millions of deep-layer parameters. What exactly do these parameters represent? Recent work has started to analyse CNN representations, finding that, e.g., they are invariant to some 2D transformations (Fischer et al., 2014), but are …
In this chapter, we present CORrelation ALignment (CORAL), a simple yet effective method for unsupervised domain adaptation. CORAL minimizes domain shift by aligning the second-order statistics of the source and target distributions, without requiring any target labels. In contrast to subspace manifold methods, it aligns the original feature distributions of …
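The second-order alignment can be sketched in a few lines of NumPy: whiten the source features with their own covariance, then re-color them with the target covariance. The sketch below assumes `Xs` and `Xt` are (n × d) feature matrices; the small ridge `reg` added to each covariance is an illustrative stabilizer, not necessarily the abstract's exact regularization.

```python
import numpy as np

def sym_power(C, p):
    """Fractional power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** p) @ V.T

def coral(Xs, Xt, reg=1e-3):
    """Re-color source features so their covariance matches the target's.

    Whitens Xs with its (regularized) covariance, then re-colors with the
    target covariance -- no target labels are needed at any point.
    """
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + reg * np.eye(d)
    Ct = np.cov(Xt, rowvar=False) + reg * np.eye(d)
    return Xs @ sym_power(Cs, -0.5) @ sym_power(Ct, 0.5)

rng = np.random.default_rng(1)
Xs = rng.normal(size=(2000, 4)) * np.array([1.0, 2.0, 0.5, 3.0])  # synthetic source
Xt = rng.normal(size=(2000, 4)) @ rng.normal(size=(4, 4))         # synthetic target
Xs_aligned = coral(Xs, Xt)
# The covariance of Xs_aligned now approximates that of Xt.
```

A classifier can then be trained on `Xs_aligned` with the source labels and applied directly to target features, which is what makes the method unsupervised with respect to the target domain.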