• Corpus ID: 239016330

TLDR: Twin Learning for Dimensionality Reduction

@article{Kalantidis2021TLDRTL,
  title={TLDR: Twin Learning for Dimensionality Reduction},
  author={Yannis Kalantidis and Carlos Lassance and Jon Almaz{\'a}n and Diane Larlus},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.09455}
}
Dimensionality reduction methods are unsupervised approaches which learn low-dimensional spaces where some properties of the initial space, typically the notion of “neighborhood”, are preserved. Such methods usually require propagation on large k-NN graphs or complicated optimization solvers. On the other hand, self-supervised learning approaches, typically used to learn representations from scratch, rely on simple and more scalable frameworks for learning. In this paper, we propose TLDR, a… 
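A minimal sketch of this kind of twin-learning recipe, assuming nearest neighbours are used as positive pairs and a Barlow Twins-style redundancy-reduction loss is applied to projected embeddings; the dimensions, projector shape, and hyper-parameters below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Redundancy-reduction loss on the cross-correlation of two embedding batches."""
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)   # standardize each dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / n                           # d x d cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()              # push diagonal toward 1
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()  # decorrelate the rest
    return on_diag + lambd * off_diag

# Illustrative modules: reduce 2048-d descriptors to a 128-d space.
encoder = nn.Linear(2048, 128)                    # the learned reduction
projector = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 1024))
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

def training_step(x, x_nn):
    """x: a batch of input vectors; x_nn: one k-NN of each vector, used as its second 'view'."""
    z1, z2 = projector(encoder(x)), projector(encoder(x_nn))
    loss = barlow_twins_loss(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the projector is discarded after training, so the reduction that is kept is a single linear map.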

Citations of this paper

Domain Adaptation for Memory-Efficient Dense Retrieval

It is shown that binary embedding models like BPR and JPQ can perform significantly worse than baselines once a domain shift is involved; a modification to the training procedure is proposed and combined with a corpus-specific generative procedure, which allows the adaptation of BPR and JPQ to any corpus without requiring labeled training data.

Unsupervised visualization of image datasets using contrastive learning

Visualization methods based on the nearest neighbor graph, such as t-SNE or UMAP, are widely used for visualizing high-dimensional data. Yet, these approaches only produce meaningful results if the…

Granularity-aware Adaptation for Image Retrieval over Multiple Tasks

The unsupervised Grappa model improves the zero-shot performance of a state-of-the-art self-supervised learning model, and in some cases reaches or surpasses a task label-aware oracle that selects the most fitting pseudo-granularity per task.

Barlow constrained optimization for Visual Question Answering

A novel regularization for VQA models, Constrained Optimization using Barlow’s theory (COB), that improves the information content of the joint space by minimizing redundancy, and reduces the correlation between the learned feature components, thereby disentangling semantic concepts.

References

SHOWING 1-10 OF 79 REFERENCES

Learning with Neighbor Consistency for Noisy Labels

This work presents a method for learning from noisy labels that leverages similarities between training examples in feature space, encouraging the prediction of each example to be similar to its nearest neighbours.
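The summary above does not spell out the exact regularizer, so the following is only a hypothetical simplification of the stated mechanism: a consistency term that pulls each example's prediction toward the average prediction of its feature-space neighbours, added to the usual cross-entropy on the (possibly noisy) labels.

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(logits, neighbor_idx):
    """Encourage each prediction to match the mean prediction of its feature-space k-NN.

    logits:       (N, C) classifier outputs for the batch
    neighbor_idx: (N, k) indices of each example's nearest neighbours within the batch
    """
    p = F.softmax(logits, dim=1)                   # (N, C) predicted class distributions
    q = p[neighbor_idx].mean(dim=1)                # (N, C) average neighbour prediction
    return F.kl_div(p.log(), q.detach(), reduction="batchmean")

def total_loss(logits, labels, neighbor_idx, alpha=0.5):
    """Cross-entropy on (possibly noisy) labels plus the consistency term; alpha is illustrative."""
    return F.cross_entropy(logits, labels) + alpha * neighbor_consistency_loss(logits, neighbor_idx)
```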

Whitening for Self-Supervised Representation Learning

This paper proposes a different direction and a new loss function for self-supervised learning based on the whitening of the latent-space features, and empirically shows that this loss accelerates self-supervised training and that the learned representations are much more effective for downstream tasks than previously published work.
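A condensed sketch of the whitening idea, assuming the recipe of whitening each batch of projections (zero mean, identity covariance) and then minimizing the distance between whitened positives; the Cholesky-based, per-view whitening below is one simple instantiation, not necessarily the paper's exact procedure.

```python
import torch

def whiten(z, eps=1e-5):
    """Whiten a batch of embeddings: zero mean, (approximately) identity covariance."""
    z = z - z.mean(dim=0)
    cov = (z.T @ z) / (z.shape[0] - 1) + eps * torch.eye(z.shape[1], device=z.device)
    L = torch.linalg.cholesky(cov)              # cov = L @ L.T
    return torch.linalg.solve(L, z.T).T         # z @ L^{-T}, whose covariance is ~identity

def w_mse_loss(z1, z2):
    """MSE between whitened embeddings of two views of the same samples."""
    w1, w2 = whiten(z1), whiten(z2)
    return (w1 - w2).pow(2).sum(dim=1).mean()
```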

Mining on Manifolds: Metric Learning Without Labels

A novel unsupervised framework for hard training example mining is proposed; the resulting models are on par with, or outperform, prior fully or partially supervised models for fine-grained classification and particular object retrieval.

Dimensionality Reduction by Learning an Invariant Mapping

This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.
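DrLIM trains the mapping with the pairwise contrastive loss it introduced: similar pairs are pulled together, while dissimilar pairs are pushed apart until they are at least a margin away. A minimal sketch:

```python
import torch

def contrastive_loss(z1, z2, y, margin=1.0):
    """DrLIM-style pairwise loss.

    z1, z2: (N, D) embeddings of the two elements of each pair
    y:      (N,) tensor with 1 for similar pairs and 0 for dissimilar pairs
    """
    d = torch.norm(z1 - z2, dim=1)                           # Euclidean distance per pair
    pull = y * d.pow(2)                                      # similar pairs: shrink distance
    push = (1 - y) * torch.clamp(margin - d, min=0).pow(2)   # dissimilar pairs: enforce margin
    return 0.5 * (pull + push).mean()
```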

Sampling Matters in Deep Embedding Learning

This paper proposes distance-weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin-based loss is sufficient to outperform all other loss functions.
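A rough sketch of distance-weighted sampling, assuming the paper's analysis that pairwise distances of points uniform on the unit hypersphere follow q(d) ∝ d^(n−2) (1 − d²/4)^((n−3)/2), with negatives drawn with probability proportional to a clipped 1/q(d); the cutoff and clip values below are illustrative.

```python
import torch

def distance_weighted_sample(anchor, candidates, dim, cutoff=0.5, clip=100.0):
    """Pick one negative per anchor, weighting candidates by the inverse distance density q(d).

    anchor:     (D,) L2-normalized embedding
    candidates: (M, D) L2-normalized candidate negatives
    dim:        embedding dimensionality D (controls the density q)
    """
    d = torch.norm(candidates - anchor, dim=1).clamp(min=cutoff)
    # log q(d) for points uniformly distributed on the unit sphere in `dim` dimensions
    log_q = (dim - 2) * torch.log(d) + ((dim - 3) / 2) * torch.log(1.0 - 0.25 * d.pow(2))
    w = torch.exp(-log_q)                    # sampling weight ∝ 1 / q(d)
    w = torch.clamp(w, max=clip)             # clip to avoid a few pairs dominating
    idx = torch.multinomial(w, num_samples=1)
    return candidates[idx.item()]
```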

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

This paper proposes an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed; it uses a swapped prediction mechanism, predicting the cluster assignment of one view from the representation of another view.

Exploring Simple Siamese Representation Learning

  • Xinlei Chen, Kaiming He
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
Surprising empirical results are reported that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
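A compact sketch of the Siamese setup described above, assuming the SimSiam arrangement of a shared encoder, a small prediction head, and a stop-gradient on the target branch; module shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """Negative cosine similarity, with the target branch detached (stop-gradient)."""
    def neg_cos(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=1).mean()
    return 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)

# Illustrative usage with an encoder f and predictor h (both nn.Modules):
#   z1, z2 = f(x1), f(x2)          # two augmented views of the same images
#   p1, p2 = h(z1), h(z2)
#   loss = simsiam_loss(p1, p2, z1, z2)
```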

VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning

This paper introduces VICReg (Variance-Invariance-Covariance Regularization), a method that explicitly avoids the collapse problem with a simple regularization term on the variance of the embeddings along each dimension individually.
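A minimal sketch of the three terms as described: an invariance (MSE) term between views, a variance term keeping the per-dimension standard deviation above a threshold, and a covariance term decorrelating the dimensions; the loss weights below are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, gamma=1.0, eps=1e-4):
    n, d = z1.shape
    # Invariance: the two views of each sample should have similar embeddings.
    sim = F.mse_loss(z1, z2)
    # Variance: hinge on the std of each embedding dimension (avoids collapse).
    std1 = torch.sqrt(z1.var(dim=0) + eps)
    std2 = torch.sqrt(z2.var(dim=0) + eps)
    var = torch.relu(gamma - std1).mean() + torch.relu(gamma - std2).mean()
    # Covariance: penalize off-diagonal covariance, decorrelating the dimensions.
    def off_diag_cov(z):
        z = z - z.mean(dim=0)
        cov = (z.T @ z) / (n - 1)
        return (cov.pow(2).sum() - cov.diagonal().pow(2).sum()) / d
    cov = off_diag_cov(z1) + off_diag_cov(z2)
    return sim_w * sim + var_w * var + cov_w * cov
```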

Learning Local Feature Descriptors Using Convex Optimisation

It is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem that selects the regions using sparsity, and an extension of the learning formulation to a weakly supervised case allows the descriptors to be learned from unannotated image collections.

Unsupervised Feature Learning via Non-parametric Instance Discrimination

This work formulates this intuition as a non-parametric classification problem at the instance level, and uses noise-contrastive estimation to tackle the computational challenges imposed by the large number of instance classes.
...