# TLDR: Twin Learning for Dimensionality Reduction

```bibtex
@article{Kalantidis2021TLDRTL,
  title   = {TLDR: Twin Learning for Dimensionality Reduction},
  author  = {Yannis Kalantidis and Carlos Lassance and Jon Almaz{\'a}n and Diane Larlus},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.09455}
}
```

Dimensionality reduction methods are unsupervised approaches which learn low-dimensional spaces where some properties of the initial space, typically the notion of "neighborhood", are preserved. Such methods usually require propagation on large k-NN graphs or complicated optimization solvers. On the other hand, self-supervised learning approaches, typically used to learn representations from scratch, rely on simple and more scalable frameworks for learning. In this paper, we propose TLDR, a…
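The core idea can be sketched in a few lines (a minimal illustration with hypothetical names, not the authors' implementation): treat a point and one of its k nearest neighbours as two "views" of the same datum, encode both with a linear projection, and apply a Barlow Twins-style redundancy-reduction loss to the cross-correlation of the two embedding batches.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_pairs(X, k=3):
    """For each row of X, return one of its k nearest neighbours (Euclidean)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                        # exclude self-matches
    nn = np.argsort(d2, axis=1)[:, :k]                  # indices of the k-NN
    pick = nn[np.arange(len(X)), rng.integers(0, k, len(X))]
    return X[pick]

def barlow_loss(Za, Zb, lam=5e-3):
    """Barlow Twins-style objective on batch-normalised embeddings:
    diagonal of the cross-correlation pulled to 1, off-diagonal to 0."""
    Za = (Za - Za.mean(0)) / (Za.std(0) + 1e-8)
    Zb = (Zb - Zb.mean(0)) / (Zb.std(0) + 1e-8)
    C = Za.T @ Zb / len(Za)                             # d x d cross-correlation
    on_diag = ((np.diag(C) - 1.0) ** 2).sum()           # invariance term
    off_diag = (C ** 2).sum() - (np.diag(C) ** 2).sum() # redundancy term
    return on_diag + lam * off_diag

X = rng.normal(size=(32, 10))      # toy high-dimensional data
W = rng.normal(size=(10, 2)) * 0.1 # linear encoder to 2-D (trained in practice)
loss = barlow_loss(X @ W, knn_pairs(X) @ W)
```

In the actual method `W` would be optimised by gradient descent on this loss; the sketch only evaluates it once to show the two ingredients (neighbour pairs, redundancy reduction).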

## 4 Citations

### Domain Adaptation for Memory-Efficient Dense Retrieval

- Computer Science, ArXiv
- 2022

It is shown that binary embedding models like BPR and JPQ can perform significantly worse than baselines once a domain shift is involved, and a modification to the training procedure is proposed and combined with a corpus-specific generative procedure, which allows the adaptation of BPR and JPQ to any corpus without requiring labeled training data.

### Unsupervised visualization of image datasets using contrastive learning

- Computer Science, ArXiv
- 2022

Visualization methods based on the nearest neighbor graph, such as t-SNE or UMAP, are widely used for visualizing high-dimensional data. Yet, these approaches only produce meaningful results if the…

### Granularity-aware Adaptation for Image Retrieval over Multiple Tasks

- Computer Science, ECCV
- 2022

The unsupervised Grappa model improves the zero-shot performance of a state-of-the-art self-supervised learning model, and in some cases reaches or surpasses a task-label-aware oracle that selects the most fitting pseudo-granularity per task.

### Barlow constrained optimization for Visual Question Answering

- Computer Science, ArXiv
- 2022

A novel regularization for VQA models, Constrained Optimization using Barlow's theory (COB), improves the information content of the joint space by minimizing redundancy; it reduces the correlation between the learned feature components and thereby disentangles semantic concepts.

## References

Showing 1–10 of 79 references.

### Learning with Neighbor Consistency for Noisy Labels

- Computer Science, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2022

This work presents a method for learning from noisy labels that leverages similarities between training examples in feature space, encouraging the prediction of each example to be similar to its nearest neighbours.

### Whitening for Self-Supervised Representation Learning

- Computer Science, ICML
- 2021

This paper proposes a different direction and a new loss function for self-supervised learning based on whitening of the latent-space features; it is empirically shown that this loss accelerates self-supervised training and that the learned representations are much more effective for downstream tasks than previously published work.
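The whitening step at the heart of that loss can be sketched as follows (a ZCA whitening on assumed toy data; the full method then takes an MSE between whitened views of the same images):

```python
import numpy as np

def whiten(Z, eps=1e-5):
    """ZCA-whiten a batch: zero mean, (approximately) identity covariance."""
    Z = Z - Z.mean(0)
    cov = Z.T @ Z / (len(Z) - 1)
    vals, vecs = np.linalg.eigh(cov + eps * np.eye(cov.shape[0]))
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # inverse matrix square root
    return Z @ W

rng = np.random.default_rng(1)
Z = rng.normal(size=(64, 4))
Z[:, 1] += 0.8 * Z[:, 0]                        # introduce correlation
Zw = whiten(Z)
cov = Zw.T @ Zw / (len(Zw) - 1)                 # should be close to identity
```

Because the whitened batch already has identity covariance, the representation cannot collapse to a single point, which is why a plain MSE between views suffices afterwards.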

### Mining on Manifolds: Metric Learning Without Labels

- Computer Science, Mathematics, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018

A novel unsupervised framework for mining hard training examples; the resulting models are on par with or outperform prior fully or partially supervised models on fine-grained classification and particular object retrieval.

### Dimensionality Reduction by Learning an Invariant Mapping

- Computer Science, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)
- 2006

This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.
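The contrastive loss underlying DrLIM can be sketched as below (an illustrative single-pair version with assumed names, not the paper's code): similar pairs are pulled together, dissimilar pairs are pushed apart until they clear a margin.

```python
def drlim_loss(d, is_similar, margin=1.0):
    """DrLIM-style contrastive loss on the embedding distance d of one pair:
    quadratic attraction for similar pairs, hinged repulsion for dissimilar ones."""
    if is_similar:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```

A dissimilar pair already separated by more than the margin contributes zero loss, which is what lets the mapping stay globally coherent instead of pushing everything apart indefinitely.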

### Sampling Matters in Deep Embedding Learning

- Computer Science, 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017

This paper proposes distance weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin based loss is sufficient to outperform all other loss functions.
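Distance weighted sampling can be sketched as follows (an illustrative version with assumed names): negatives are drawn with probability proportional to the inverse of the analytic density of pairwise distances on the unit hypersphere, so the sample covers the whole distance range instead of being dominated by the mode near sqrt(2).

```python
import numpy as np

def distance_weights(d, n_dim, cutoff=0.5):
    """Sampling weights proportional to the inverse density of pairwise
    distances on the unit (n_dim-1)-sphere, q(d) ~ d^(n-2) * (1 - d^2/4)^((n-3)/2)."""
    d = np.maximum(d, cutoff)  # clip small distances to bound the weights
    log_q = (n_dim - 2) * np.log(d) + (n_dim - 3) / 2 * np.log(
        np.maximum(1.0 - d ** 2 / 4.0, 1e-8))
    w = np.exp(-log_q)         # inverse density, computed in log space
    return w / w.sum()

rng = np.random.default_rng(2)
d = rng.uniform(0.1, 1.9, size=100)      # toy pairwise distances to the anchor
p = distance_weights(d, n_dim=64)
neg = rng.choice(len(d), size=8, p=p)    # indices of 8 sampled negatives
```

The clipping at `cutoff` is what keeps the scheme "stable": without it, very close (often noisy) pairs would receive enormous weights.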

### Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

- Computer Science, NeurIPS
- 2020

This paper proposes an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed; it uses a swapped prediction mechanism, predicting the cluster assignment of one view from the representation of another view.
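The swapped prediction mechanism can be sketched as below (a simplified illustration with assumed names: the real method computes the target assignments with a Sinkhorn-Knopp equal-partition constraint and a stop-gradient, which this sketch replaces with plain softmax assignments):

```python
import numpy as np

def softmax(x, t=0.1):
    e = np.exp(x / t - (x / t).max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def swapped_prediction(z1, z2, prototypes):
    """SwAV-style loss: the cluster assignment of one view serves as the
    target for the prediction made from the other view, in both directions."""
    p1 = softmax(z1 @ prototypes.T)          # soft assignments of view 1
    p2 = softmax(z2 @ prototypes.T)          # soft assignments of view 2
    ce = lambda q, p: -(q * np.log(p + 1e-8)).sum(axis=1).mean()
    return 0.5 * (ce(p2, p1) + ce(p1, p2))   # swap: q from one view, p from the other

rng = np.random.default_rng(4)
z1, z2 = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
protos = rng.normal(size=(5, 8))             # 5 learnable prototype vectors
loss = swapped_prediction(z1, z2, protos)
```

Comparing views through their cluster assignments is what removes the need for pairwise feature comparisons across the batch.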

### Exploring Simple Siamese Representation Learning

- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021

Surprising empirical results are reported that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.

### VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning

- Computer Science, ICLR
- 2022

This paper introduces VICReg (Variance-Invariance-Covariance Regularization), a method that explicitly avoids the collapse problem with a simple regularization term on the variance of the embeddings along each dimension individually.
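The variance term described above can be sketched as follows (an illustrative version with assumed names): a hinge on the per-dimension standard deviation that penalises any embedding dimension whose spread falls below a target, so a collapsed batch is penalised while a well-spread one is not.

```python
import numpy as np

def vicreg_variance(Z, gamma=1.0, eps=1e-4):
    """VICReg-style variance term: hinge loss on the per-dimension standard
    deviation of the embedding batch Z, pushing each std above gamma."""
    std = np.sqrt(Z.var(axis=0) + eps)
    return np.maximum(0.0, gamma - std).mean()

rng = np.random.default_rng(3)
Z_spread = rng.normal(scale=2.0, size=(128, 8))  # well-spread embeddings
Z_collapsed = np.zeros((128, 8))                 # fully collapsed batch
```

The full VICReg loss combines this with an invariance (MSE between views) and a covariance (decorrelation) term; the variance term alone is what rules out the trivial constant solution.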

### Learning Local Feature Descriptors Using Convex Optimisation

- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2014

It is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem, selecting the regions using sparsity; an extension of the learning formulation to a weakly supervised case allows the descriptors to be learned from unannotated image collections.

### Unsupervised Feature Learning via Non-parametric Instance Discrimination

- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018

This work formulates this intuition as a non-parametric classification problem at the instance level, and uses noise-contrastive estimation to tackle the computational challenges imposed by the large number of instance classes.