# TLDR: Twin Learning for Dimensionality Reduction

@article{Kalantidis2021TLDRTL, title={TLDR: Twin Learning for Dimensionality Reduction}, author={Yannis Kalantidis and Carlos Lassance and Jon Almaz{\'a}n and Diane Larlus}, journal={ArXiv}, year={2021}, volume={abs/2110.09455} }

Figure 1: Overview of the proposed TLDR, a dimensionality reduction method. Given a set of feature vectors in a generic input space, we use nearest neighbors to define a set of feature pairs whose proximity we want to preserve. We then learn a dimensionality-reduction function (the encoder) by encouraging neighbors in the input space to have similar representations. We learn it jointly with an auxiliary projector that produces high-dimensional representations, where we compute the Barlow Twins…
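The pipeline in the caption — mine nearest-neighbor pairs, then apply a Barlow Twins-style redundancy-reduction loss to the projector outputs of each pair — can be sketched in plain numpy. This is an illustrative sketch, not the authors' implementation; the function names and the brute-force k-NN search are my own simplifications.

```python
import numpy as np

def knn_pairs(X, k=3):
    """Index pairs (i, j) where j is among the k nearest neighbors of i
    (Euclidean distance, self excluded) -- the training pairs TLDR preserves."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]
    return [(i, j) for i in range(len(X)) for j in nn[i]]

def barlow_twins_loss(za, zb, lam=5e-3):
    """Barlow Twins objective: standardize the two batches of projector
    outputs, then drive their cross-correlation matrix toward the identity
    (diagonal -> 1, off-diagonal -> 0)."""
    n = za.shape[0]
    za = (za - za.mean(0)) / (za.std(0) + 1e-8)
    zb = (zb - zb.mean(0)) / (zb.std(0) + 1e-8)
    c = za.T @ zb / n                          # (d, d) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag
```

In the actual method the two inputs to the loss are the projector outputs of the two members of a neighbor pair; feeding a batch through twice with identical features drives the on-diagonal term to zero, which is a quick sanity check.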

## References

Showing 1–10 of 66 references.

Dimensionality Reduction by Learning an Invariant Mapping

- Mathematics, Computer Science · 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)
- 2006

This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.

TriMap: Large-scale Dimensionality Reduction Using Triplets

- Computer Science, Mathematics · ArXiv
- 2019

A dimensionality reduction technique based on triplet constraints that preserves the global accuracy of the data better than the other commonly used methods such as t-SNE, LargeVis, and UMAP is introduced.
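A triplet constraint of the kind TriMap uses says: point j should stay closer to anchor i than point k does. One common way to score such a constraint is a ratio of squared distances; the sketch below is a simplified stand-in for TriMap's weighted triplet objective, not its exact form.

```python
import numpy as np

def triplet_term(yi, yj, yk, eps=1e-8):
    """Penalty for a triplet (i, j, k) where j should be closer to i than k:
    approaches 0 when the ordering holds by a wide margin, 1 when it is
    badly violated."""
    d_ij = ((yi - yj) ** 2).sum()
    d_ik = ((yi - yk) ** 2).sum()
    return d_ij / (d_ij + d_ik + eps)
```

Summing such terms over sampled triplets and minimizing over the low-dimensional coordinates encourages the embedding to respect the distance orderings of the input space.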

Sampling Matters in Deep Embedding Learning

- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017

This paper proposes distance weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin based loss is sufficient to outperform all other loss functions.
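Distance-weighted sampling draws negatives with probability inversely proportional to the analytic density q(d) of pairwise distances between points on a unit hypersphere, so training sees examples spread uniformly over distance rather than clumped at the mode of q. A hedged numpy sketch (the clipping constant and function name are illustrative):

```python
import numpy as np

def distance_weighted_sample(anchor, negatives, dim, rng, cutoff=0.5):
    """Sample one negative index with weight proportional to 1/q(d), where
    q(d) ~ d^(dim-2) * (1 - d^2/4)^((dim-3)/2) is the density of pairwise
    distances on the unit (dim-1)-sphere.  Distances are clipped at `cutoff`
    to avoid over-sampling noisy, very close negatives."""
    d = np.maximum(np.linalg.norm(negatives - anchor, axis=1), cutoff)
    log_q = (dim - 2.0) * np.log(d) \
          + 0.5 * (dim - 3.0) * np.log(np.maximum(1.0 - d ** 2 / 4.0, 1e-8))
    w = np.exp(-log_q - (-log_q).max())    # subtract max for stability
    w /= w.sum()
    return rng.choice(len(negatives), p=w)
```

The sampled negatives are then fed to the paper's simple margin-based loss; the sampling scheme, not the loss, is the main contribution.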

Mining on Manifolds: Metric Learning Without Labels

- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018

A novel unsupervised framework for hard training example mining is introduced; the resulting models are on par with, or outperform, prior fully or partially supervised models for fine-grained classification and particular object retrieval.

Whitening for Self-Supervised Representation Learning

- Computer Science, Mathematics · ICML
- 2021

This paper proposes a different direction and a new loss function for self-supervised learning based on whitening of the latent-space features, and empirically shows that this loss accelerates self-supervised training and that the learned representations are much more effective for downstream tasks than previously published work.
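The normalization at the heart of such whitening-based losses is ZCA whitening of a batch of features: zero mean and (approximately) identity covariance, which rules out collapsed representations by construction. A minimal numpy sketch, not the paper's implementation:

```python
import numpy as np

def zca_whiten(z, eps=1e-5):
    """ZCA-whiten a batch (n, d) of features: subtract the mean, then
    rotate/scale so the sample covariance is close to the identity.
    `eps` regularizes near-zero eigenvalues."""
    z = z - z.mean(0)
    cov = z.T @ z / (len(z) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return z @ W
```

In a whitening-based objective, two augmented views are whitened and a simple MSE between them is minimized; because the covariance is pinned to the identity, the trivial constant solution is unavailable.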

Deep Image Retrieval: Learning Global Representations for Image Search

- Computer Science · ECCV
- 2016

This work proposes a novel approach for instance-level image retrieval that produces a global, compact, fixed-length representation for each image by aggregating many region-wise descriptors, leveraging a ranking framework and projection weights to build the region features.

UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction

- Computer Science, Mathematics · ArXiv
- 2018

The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance.

Guided Similarity Separation for Image Retrieval

- Computer Science · NeurIPS
- 2019

This work proposes a different approach where graph convolutional networks are leveraged to directly encode neighbor information into image descriptors, and introduces an unsupervised loss based on pairwise separation of image similarities.
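Encoding neighbor information directly into descriptors, as described above, amounts at its simplest to repeatedly averaging each descriptor with its graph neighbors. The sketch below is a minimal stand-in for the graph-convolutional encoding used in GSS, not the paper's actual model; the function name and graph construction are illustrative.

```python
import numpy as np

def propagate_descriptors(X, A, steps=2):
    """Smooth image descriptors over a similarity graph: each step replaces
    a descriptor with the row-normalized weighted average of its neighbors,
    then re-normalizes rows to unit length."""
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic propagation matrix
    for _ in range(steps):
        X = P @ X
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return X
```

After propagation, images of the same instance, which share neighbors in the graph, end up with more similar descriptors than raw pairwise comparison would give.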

Laplacian Eigenmaps for Dimensionality Reduction and Data Representation

- Computer Science, Mathematics · Neural Computation
- 2003

This work proposes a geometrically motivated algorithm for representing the high-dimensional data that provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering.
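The Laplacian Eigenmaps construction is compact enough to sketch end to end: build a symmetrized k-NN graph, form its graph Laplacian, and embed each point with the lowest nontrivial eigenvectors. This sketch uses binary edge weights for simplicity (the paper also discusses heat-kernel weights) and assumes the k-NN graph is connected.

```python
import numpy as np

def laplacian_eigenmaps(X, k=5, dim=2):
    """Embed n points into `dim` dimensions via the bottom nontrivial
    eigenvectors of the unnormalized Laplacian of a symmetrized k-NN graph."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[:k]] = 1.0   # binary k-NN edges
    W = np.maximum(W, W.T)                  # symmetrize the adjacency
    L = np.diag(W.sum(1)) - W               # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]               # skip the constant eigenvector
```

The locality-preserving property follows from the Laplacian quadratic form: minimizing y^T L y = sum of w_ij (y_i - y_j)^2 keeps graph neighbors close in the embedding.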

Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment

- Mathematics
- 2004

We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local…