Corpus ID: 239016330

TLDR: Twin Learning for Dimensionality Reduction

Yannis Kalantidis, Carlos Lassance, Jon Almazán, Diane Larlus
Figure 1: Overview of the proposed TLDR, a dimensionality reduction method. Given a set of feature vectors in a generic input space, we use nearest neighbors to define a set of feature pairs whose proximity we want to preserve. We then learn a dimensionality-reduction function (the encoder) by encouraging neighbors in the input space to have similar representations. We learn it jointly with an auxiliary projector that produces high dimensional representations, where we compute the Barlow Twins… 
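The caption above describes the core objective: encoder outputs for neighboring inputs are passed through a projector, and a Barlow Twins loss is computed on the pairs of high-dimensional projections. Below is a minimal NumPy sketch of that Barlow Twins objective, not the authors' implementation; the function name, batch layout, and the off-diagonal weight `lam` are illustrative assumptions.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins objective on two batches of projected features.

    z_a, z_b: (batch, dim) arrays holding the projector outputs for the
    two members of each neighbor pair. lam weights the redundancy-
    reduction (off-diagonal) term; its value here is illustrative.
    """
    b, d = z_a.shape
    # Standardize each feature dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # Empirical cross-correlation matrix between the two views.
    c = z_a.T @ z_b / b
    # Push matched dimensions toward correlation 1...
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    # ...and decorrelate distinct dimensions.
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag
```

Feeding a batch together with itself yields a near-zero loss, while two unrelated batches are penalized, which is the behavior the pairing scheme relies on.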

Dimensionality Reduction by Learning an Invariant Mapping
  • R. Hadsell, S. Chopra, Yann LeCun
  • Mathematics, Computer Science
    2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)
  • 2006
This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.
TriMap: Large-scale Dimensionality Reduction Using Triplets
This work introduces a dimensionality-reduction technique based on triplet constraints that preserves the global structure of the data better than other commonly used methods such as t-SNE, LargeVis, and UMAP.
Sampling Matters in Deep Embedding Learning
This paper proposes distance-weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin-based loss is sufficient to outperform all other loss functions.
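The margin-based loss mentioned in the summary above can be sketched as a hinge on pairwise distances; the parameter values `beta` and `alpha` below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def margin_loss(d, y, beta=1.2, alpha=0.2):
    """Margin-based pairwise loss (sketch).

    d: array of pairwise embedding distances.
    y: +1 for positive (matching) pairs, -1 for negative pairs.
    beta acts as a learnable boundary between positives and negatives;
    alpha is the margin of separation enforced around it.
    """
    return np.maximum(0.0, alpha + y * (d - beta))
```

A positive pair already closer than `beta - alpha` incurs no loss, and a negative pair farther than `beta + alpha` incurs none either; only pairs inside the margin band contribute gradient.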
Mining on Manifolds: Metric Learning Without Labels
This work presents a novel unsupervised framework for mining hard training examples; the resulting models are on par with, or outperform, prior fully or partially supervised models on fine-grained classification and particular-object retrieval.
Whitening for Self-Supervised Representation Learning
This paper proposes a different direction and a new loss function for self-supervised learning based on whitening the latent-space features, and empirically shows that this loss accelerates self-supervised training and that the learned representations are much more effective for downstream tasks than previously published work.
Deep Image Retrieval: Learning Global Representations for Image Search
This work proposes a novel approach for instance-level image retrieval that produces a global, compact, fixed-length representation for each image by aggregating many region-wise descriptors, leveraging a ranking framework and projection weights to build the region features.
UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
The UMAP algorithm is competitive with t-SNE in visualization quality, arguably preserves more of the global structure, and has superior run-time performance.
Guided Similarity Separation for Image Retrieval
This work proposes a different approach where graph convolutional networks are leveraged to directly encode neighbor information into image descriptors, and introduces an unsupervised loss based on pairwise separation of image similarities.
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
This work proposes a geometrically motivated algorithm for representing high-dimensional data that provides a computationally efficient approach to nonlinear dimensionality reduction, with locality-preserving properties and a natural connection to clustering.
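The Laplacian Eigenmaps recipe summarized above (neighborhood graph, graph Laplacian, bottom eigenvectors) can be sketched in a few lines. This is a dense, toy-scale illustration under simple assumptions (binary kNN weights rather than heat-kernel weights, unnormalized Laplacian), not the paper's implementation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian

def laplacian_eigenmaps(X, n_neighbors=5, n_components=2):
    """Toy Laplacian Eigenmaps: kNN graph -> Laplacian -> bottom eigenvectors."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances (dense, for small n only).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Symmetric kNN adjacency with binary weights (a simplifying assumption).
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]  # skip self at position 0
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)
    # Unnormalized graph Laplacian L = D - W.
    L = laplacian(W)
    # Eigenvectors for the smallest nontrivial eigenvalues give the embedding.
    _, vecs = eigh(L)
    return vecs[:, 1:n_components + 1]
```

The first eigenvector (constant, eigenvalue 0) is discarded; the next `n_components` eigenvectors place graph-neighboring points close together in the low-dimensional embedding.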
Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment
We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local…