Corpus ID: 239016330

TLDR: Twin Learning for Dimensionality Reduction
Yannis Kalantidis, Carlos Lassance, Jon Almazán, Diane Larlus
Figure 1: Overview of the proposed TLDR, a dimensionality reduction method. Given a set of feature vectors in a generic input space, we use nearest neighbors to define a set of feature pairs whose proximity we want to preserve. We then learn a dimensionality-reduction function (the encoder) by encouraging neighbors in the input space to have similar representations. We learn it jointly with an auxiliary projector that produces high dimensional representations, where we compute the Barlow Twins… 
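The two ingredients of the pipeline described in the caption can be sketched compactly: mining nearest-neighbor pairs in the input space, and a Barlow Twins-style redundancy-reduction loss computed on the projector outputs of a pair. This is a minimal numpy illustration, not the paper's implementation; the function names, the brute-force neighbor search, and the `lam` weight are illustrative choices.

```python
import numpy as np

def knn_pairs(X, k=3):
    """Index pairs (i, j) where j is among the k nearest neighbors of i,
    using brute-force squared Euclidean distances."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    nn = np.argsort(d2, axis=1)[:, :k]
    return [(i, j) for i in range(len(X)) for j in nn[i]]

def barlow_twins_loss(za, zb, lam=5e-3):
    """Barlow Twins-style loss on two batches of projector outputs:
    standardize each dimension, form the cross-correlation matrix,
    pull its diagonal to 1 and its off-diagonal entries to 0."""
    n = za.shape[0]
    za = (za - za.mean(0)) / (za.std(0) + 1e-8)
    zb = (zb - zb.mean(0)) / (zb.std(0) + 1e-8)
    c = za.T @ zb / n
    on_diag = ((np.diag(c) - 1) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag
```

In training, `za` and `zb` would be the projector outputs for the two members of each neighbor pair, and the encoder/projector weights would be updated by gradient descent on this loss.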


Barlow constrained optimization for Visual Question Answering
A novel regularization for VQA models, Constrained Optimization using Barlow's theory (COB), which improves the information content of the joint space by minimizing redundancy and reducing the correlation between learned feature components, thereby disentangling semantic concepts.
Domain Adaptation for Memory-Efficient Dense Retrieval
Dense retrievers encode documents into fixed-dimensional embeddings. However, storing all the document embeddings within an index produces bulky indexes that are expensive to serve. Recently, BPR…


Dimensionality Reduction by Learning an Invariant Mapping
This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.
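The pairwise loss behind DrLIM can be summarized in a few lines: similar pairs are pulled together in the output space, dissimilar pairs are pushed at least a margin apart. The sketch below is an illustrative simplification (single pair, fixed margin), not the paper's exact formulation.

```python
import numpy as np

def contrastive_loss(y1, y2, similar, margin=1.0):
    """DrLIM-style pairwise loss on two mapped outputs y1, y2:
    similar pairs pay the squared distance between them; dissimilar
    pairs pay only when they fall inside the margin."""
    d = np.linalg.norm(y1 - y2)
    if similar:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```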
TriMap: Large-scale Dimensionality Reduction Using Triplets
A dimensionality reduction technique based on triplet constraints is introduced; it preserves the global accuracy of the data better than other commonly used methods such as t-SNE, LargeVis, and UMAP.
Sampling Matters in Deep Embedding Learning
This paper proposes distance-weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin-based loss is sufficient to outperform all other loss functions.
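The "simple margin-based loss" mentioned above penalizes positives that are farther than a boundary and negatives that are closer than it. A minimal sketch, with `beta` and `margin` values chosen for illustration rather than taken from the paper:

```python
def margin_loss(d_pos, d_neg, beta=1.2, margin=0.2):
    """Margin-based pair loss: positives should lie closer than
    beta - margin, negatives farther than beta + margin; distances
    in the allowed regions incur no cost."""
    return max(0.0, d_pos - beta + margin) + max(0.0, beta - d_neg + margin)
```

In practice `beta` can also be a learned, per-class boundary, and the pairs fed to this loss are the ones selected by distance-weighted sampling.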
Mining on Manifolds: Metric Learning Without Labels
A novel unsupervised framework for mining hard training examples; the resulting models are on par with, or outperform, prior fully or partially supervised models on fine-grained classification and particular object retrieval.
Deep Image Retrieval: Learning Global Representations for Image Search
This work proposes a novel approach for instance-level image retrieval that produces a global, compact, fixed-length representation for each image by aggregating many region-wise descriptors, leveraging a ranking framework and projection weights to build the region features.
UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance.
Guided Similarity Separation for Image Retrieval
This work proposes a different approach where graph convolutional networks are leveraged to directly encode neighbor information into image descriptors, and introduces an unsupervised loss based on pairwise separation of image similarities.
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
This work proposes a geometrically motivated algorithm for representing the high-dimensional data that provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering.
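The core of the Laplacian Eigenmaps algorithm described above is an eigendecomposition of the graph Laplacian of a neighborhood graph. Below is a deliberately simplified numpy sketch: it uses binary edge weights and the unnormalized Laplacian, whereas the original method uses heat-kernel weights and solves the generalized eigenproblem L f = λ D f.

```python
import numpy as np

def laplacian_eigenmaps(X, k=3, dim=2):
    """Embed points via the bottom nonzero eigenvectors of the graph
    Laplacian of a symmetrized k-nearest-neighbor graph."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[:k]:
            W[i, j] = W[j, i] = 1.0   # binary weights for simplicity
    L = np.diag(W.sum(1)) - W         # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]         # skip the constant eigenvector
```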
Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment
We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local…
Negative Evidences and Co-occurences in Image Retrieval: The Benefit of PCA and Whitening
The paper addresses large-scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different…
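PCA with whitening, the baseline studied in the entry above, is short enough to state in full: project centered descriptors onto the top principal directions and rescale each direction to unit variance. A minimal numpy sketch (the function name and `eps` stabilizer are illustrative):

```python
import numpy as np

def pca_whiten(X, dim, eps=1e-8):
    """Project centered descriptors onto the top `dim` principal
    directions and rescale each to unit variance (whitening)."""
    Xc = X - X.mean(0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Divide each principal direction by its standard deviation so the
    # whitened output has (approximately) identity covariance.
    W = Vt[:dim] / (S[:dim, None] / np.sqrt(len(X) - 1) + eps)
    return Xc @ W.T
```

In retrieval pipelines the whitened vectors are typically L2-normalized afterwards before computing dot-product similarities.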