TLDR: Twin Learning for Dimensionality Reduction
@article{Kalantidis2021TLDRTL,
  title   = {TLDR: Twin Learning for Dimensionality Reduction},
  author  = {Yannis Kalantidis and Carlos Lassance and Jon Almaz{\'a}n and Diane Larlus},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.09455}
}
Figure 1: Overview of the proposed TLDR, a dimensionality reduction method. Given a set of feature vectors in a generic input space, we use nearest neighbors to define a set of feature pairs whose proximity we want to preserve. We then learn a dimensionality-reduction function (the encoder) by encouraging neighbors in the input space to have similar representations. We learn it jointly with an auxiliary projector that produces high dimensional representations, where we compute the Barlow Twins…
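For intuition, here is a minimal NumPy sketch of the redundancy-reduction objective the caption refers to, following the published Barlow Twins loss (an on-diagonal invariance term plus an off-diagonal redundancy term over the cross-correlation matrix of projector outputs); function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Redundancy-reduction loss between projector outputs of two
    batches of paired samples (shape: batch x dim)."""
    # Standardize each feature dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    n = z_a.shape[0]
    c = (z_a.T @ z_b) / n                                 # cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()             # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()   # redundancy term
    return on_diag + lam * off_diag
```

In TLDR the two inputs to this loss are a sample and one of its nearest neighbors in the input space, rather than two augmentations of the same image.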
2 Citations
Barlow constrained optimization for Visual Question Answering
- Computer Science · ArXiv
- 2022
Proposes a novel regularization for VQA models, Constrained Optimization using Barlow's theory (COB), which improves the information content of the joint space by minimizing redundancy: it reduces the correlation between the learned feature components and thereby disentangles semantic concepts.
Domain Adaptation for Memory-Efficient Dense Retrieval
- Computer Science
- 2022
Dense retrievers encode documents into fixed dimensional embeddings. However, storing all the document embeddings within an index produces bulky indexes which are expensive to serve. Recently, BPR…
References
Showing 1–10 of 66 references
Dimensionality Reduction by Learning an Invariant Mapping
- Computer Science · 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)
- 2006
This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.
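As a reminder of the mechanics, a minimal sketch of the pairwise contrastive loss introduced with DrLIM; the margin value and names are illustrative.

```python
import numpy as np

def contrastive_loss(x1, x2, similar, margin=1.0):
    """DrLIM-style contrastive loss for one pair of embeddings:
    pull similar pairs together, push dissimilar pairs at least
    `margin` apart."""
    d = np.linalg.norm(x1 - x2)
    if similar:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```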
TriMap: Large-scale Dimensionality Reduction Using Triplets
- Computer Science · ArXiv
- 2019
Introduces a dimensionality reduction technique based on triplet constraints that preserves the global accuracy of the data better than other commonly used methods such as t-SNE, LargeVis, and UMAP.
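For illustration, a generic triplet-margin constraint (a simplification; TriMap's actual objective weights triplets rather than using a hard margin):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet constraint: the anchor should be closer to the
    positive than to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```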
Sampling Matters in Deep Embedding Learning
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This paper proposes distance weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin based loss is sufficient to outperform all other loss functions.
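A sketch of the distance-weighted sampling idea, assuming unit-normalized embeddings; the density formula follows the paper, while the clipping constants and function name are illustrative.

```python
import numpy as np

def distance_weighted_sample(dists, dim, cutoff=0.5):
    """Pick one negative example, weighting inversely by the density
    q(d) of pairwise distances on the unit hypersphere, so that
    informative distances are sampled more uniformly.
    `cutoff` clips small distances to keep weights bounded."""
    d = np.clip(dists, cutoff, 1.99)   # unit-norm distances lie in (0, 2)
    # log q(d) = (dim - 2) * log d + (dim - 3)/2 * log(1 - d^2 / 4)
    log_q = (dim - 2) * np.log(d) + 0.5 * (dim - 3) * np.log(1.0 - 0.25 * d ** 2)
    log_w = -log_q - (-log_q).max()    # stabilize before exponentiating
    w = np.exp(log_w)
    return np.random.choice(len(dists), p=w / w.sum())
```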
Mining on Manifolds: Metric Learning Without Labels
- Computer Science, Mathematics · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
Proposes a novel unsupervised framework for hard training example mining; the resulting models are on par with, or outperform, prior fully or partially supervised models on fine-grained classification and particular object retrieval.
Deep Image Retrieval: Learning Global Representations for Image Search
- Computer Science · ECCV
- 2016
This work proposes a novel approach for instance-level image retrieval that produces a compact, fixed-length global representation for each image by aggregating many region-wise descriptors, leveraging a ranking framework and projection weights to build the region features.
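A much-simplified sketch of the aggregation step (sum-pooling of normalized region descriptors); the paper's learned projection weights and ranking loss are omitted.

```python
import numpy as np

def aggregate_regions(region_feats):
    """Aggregate region descriptors (n_regions x dim) into a single
    global image descriptor by L2-normalizing each region, sum-pooling,
    and re-normalizing the result."""
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    g = r.sum(axis=0)
    return g / np.linalg.norm(g)
```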
UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
- Computer Science · ArXiv
- 2018
The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance.
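For reference, typical usage of the umap-learn package (the parameter values shown are the library defaults, written out explicitly):

```python
import numpy as np
import umap  # pip install umap-learn

X = np.random.rand(1000, 128)          # placeholder feature vectors
reducer = umap.UMAP(n_neighbors=15, n_components=2, metric="euclidean")
X_2d = reducer.fit_transform(X)        # (1000, 2) low-dimensional embedding
```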
Guided Similarity Separation for Image Retrieval
- Computer Science · NeurIPS
- 2019
This work proposes a different approach where graph convolutional networks are leveraged to directly encode neighbor information into image descriptors, and introduces an unsupervised loss based on pairwise separation of image similarities.
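As a toy stand-in for the learned graph convolutions, descriptors can be refined by propagating them over a row-normalized kNN affinity matrix; everything here is illustrative, not the paper's architecture.

```python
import numpy as np

def propagate(X, A, alpha=0.5, steps=3):
    """Refine descriptors X (n x dim) by mixing each vector with its
    graph neighbors, given a row-normalized affinity matrix A (n x n)."""
    for _ in range(steps):
        X = (1 - alpha) * X + alpha * (A @ X)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return X
```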
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Computer Science · Neural Computation
- 2003
This work proposes a geometrically motivated algorithm for representing high-dimensional data, providing a computationally efficient approach to nonlinear dimensionality reduction with locality-preserving properties and a natural connection to clustering.
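scikit-learn ships an implementation of this algorithm as SpectralEmbedding; a minimal usage sketch with placeholder data:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding  # Laplacian Eigenmaps

X = np.random.rand(500, 64)                 # placeholder features
emb = SpectralEmbedding(n_components=2, n_neighbors=10)
X_2d = emb.fit_transform(X)                 # locality-preserving embedding
```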
Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment
- Computer Science, Mathematics
- 2004
We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local…
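Local Tangent Space Alignment (LTSA) is available in scikit-learn through LocallyLinearEmbedding; a minimal usage sketch with placeholder data:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

X = np.random.rand(500, 64)                 # placeholder features
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
X_2d = ltsa.fit_transform(X)                # tangent-space-aligned embedding
```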
Negative Evidences and Co-occurences in Image Retrieval: The Benefit of PCA and Whitening
- Computer Science · ECCV
- 2012
The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different…
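A minimal NumPy sketch of plain PCA whitening for short vector representations (the paper's specific improvements are not shown):

```python
import numpy as np

def pca_whiten(X, k, eps=1e-8):
    """Center X (n x dim), project onto the top-k principal components,
    and whiten by dividing each component by its standard deviation."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T / (S[:k] / np.sqrt(len(X) - 1) + eps)  # whitening projection
    return Xc @ P
```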