# Algorithms for metric learning via contrastive embeddings

```bibtex
@inproceedings{Centurion2019AlgorithmsFM,
  title     = {Algorithms for metric learning via contrastive embeddings},
  author    = {Diego Ihara Centurion and Neshat Mohammadi and Anastasios Sidiropoulos},
  booktitle = {SoCG},
  year      = {2019}
}
```

We study the problem of supervised learning of a metric space under discriminative constraints. Given a universe $X$ and sets ${\cal S}, {\cal D}\subset {X \choose 2}$ of similar and dissimilar pairs, we seek a mapping $f:X\to Y$, into some target metric space $M=(Y,\rho)$, such that similar objects are mapped to points at distance at most $u$, and dissimilar objects are mapped to points at distance at least $\ell$. More generally, the goal is to find a mapping of maximum accuracy (that is…
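The constraint-satisfaction objective in the abstract can be sketched as a small checker. This is illustrative only: the function name, the toy embedding, and the thresholds below are my own choices, not from the paper.

```python
import numpy as np

def constraint_accuracy(f, similar, dissimilar, u, l):
    """Fraction of pairs whose embedded (Euclidean) distance satisfies its
    constraint: similar pairs must land within distance u, dissimilar pairs
    at least l apart. `f` maps each object to a point in R^d."""
    satisfied = 0
    for a, b in similar:
        if np.linalg.norm(f[a] - f[b]) <= u:
            satisfied += 1
    for a, b in dissimilar:
        if np.linalg.norm(f[a] - f[b]) >= l:
            satisfied += 1
    return satisfied / (len(similar) + len(dissimilar))

# Toy universe X = {0, 1, 2, 3} embedded in the plane.
f = {0: np.array([0.0, 0.0]), 1: np.array([0.5, 0.0]),
     2: np.array([3.0, 0.0]), 3: np.array([3.0, 0.5])}
S = [(0, 1), (2, 3)]  # similar pairs: want distance <= u
D = [(0, 2), (1, 3)]  # dissimilar pairs: want distance >= l
print(constraint_accuracy(f, S, D, u=1.0, l=2.0))  # all four constraints hold -> 1.0
```

The optimization problem studied in the paper is the harder direction: choosing $f$ to maximize this fraction, rather than merely evaluating a given embedding.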

## One Citation

Learning Lines with Ordinal Constraints

- Computer Science, Mathematics · APPROX-RANDOM
- 2020

This work studies the problem of finding a mapping from a set of points into the real line, under ordinal triple constraints, and presents an approximation algorithm for the dense case of this problem.

## References

Showing 1–10 of 34 references

Metric clustering via consistent labeling

- Computer Science · SODA '08
- 2008

This work is the first to emphasize relative guarantees, which compare the produced solution to the optimal one for the input at hand, and provides a family of linear programming relaxations and simple randomized rounding procedures that achieve provably good approximation guarantees.

Distance Metric Learning for Large Margin Nearest Neighbor Classification

- Computer Science · NIPS
- 2005

This paper shows how to learn a Mahalanobis distance metric for kNN classification from labeled examples in a globally integrated manner, and finds that metrics trained in this way lead to significant improvements in kNN classification.
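As a concrete sketch of what is being learned here (not the LMNN training algorithm itself), a Mahalanobis metric is a distance weighted by a positive semidefinite matrix $M$; the variable names below are illustrative.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)).
    M must be positive semidefinite; M = I recovers Euclidean distance."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
M = np.array([[2.0, 0.0],   # stretch the first axis,
              [0.0, 0.5]])  # shrink the second
print(mahalanobis(x, y, np.eye(2)))  # Euclidean: sqrt(2) ≈ 1.414
print(mahalanobis(x, y, M))          # weighted: sqrt(2.5) ≈ 1.581
```

Metric-learning methods such as LMNN optimize $M$ from the labeled pairs so that same-class neighbors end up close and differently-labeled points are pushed apart.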

Ordinal Embedding: Approximation Algorithms and Dimensionality Reduction

- Computer Science · APPROX-RANDOM
- 2008

Two polynomial-time constant-factor approximation algorithms for minimizing the relaxation in an embedding of an unweighted graph into a line metric and into a tree metric are developed.

Information-theoretic metric learning

- Computer Science · ICML '07
- 2007

An information-theoretic approach to learning a Mahalanobis distance function that can handle a wide variety of constraints and can optionally incorporate a prior on the distance function; regret bounds are derived for the resulting algorithm.

Reality Distortion: Exact and Approximate Algorithms for Embedding into the Line

- Computer Science · 2015 IEEE 56th Annual Symposium on Foundations of Computer Science
- 2015

Algorithms for the problem of minimum distortion embeddings of finite metric spaces into the real line (or a finite subset of the line) are described, which yield a quasipolynomial running time for constant δ and polynomial D.

Metric Learning: A Survey

- Computer Science · Found. Trends Mach. Learn.
- 2013

This survey presents an overview of existing research on this topic, including recent progress on scaling to high-dimensional feature spaces and to data sets with an extremely large number of data points.

Inapproximability for metric embeddings into ℝ^d

- Computer Science, Mathematics
- 2010

This work considers the problem of computing the smallest possible distortion for embedding of a given n-point metric space into $\mathbb{R}^d$, where $d$ is fixed (and small), and derives inapproximability with a factor roughly $n^{1/(2^{2d}-10)}$ for every fixed $d \ge 2$ by a conceptually very simple reduction.

Probabilistic approximation of metric spaces and its algorithmic applications

- Computer Science, Mathematics · Proceedings of 37th Conference on Foundations of Computer Science
- 1996

It is proved that any metric space can be probabilistically approximated by hierarchically well-separated trees (HSTs) with polylogarithmic distortion.

Learning task-specific similarity

- Computer Science
- 2005

An algorithmic approach to learning similarity from examples of objects deemed similar under the task-specific notion of similarity at hand, plus optional negative examples. The learned similarity makes it possible to predict whether two previously unseen examples are similar and to efficiently search a very large database for examples similar to a query.