Corpus ID: 17204725

Multiview Triplet Embedding: Learning Attributes in Multiple Maps

@inproceedings{Amid2015MultiviewTE,
  title={Multiview Triplet Embedding: Learning Attributes in Multiple Maps},
  author={Ehsan Amid and Antti Ukkonen},
  booktitle={ICML},
  year={2015}
}
For humans, it is usually easier to make statements about the similarity of objects in relative, rather than absolute terms. Moreover, subjective comparisons of objects can be based on a number of different and independent attributes. For example, objects can be compared based on their shape, color, etc. In this paper, we consider the problem of uncovering these hidden attributes given a set of relative distance judgments in the form of triplets. The attribute that was used to generate a… 
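The setting described above can be made concrete with a small sketch. This is not the paper's multiview method; it follows the single-map t-STE formulation of van der Maaten and Weinberger (listed under "Stochastic triplet embedding" in the references), with toy data, a numerical gradient, and all names chosen for illustration:

```python
# Sketch: learning a 2-D embedding from relative triplets (i, j, k),
# each meaning "object i is closer to j than to k", by minimizing the
# t-STE negative log-likelihood with Student-t similarities (alpha d.o.f.).
import numpy as np

rng = np.random.default_rng(0)

def t_ste_step(X, triplets, lr=0.5, alpha=1.0):
    """One gradient-descent step on the t-STE loss (numerical gradient)."""
    def nll(flat):
        Y = flat.reshape(X.shape)
        total = 0.0
        for i, j, k in triplets:
            dij = np.sum((Y[i] - Y[j]) ** 2)
            dik = np.sum((Y[i] - Y[k]) ** 2)
            num = (1 + dij / alpha) ** (-(alpha + 1) / 2)
            den = num + (1 + dik / alpha) ** (-(alpha + 1) / 2)
            total -= np.log(num / den)  # want P(triplet satisfied) -> 1
        return total
    flat = X.ravel()
    grad = np.zeros_like(flat)
    eps = 1e-5
    base = nll(flat)
    # Forward-difference gradient keeps the sketch short; real
    # implementations use the closed-form gradient instead.
    for d in range(flat.size):
        pert = flat.copy()
        pert[d] += eps
        grad[d] = (nll(pert) - base) / eps
    return (flat - lr * grad).reshape(X.shape), base

# Toy data: triplets say objects 0 and 1 belong together, 2 is the odd one out.
X = rng.normal(size=(3, 2))
triplets = [(0, 1, 2), (1, 0, 2)]
for _ in range(50):
    X, loss = t_ste_step(X, triplets)
```

The multiview problem in this paper additionally infers which hidden attribute (map) generated each triplet, so that mutually inconsistent triplets can be satisfied in different maps rather than forced into one.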

Figures and Tables from this paper

t-Exponential Triplet Embedding
TLDR
This paper introduces a new technique, called t-Exponential Triplet Embedding (t-ETE), that produces high-quality embeddings even in the presence of a significant amount of noise in the triplets, giving rise to new insights on real-world data that were impossible to observe with previous techniques.
Towards Latent Attribute Discovery From Triplet Similarities
TLDR
This paper introduces Latent Similarity Networks (LSNs): a simple and effective technique to discover the underlying latent notions of similarity in data without any explicit attribute supervision.
Jointly Learning Multiple Measures of Similarities from Triplet Comparisons
TLDR
This work considers the problem of mapping objects into view-specific embeddings where the distance between them is consistent with the similarity comparisons of the form "from the t-th view, object A is more similar to B than to C".
Large scale representation learning from triplet comparisons
TLDR
The fundamental problem of representation learning from a new perspective is discussed, and a fast algorithm based on DNNs is provided that constructs a Euclidean representation for the items, using solely the answers to the above-mentioned triplet comparisons.
Learning Universal Embeddings from Attributes
TLDR
A multi-task framework to learn universal embeddings by mapping objects to different subspaces, each of which corresponds to one particular attribute and is supervised by triplet similarity; the framework achieves strong results for attribute prediction, low-shot generalization, and off-task recognition.
Efficient Data Analytics on Augmented Similarity Triplets
TLDR
This work gives an efficient method of augmenting the triplet data by utilizing additional implicit information inferred from the existing data, and proposes a novel set of triplet-based algorithms for common supervised and unsupervised machine learning tasks.
Conditional Similarity Networks
TLDR
This work proposes Conditional Similarity Networks (CSNs) that learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarities.
Bundle Optimization for Multi-aspect Embedding
TLDR
This paper presents a method for learning the semantic similarity among images, inferring their latent aspects and embedding them into multiple spaces corresponding to those aspects. The approach significantly outperforms state-of-the-art multi-embedding approaches on various datasets and scales well to large multi-aspect similarity measures.

References

Showing 1-10 of 29 references
Cost-Effective HITs for Relative Similarity Comparisons
TLDR
It is shown that, rather than changing the sampling algorithm, simple changes to the crowdsourcing UI can lead to much higher-quality embeddings, and best practices are proposed for creating cost-effective human intelligence tasks for collecting triplets.
Stochastic triplet embedding
TLDR
A new technique called t-Distributed Stochastic Triplet Embedding (t-STE) is introduced that collapses similar points and repels dissimilar points in the embedding, even when all triplet constraints are satisfied.
Adaptively Learning the Crowd Kernel
TLDR
An algorithm is introduced that, given n objects, learns a similarity matrix over all n² pairs from crowdsourced data alone, and SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains.
Similarity Component Analysis
TLDR
This paper proposes Similarity Component Analysis (SCA), a probabilistic graphical model that discovers latent components from data, attains significantly better prediction accuracy than competing methods, and can be instrumental in exploratory analysis of data.
Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise
TLDR
A probabilistic model is presented and it is demonstrated that the model outperforms the commonly used "Majority Vote" heuristic for inferring image labels, and is robust to both noisy and adversarial labelers.
The Crowd-Median Algorithm
TLDR
This paper considers the problem of computing a centroid of a data set, a key component in many data-analysis applications such as clustering, using a very simple human intelligence task (HIT), and proposes a human-computation-based variant of the k-means clustering algorithm.
Visualizing Similarity Data with a Mixture of Maps
We show how to visualize a set of pairwise similarities between objects by using several different two-dimensional maps, each of which captures different aspects of the similarity structure. When the…
Eliminating Spammers and Ranking Annotators for Crowdsourced Labeling Tasks
TLDR
An empirical Bayesian algorithm called SpEM is proposed that iteratively eliminates the spammers and estimates the consensus labels based only on the good annotators and is motivated by defining a spammer score that can be used to rank the annotators.
Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds
TLDR
Locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data, is described and several extensions that enhance its performance are discussed.
Generalized Non-metric Multidimensional Scaling
TLDR
It is argued that this setting is more natural in some experimental contexts, and an algorithm based on convex optimization techniques is proposed to solve the non-metric multidimensional scaling problem in which only a set of order relations of the form d_ij < d_kl is provided.