Boosting for Comparison-Based Learning

@inproceedings{Perrot2019BoostingFC,
  title={Boosting for Comparison-Based Learning},
  author={Micha{\"e}l Perrot and Ulrike von Luxburg},
  booktitle={IJCAI},
  year={2019}
}
We consider the problem of classification in a comparison-based setting: given a set of objects, we only have access to triplet comparisons of the form "object A is closer to object B than to object C." In this paper we introduce TripletBoost, a new method that can learn a classifier just from such triplet comparisons. The main idea is to aggregate the triplet information into weak classifiers, which can subsequently be boosted to a strong classifier. Our method has two main advantages: (i…
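
The abstract only sketches the main idea, so here is a rough, hypothetical illustration of how triplet answers can drive boosting: each weak classifier is indexed by a pair of reference objects (j, k), answers "is x closer to j or to k?" straight from the available triplets, and votes for the label of the closer reference; an AdaBoost-style update then combines many such votes. This is a minimal sketch under those assumptions, not the authors' exact weak learner or update rule, and all names and array layouts below are invented for the example.

import numpy as np

def triplet_boost(triplet_answers, ref_labels, y, n_rounds=50):
    """Hypothetical sketch of boosting over triplet-derived weak classifiers.

    triplet_answers: (n_samples, n_pairs) array with entries in {+1, -1},
                     answering "is object x_i closer to j than to k?" for each
                     reference pair (j, k).
    ref_labels:      (n_pairs, 2) array with the labels (in {+1, -1}) of j and k.
    y:               (n_samples,) true labels in {+1, -1}.
    """
    n, m = triplet_answers.shape
    w = np.full(n, 1.0 / n)                       # distribution over examples
    alphas, chosen = [], []
    for _ in range(n_rounds):
        # each candidate weak classifier votes for the label of the closer reference
        preds = np.where(triplet_answers == 1, ref_labels[:, 0], ref_labels[:, 1])
        errs = (w[:, None] * (preds != y[:, None])).sum(axis=0)
        best = int(np.argmin(errs))
        err = max(float(errs[best]), 1e-12)
        if err >= 0.5:                            # no weak classifier beats chance
            break
        alpha = 0.5 * np.log((1 - err) / err)     # standard AdaBoost weight
        w *= np.exp(-alpha * y * preds[:, best])  # upweight misclassified examples
        w /= w.sum()
        alphas.append(alpha)
        chosen.append(best)
    return alphas, chosen
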
Classification from Triplet Comparison Data
TLDR
This letter proposes an unbiased estimator for the classification risk under the empirical risk minimization framework, which inherently has the advantage that any surrogate loss function and any model, including neural networks, can be easily applied.
Efficient Data Analytics on Augmented Similarity Triplets
TLDR
This work gives an efficient method of augmenting triplet data by utilizing additional implicit information inferred from the existing data, and proposes a novel set of algorithms for common supervised and unsupervised machine learning tasks based on triplets.
Comparison-based centrality measures
TLDR
This paper systematically investigates comparison-based centrality measures on triplets, theoretically analyzes their underlying Euclidean notion of centrality, and proposes a third measure, which is a natural compromise between these two.
Partitioned K-nearest neighbor local depth for scalable comparison-based learning
TLDR
Partitioned Nearest Neighbors Local Depth (PaNNLD), a computationally tractable variant of PaLD leveraging the K-nearest-neighbor digraph on S, is introduced, and the probability of a randomization-induced error of δ in PaNNLD is shown to be no more than 2e^(-δK).
Learning from Aggregate Observations
TLDR
This paper presents a probabilistic framework that is applicable to a variety of aggregate observations, e.g., pairwise similarity for classification and mean/difference/rank observation for regression.

References

SHOWING 1-10 OF 52 REFERENCES
Multiview Triplet Embedding: Learning Attributes in Multiple Maps
TLDR
The Multiview Triplet Embedding (MVTE) algorithm is proposed, which produces a number of low-dimensional maps, each corresponding to one of the hidden attributes in a set of relative distance judgments given in the form of triplets.
Stochastic triplet embedding
TLDR
A new technique called t-Distributed Stochastic Triplet Embedding (t-STE) is introduced that collapses similar points and repels dissimilar points in the embedding, even when all triplet constraints are satisfied.
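
Since the t-STE objective is compact, a hedged sketch may help make the summary above concrete: triplets (i, j, k) meaning "i is closer to j than to k" are scored with a Student-t kernel and the embedding is fit by maximizing the log-likelihood. The degrees-of-freedom value, the optimizer, and the toy data below are illustrative assumptions, not the paper's experimental setup.

import numpy as np
from scipy.optimize import minimize

def t_ste_loss(x_flat, triplets, n, d, alpha=1.0):
    """Negative log-likelihood of triplets (i closer to j than to k)
    under a Student-t kernel, as in the t-STE objective."""
    X = x_flat.reshape(n, d)
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    d_ij = np.sum((X[i] - X[j]) ** 2, axis=1)
    d_ik = np.sum((X[i] - X[k]) ** 2, axis=1)
    p_ij = (1 + d_ij / alpha) ** (-(alpha + 1) / 2)
    p_ik = (1 + d_ik / alpha) ** (-(alpha + 1) / 2)
    return -np.sum(np.log(p_ij / (p_ij + p_ik) + 1e-12))

# toy usage: embed 10 objects in 2-D from 30 synthetic triplets
rng = np.random.default_rng(0)
n, d = 10, 2
triplets = rng.integers(0, n, size=(30, 3))
res = minimize(t_ste_loss, rng.normal(size=n * d), args=(triplets, n, d))
embedding = res.x.reshape(n, d)
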
Improved Boosting Algorithms Using Confidence-rated Predictions
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a …
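
As a hedged illustration of the confidence-rated setting described here (function name and interface are hypothetical): weak hypotheses output real-valued confidences rather than hard labels, and the boosting distribution is updated multiplicatively so that confidently correct examples lose weight fastest.

import numpy as np

def confidence_rated_update(weights, y, h_values):
    """One round's reweighting when the weak hypothesis h outputs real confidences.

    weights:  current distribution over examples (sums to 1)
    y:        labels in {-1, +1}
    h_values: real-valued predictions h(x_i); the sign is the label, the magnitude the confidence
    """
    new_w = weights * np.exp(-y * h_values)   # small if sign(h) == y and |h| is large
    return new_w / new_w.sum()                # normalization constant Z_t
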
Active Classification with Comparison Queries
TLDR
An extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class is studied, and a combinatorial dimension is identified that captures the query complexity when each additional query is determined by O(1) examples.
A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting
TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
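
The multiplicative weight-update rule analyzed in this paper is easy to state in code; the sketch below is a minimal Hedge-style loop under the assumption of per-round expert losses in [0, 1], with an illustrative choice of the parameter beta.

import numpy as np

def hedge(loss_rounds, n_experts, beta=0.9):
    """Play the normalized weight vector each round; shrink weights multiplicatively."""
    w = np.ones(n_experts)
    total_loss = 0.0
    for losses in loss_rounds:            # losses: (n_experts,) array with values in [0, 1]
        p = w / w.sum()                   # distribution over experts played this round
        total_loss += float(p @ losses)   # learner's expected loss
        w *= beta ** losses               # multiplicative (Littlestone-Warmuth) update
    return total_loss, w / w.sum()
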
Multi-class AdaBoost ∗
TLDR
A new algorithm is proposed that naturally extends the original AdaBoost algorithm to the multi-class case without reducing it to multiple two-class problems; it is extremely easy to implement and highly competitive with the best currently available multi-class classification methods.
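
Assuming this entry refers to the SAMME algorithm usually associated with it, the key change over binary AdaBoost is a log(K - 1) term in the classifier weight, so a weak learner only needs error below 1 - 1/K rather than below 1/2; a hedged one-line sketch:

import numpy as np

def samme_alpha(err, n_classes):
    """Classifier weight in the multi-class extension (SAMME-style)."""
    return np.log((1 - err) / max(err, 1e-12)) + np.log(n_classes - 1)

# with K = 5 classes, even a 70% error rate still earns a positive weight:
print(samme_alpha(0.7, 5))   # ≈ 0.54 > 0
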
Adaptively Learning the Crowd Kernel
TLDR
An algorithm that, given n objects, learns a similarity matrix over all n² pairs, from crowdsourced data alone is introduced, and SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains.
Boosting the margin: A new explanation for the effectiveness of voting methods
TLDR
It is shown that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error.
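
For context, the (binary) voting margin analyzed in this line of work is the weighted fraction of votes for the correct label minus the fraction for the wrong one; the hedged sketch below computes it for a fixed ensemble (names and array layout are illustrative).

import numpy as np

def voting_margins(alphas, weak_preds, y):
    """Per-example margins y * f(x) with f(x) = sum_t alpha_t h_t(x) / sum_t alpha_t.

    alphas:     (T,) nonnegative weights of the weak classifiers
    weak_preds: (T, n) predictions in {-1, +1}
    y:          (n,) true labels in {-1, +1}
    """
    votes = (alphas[:, None] * weak_preds).sum(axis=0)
    return y * votes / np.sum(alphas)     # values in [-1, 1]; larger is better
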