Boosting for Comparison-Based Learning
@inproceedings{Perrot2019BoostingFC,
  title     = {Boosting for Comparison-Based Learning},
  author    = {Micha{\"e}l Perrot and Ulrike von Luxburg},
  booktitle = {IJCAI},
  year      = {2019}
}
We consider the problem of classification in a comparison-based setting: given a set of objects, we only have access to triplet comparisons of the form "object A is closer to object B than to object C." In this paper we introduce TripletBoost, a new method that can learn a classifier from such triplet comparisons alone. The main idea is to aggregate the triplet information into weak classifiers, which can subsequently be boosted to a strong classifier. Our method has two main advantages: (i…
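Since the abstract only sketches the approach, the following is a minimal, hypothetical illustration of the general idea: turn triplet answers into weak classifiers and aggregate them AdaBoost-style. The weak-classifier form used here (a pair of reference objects voting with their own labels) and all names are assumptions for illustration, not the authors' exact construction; the triplet oracle is simulated from Euclidean distances, whereas in the true comparison-based setting it would come from human judgments.

```python
# A sketch of boosting triplet-based weak classifiers; NOT the authors'
# exact TripletBoost algorithm (weak-classifier form is an assumption).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs with labels in {-1, +1}.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
n = len(X)

def triplet(a, j, k):
    """Simulated oracle: is object a closer to object j than to object k?"""
    return np.linalg.norm(X[a] - X[j]) < np.linalg.norm(X[a] - X[k])

def weak_predict(j, k, a):
    """Hypothetical weak classifier indexed by a pair (j, k):
    vote with y[j] if a is closer to j, otherwise with y[k]."""
    return y[j] if triplet(a, j, k) else y[k]

# AdaBoost-style aggregation over randomly drawn weak classifiers.
w = np.full(n, 1.0 / n)   # distribution over training examples
ensemble = []             # kept weak classifiers as (alpha, j, k)
for _ in range(100):
    j, k = rng.choice(n, size=2, replace=False)
    preds = np.array([weak_predict(j, k, a) for a in range(n)])
    err = np.clip(w[preds != y].sum(), 1e-10, 1.0 - 1e-10)
    if err >= 0.5:        # no better than chance: discard
        continue
    alpha = 0.5 * np.log((1.0 - err) / err)
    ensemble.append((alpha, j, k))
    w *= np.exp(-alpha * y * preds)   # up-weight misclassified examples
    w /= w.sum()

def strong_predict(a):
    """Sign of the alpha-weighted vote of the whole ensemble."""
    return np.sign(sum(alpha * weak_predict(j, k, a) for alpha, j, k in ensemble))

print("training accuracy:",
      np.mean([strong_predict(a) == y[a] for a in range(n)]))
```

Note that the boosting loop itself needs no coordinates at all: it touches the data only through the triplet oracle, which is the point of the comparison-based setting.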
5 Citations
Classification from Triplet Comparison Data
- Neural Computation, 2020
This letter proposes an unbiased estimator of the classification risk under the empirical risk minimization framework; the approach has the advantage that any surrogate loss function and any model, including neural networks, can be easily applied.
Efficient Data Analytics on Augmented Similarity Triplets
- arXiv, 2019
This work gives an efficient method of augmenting the triplet data by utilizing additional implicit information inferred from the existing data, and proposes a novel set of algorithms for common supervised and unsupervised machine learning tasks based on triplets.
Comparison-based centrality measures
- Int. J. Data Sci. Anal., 2021
This paper systematically investigates comparison-based centrality measures on triplets, theoretically analyzes their underlying Euclidean notion of centrality, and proposes a third measure that is a natural compromise between the first two.
Partitioned K-nearest neighbor local depth for scalable comparison-based learning
- arXiv, 2021
Partitioned Nearest Neighbors Local Depth (PaNNLD), a computationally tractable variant of PaLD leveraging the K-nearest-neighbors digraph on S, is introduced, and the probability of randomization-induced error δ in PaNNLD is shown to be no more than 2e^(−δK).
Learning from Aggregate Observations
- NeurIPS, 2020
This paper presents a probabilistic framework that is applicable to a variety of aggregate observations, e.g., pairwise similarity for classification and mean/difference/rank observations for regression.