A Unified Batch Selection Policy for Active Metric Learning

@inproceedings{Kumari2021AUB,
  title={A Unified Batch Selection Policy for Active Metric Learning},
  author={Priyadarshini Kumari and Siddhartha Chaudhuri and Vivek S. Borkar and Subhasis Chaudhuri},
  booktitle={ECML/PKDD},
  year={2021}
}
Active metric learning is the problem of incrementally selecting high-utility batches of training data (typically, ordered triplets) to annotate, in order to progressively improve a learned model of a metric over some input domain as rapidly as possible. Standard approaches, which independently assess the informativeness of each triplet in a batch, are susceptible to highly correlated batches with many redundant triplets and hence low overall utility. While a recent work [20] proposes batch…
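To make the setup concrete, the following is a minimal sketch, in PyTorch, of triplet-based metric learning combined with the kind of per-triplet selection policy the abstract criticizes. The network, margin value, and margin-violation uncertainty heuristic are illustrative assumptions, not the paper's unified policy.

# Minimal sketch (assumed setup, not the paper's method): triplet metric
# learning plus a naive selector that scores each candidate independently.
import torch
import torch.nn.functional as F

def embed(net, x):
    # L2-normalized embeddings, so squared distances are bounded on the sphere.
    return F.normalize(net(x), dim=-1)

def triplet_margin_loss(net, anchors, positives, negatives, margin=0.2):
    # Standard triplet loss: pull anchor-positive pairs together and push
    # anchor-negative pairs apart by at least `margin`.
    a, p, n = embed(net, anchors), embed(net, positives), embed(net, negatives)
    d_ap = (a - p).pow(2).sum(dim=-1)
    d_an = (a - n).pow(2).sum(dim=-1)
    return F.relu(d_ap - d_an + margin).mean()

def select_batch_independently(net, pool, batch_size):
    # Naive policy: score every candidate triplet on its own (here, by how
    # close it sits to the margin boundary, a common uncertainty proxy) and
    # take the top-k. Nothing discourages near-duplicate triplets, so the
    # chosen batch can be highly correlated and largely redundant.
    anchors, positives, negatives = pool
    with torch.no_grad():
        a = embed(net, anchors)
        p = embed(net, positives)
        n = embed(net, negatives)
        d_ap = (a - p).pow(2).sum(dim=-1)
        d_an = (a - n).pow(2).sum(dim=-1)
        uncertainty = -(d_an - d_ap).abs()  # near-boundary triplets score highest
    return uncertainty.topk(batch_size).indices  # indices to send for annotation

# Toy usage: a random linear embedder and a synthetic candidate pool.
net = torch.nn.Linear(16, 8)
pool = tuple(torch.randn(100, 16) for _ in range(3))
chosen = select_batch_independently(net, pool, batch_size=10)
loss = triplet_margin_loss(net, *(t[chosen] for t in pool))

A batch-aware policy in the spirit the abstract advocates would instead add a term that rewards diversity (low correlation) among the selected triplets, rather than scoring each triplet in isolation.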

References

Showing 1-10 of 35 references.
Bayesian Batch Active Learning as Sparse Subset Approximation
Proposes a novel Bayesian batch active learning approach that mitigates the shortcomings of standard greedy procedures for large-scale regression and classification tasks, derives interpretable closed-form solutions akin to existing active learning procedures for linear models, and generalizes to arbitrary models using random projections.
BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning
Develops BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, used as an acquisition function to select multiple informative points.
Deep Active Learning: Unified and Principled Method for Query and Training
Proposes a unified and principled method for both the querying and training processes in deep batch active learning, with theoretical insights drawn from modeling the interactive procedure in active learning as distribution matching via the Wasserstein distance.
Active Perceptual Similarity Modeling with Auxiliary Information
Considers the problem of actively learning from triplets (finding which queries are most useful for learning) and introduces an active learning scheme to find queries that are informative for quickly learning both the relevant aspects of auxiliary data and the directly-learned similarity components.
Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds
Designs a new algorithm (BADGE) for batch active learning with deep neural network models that samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space; while other approaches sometimes succeed only for particular batch sizes or architectures, BADGE consistently performs as well or better, making it a versatile option for practical active learning problems.
Distance Metric Learning with Application to Clustering with Side-Information
Presents an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝ^n, learns a distance metric over ℝ^n that respects these relationships.
Deep Metric Learning Beyond Binary Supervision
Proposes a new triplet loss that allows distance ratios in the label space to be preserved in the learned metric space, enabling the model to learn the degree of similarity rather than just the order.
Variational Adversarial Active Learning
Presents a pool-based semi-supervised active learning algorithm that implicitly learns the sampling mechanism in an adversarial manner, learning an effective low-dimensional latent space in large-scale settings and providing a computationally efficient sampling method.
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Defines the problem of active learning as core-set selection, i.e., choosing a set of points such that a model learned over the selected subset is competitive for the remaining data points, and presents a theoretical result characterizing the performance of any selected subset using the geometry of the data points.
Adam: A Method for Stochastic Optimization
Introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate comparable to the best known results under the online convex optimization framework.
...