Transferability and Hardness of Supervised Classification Tasks
@inproceedings{Tran2019TransferabilityAH,
  title     = {Transferability and Hardness of Supervised Classification Tasks},
  author    = {A. Tran and Cuong V. Nguyen and Tal Hassner},
  booktitle = {2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2019},
  pages     = {1395-1405}
}
We propose a novel approach for estimating the difficulty and transferability of supervised classification tasks. Unlike previous work, our approach is solution agnostic and does not require or assume trained models. Instead, we estimate these values using an information theoretic approach: treating training labels as random variables and exploring their statistics. When transferring from a source to a target task, we consider the conditional entropy between two such variables (i.e., label…
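The key quantity described above, the conditional entropy between source and target training labels, is straightforward to estimate empirically. The sketch below is illustrative only (not the paper's code) and assumes the two tasks provide paired label sequences over the same inputs:

```python
import math
from collections import Counter

def conditional_entropy(source_labels, target_labels):
    """Empirical H(Y_target | Y_source) from paired label sequences.

    Low conditional entropy means the source labels nearly determine
    the target labels, suggesting easy transfer from source to target.
    """
    assert len(source_labels) == len(target_labels)
    n = len(source_labels)
    joint = Counter(zip(source_labels, target_labels))      # counts of (s, t) pairs
    marginal = Counter(source_labels)                       # counts of s alone
    h = 0.0
    for (s, t), count in joint.items():
        p_joint = count / n                                 # P(s, t)
        p_cond = count / marginal[s]                        # P(t | s)
        h -= p_joint * math.log(p_cond)
    return h

# Identical label sequences: source fully determines target.
print(conditional_entropy([0, 0, 1, 1], [0, 0, 1, 1]))  # → 0.0
```

When the two label variables are independent, the estimate approaches the full entropy of the target labels, signaling a hard transfer.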
73 Citations
Towards Estimating Transferability using Hard Subsets
- Computer Science, ArXiv
- 2023
This work proposes HASTE (HArd Subset TransfErability), a new strategy to estimate the transferability of a source model to a particular target task using only a harder subset of the target data, and shows that HASTE-modified metrics are consistently better than or on par with state-of-the-art transferability metrics.
Wasserstein Task Embedding for Measuring Task Similarities
- Computer Science, ArXiv
- 2022
This work leverages optimal transport theory and proposes a novel task embedding for supervised classification that is model-agnostic, training-free, and capable of handling (partially) disjoint label sets.
PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks
- Computer Science, ECCV
- 2022
This paper shows how to derive PACTran, a theoretically grounded family of metrics for pretrained model selection and transferability measurement, from the optimal PAC-Bayesian bound under the transfer learning setting, and empirically evaluates three instantiations of the PACTran metrics.
Transferability-Guided Cross-Domain Cross-Task Transfer Learning
- Computer Science, ArXiv
- 2022
Two novel transferability metrics, F-OTCE and JC-OTCE, are proposed to evaluate how much the source model can benefit the learning of the target task and to learn more transferable representations for cross-domain cross-task transfer learning.
Transferability Estimation using Bhattacharyya Class Separability
- Computer Science, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2022
The Gaussian Bhattacharyya Coefficient (GBC), a novel method for quantifying transferability between a source model and a target dataset, is proposed; it outperforms state-of-the-art transferability metrics on most evaluation criteria in the semantic segmentation setting.
Probing transfer learning with a model of synthetic correlated datasets
- Computer Science, Mach. Learn. Sci. Technol.
- 2022
Focusing on the problem of training two-layer networks in a binary classification setting, this work rethinks a solvable model of synthetic data as a framework for modeling correlation between datasets and shows that this model can capture a range of salient features of transfer learning with real data.
Practical Transferability Estimation for Image Classification Tasks
- Computer Science, ArXiv
- 2021
A practical transferability metric called JC-NCE score is proposed to further improve the cross-domain cross-task transferability estimation performance, which is more efficient than the OTCE score and more accurate than the OT-based NCE score.
OTCE: A Transferability Metric for Cross-Domain Cross-Task Representations
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This work proposes a transferability metric called Optimal Transport based Conditional Entropy (OTCE), and uses optimal transport to estimate domain difference and the optimal coupling between source and target distributions, which is then used to derive the conditional entropy of the target task (task difference).
QUANTIFYING TASK COMPLEXITY THROUGH GENER-
- Computer Science
- 2020
This paper proposes to measure the complexity of a learning task by the minimum expected number of questions that need to be answered to solve the task, and illustrates the usefulness of the proposed measure on various binary image classification tasks using image patches as the query set.
A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning
- Computer Science, NeurIPS
- 2021
A mathematical framework for quantifying the transferability in multi-source transfer learning problems, with both the task similarities and the sample complexity of learning models taken into account, is proposed.
References
Showing 1-10 of 73 references
Task2Vec: Task Embedding for Meta-Learning
- Computer Science, 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
A method to generate vectorial representations of visual classification tasks which can be used to reason about the nature of those tasks and their relations, and is demonstrated to be capable of predicting task similarities that match the authors' intuition about semantic and taxonomic relations between different visual tasks.
Exploring the Limits of Weakly Supervised Pretraining
- Computer Science, ECCV
- 2018
This paper presents a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images and shows improvements on several image classification and object detection tasks, and reports the highest ImageNet-1k single-crop, top-1 accuracy to date.
Regularized Learning for Domain Adaptation under Label Shifts
- Computer Science, ICLR
- 2019
We propose Regularized Learning under Label shifts (RLLS), a principled and a practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target…
Taskonomy: Disentangling Task Transfer Learning
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes a fully computational approach for modeling the structure of the space of visual tasks by finding (first- and higher-order) transfer learning dependencies across a dictionary of twenty-six 2D, 2.5D, 3D, and semantic tasks in a latent space, and provides a computational taxonomic map for task transfer learning.
A theory of learning from different domains
- Computer Science, Machine Learning
- 2009
A classifier-induced divergence measure is introduced that can be estimated from finite, unlabeled samples from the domains, and the work shows how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class.
Toward Understanding Catastrophic Forgetting in Continual Learning
- Computer Science, ArXiv
- 2019
It is shown that error rates are strongly and positively correlated with a task sequence's total complexity for some state-of-the-art algorithms, while in some cases they have no or even negative correlation with sequential heterogeneity.
The Information Complexity of Learning Tasks, their Structure and their Distance
- Computer Science, ArXiv
- 2019
This work introduces an asymmetric distance in the space of learning tasks and a framework to compute their complexity; it is the first to measure complexity in a way that accounts for the effect of the optimization scheme, which is critical in deep learning.
A Survey on Transfer Learning
- Computer Science, IEEE Transactions on Knowledge and Data Engineering
- 2010
The relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift are discussed.
Domain Adaptation: Learning Bounds and Algorithms
- Computer Science, COLT
- 2009
A novel distance between distributions, discrepancy distance, is introduced that is tailored to adaptation problems with arbitrary loss functions, and Rademacher complexity bounds are given for estimating the discrepancy distance from finite samples for different loss functions.
Asymmetric multi-task learning based on task relatedness and loss
- Computer Science, ICML
- 2016
We propose a novel multi-task learning method that minimizes the effect of negative transfer by allowing asymmetric transfer between the tasks based on task relatedness as well as the amount of…