Zero-Shot Task Transfer

@inproceedings{Pal2019ZeroShotTT,
  title={Zero-Shot Task Transfer},
  author={Arghya Pal and Vineeth N. Balasubramanian},
  booktitle={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019},
  pages={2184--2193}
}
In this work, we present a novel meta-learning algorithm that regresses model parameters for novel tasks for which no ground truth is available (zero-shot tasks). [...] To the best of our knowledge, this is the first such effort on zero-shot learning in the task space.
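The core idea — regressing the parameters of an unseen task from the parameters of known tasks and inter-task correlations — can be illustrated with a minimal sketch. This is not the paper's actual network (TTNet); the parameter matrix and the `task_corr` vector below are hypothetical stand-ins for what the paper learns.

```python
import numpy as np

# Illustrative sketch, NOT the paper's method: estimate parameters of a
# zero-shot task as a correlation-weighted combination of the parameter
# vectors of known tasks. All numbers here are made up for illustration.

known_params = np.array([[1.0, 0.0],    # parameters learned for task A
                         [0.0, 1.0],    # parameters learned for task B
                         [1.0, 1.0]])   # parameters learned for task C

# assumed correlation of the zero-shot task with each known task
task_corr = np.array([0.5, 0.3, 0.2])

# normalize correlations into convex weights and regress the parameters
weights = task_corr / task_corr.sum()
zero_shot_params = weights @ known_params
print(zero_shot_params)  # -> [0.7 0.5]
```

In the paper this simple weighted combination is replaced by a learned meta-regressor, but the sketch shows the shape of the problem: no ground-truth labels for the new task are ever used, only known-task parameters and task relationships.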
Citations

Task Aligned Generative Meta-learning for Zero-shot Learning
This work proposes a novel Task-aligned Generative Meta-learning model for Zero-shot learning (TGMZ), aiming to mitigate potentially biased training and to enable meta-ZSL to accommodate real-world datasets that contain diverse distributions.
Zero-shot task adaptation by homoiconic meta-mapping
This work draws inspiration from functional programming and recent work in meta-learning to propose a class of Homoiconic Meta-Mapping approaches that represent data points and tasks in a shared latent space and learn to infer transformations of that space.
Transforming task representations to allow deep learning models to perform novel tasks
This work proposes meta-mappings, higher-order tasks that transform basic task representations; these provide insight into a possible computational basis of intelligent adaptability and offer a framework for modeling cognitive flexibility and building more flexible artificial intelligence.
Attribute-Modulated Generative Meta Learning for Zero-Shot Classification
An Attribute-Modulated generAtive meta-model for Zero-shot learning (AMAZ) is proposed that outperforms state-of-the-art methods by 3.8% and 5.1% in ZSL and generalized ZSL settings, respectively, demonstrating the superiority of the method.
A Meta-Learning Framework for Generalized Zero-Shot Learning
This paper proposes a meta-learning based generative model that integrates model-agnostic meta-learning with a Wasserstein GAN (WGAN) to handle $(i)$ and $(ii)$, and uses a novel task distribution to handle $(iii)$.
LSM: Learning Subspace Minimization for Low-Level Vision
This work replaces the heuristic regularization term with a data-driven learnable subspace constraint, and preserves the data term to exploit domain knowledge derived from the first principles of a task, in order to solve the energy minimization problem in low-level vision tasks.
Learning Across Tasks and Domains
A novel adaptation framework is introduced that can operate across both tasks and domains; it is complementary to existing domain adaptation techniques and extends them to cross-task scenarios, providing additional performance gains.
Introducing Generative Models to Facilitate Multi-Task Visual Learning
Generative modeling has recently shown great promise in computer vision, but it has mostly focused on synthesizing visually realistic images. During my graduate study and research, motivated by …
Generative Modeling for Multi-task Visual Learning
This paper considers a novel problem of learning a shared generative model that is useful across various visual perception tasks, and proposes a general multi-task oriented generative modeling (MGM) framework by coupling a discriminative multi-task network with a generative network.
Task Affinity with Maximum Bipartite Matching in Few-Shot Learning
This work proposes an asymmetric affinity score for representing the complexity of utilizing the knowledge of one task for learning another, and uses this score to propose a novel algorithm for the few-shot learning problem.

References

Showing 1–10 of 61 references
Using Task Features for Zero-Shot Knowledge Transfer in Lifelong Learning
It is shown that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of dynamical control problems.
Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
A new RL problem is introduced in which the agent should learn to execute sequences of instructions after learning useful skills that solve subtasks, and a new neural architecture in the meta controller that learns when to update the subtask is proposed, making learning more efficient.
Taskonomy: Disentangling Task Transfer Learning
This work proposes a fully computational approach for modeling the structure of the space of visual tasks by finding (first- and higher-order) transfer learning dependencies across a dictionary of twenty-six 2D, 2.5D, 3D, and semantic tasks in a latent space, and provides a computational taxonomic map for task transfer learning.
Training Complex Models with Multi-Task Weak Supervision
This work shows that by solving a matrix completion-style problem, it can recover the accuracies of these multi-task sources given their dependency structure, but without any labeled data, leading to higher-quality supervision for training an end model.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
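The mechanics of MAML — an inner loop that adapts parameters to each task and an outer loop that updates the shared initialization — can be sketched on a toy problem. This is a first-order simplification on 1-D linear regression; the learning rates and task setup are illustrative assumptions, not values from the paper.

```python
import numpy as np

# First-order MAML-style sketch on toy 1-D linear regression tasks
# (y = s * x with a task-specific slope s). Illustrative only.

def loss_grad(w, x, y):
    """Gradient of mean squared error for the model y_hat = w * x."""
    return np.mean(2.0 * (w * x - y) * x)

def maml_step(w, tasks, inner_lr=0.01, outer_lr=0.001):
    """One meta-update: adapt to each task, then average the adapted gradients."""
    meta_grad = 0.0
    for x, y in tasks:
        # inner loop: a single gradient step adapts w to this task
        w_adapted = w - inner_lr * loss_grad(w, x, y)
        # outer loop (first-order approximation): gradient at adapted parameters
        meta_grad += loss_grad(w_adapted, x, y)
    return w - outer_lr * meta_grad / len(tasks)

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=20), rng.normal(size=20)
tasks = [(x1, 2.0 * x1), (x2, 3.0 * x2)]  # two tasks with slopes 2 and 3

w = 0.0
for _ in range(2000):
    w = maml_step(w, tasks)
# w settles between the two task optima, so a single inner-loop
# gradient step can quickly adapt it to either task
```

The full algorithm differentiates through the inner update (second-order terms) and applies to arbitrary gradient-trained models; the sketch keeps only the two-loop structure.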
Learning to Model the Tail
This work presents results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting that significantly outperform common heuristics such as data resampling or reweighting.
Learning Transferrable Representations for Unsupervised Domain Adaptation
A unified deep learning framework is proposed in which the representation, cross-domain transformation, and target label inference are all jointly optimized in an end-to-end fashion for unsupervised domain adaptation.
Socratic Learning: Augmenting Generative Models to Incorporate Latent Subsets in Training Data
Socratic learning is presented, a paradigm that uses feedback from a corresponding discriminative model to automatically identify subsets in the training data and augments the structure of the generative model accordingly; without any ground truth labels, the augmented generative model reduces error by up to 56.06% on a relation extraction task.
Marr Revisited: 2D-3D Alignment via Surface Normal Prediction
A skip-network model built on the pre-trained Oxford VGG convolutional neural network (CNN) for surface normal prediction achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset and recovers fine object detail compared to previous methods.
Convex multi-task feature learning
It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem for which there is an iterative algorithm that converges to an optimal solution.