Corpus ID: 59291918

Self-Supervised Generalisation with Meta Auxiliary Learning

@inproceedings{Liu2019SelfSupervisedGW,
  title={Self-Supervised Generalisation with Meta Auxiliary Learning},
  author={Shikun Liu and Andrew J. Davison and Edward Johns},
  booktitle={NeurIPS},
  year={2019}
}
Learning with auxiliary tasks can improve the ability of a primary task to generalise. However, this comes at the cost of manually labelling auxiliary data. We propose a new method which automatically learns appropriate labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to any further data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the…
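The abstract describes a two-network design: a label-generation network that produces auxiliary labels, and a multi-task network trained on the primary task plus the generated auxiliary task. As a rough illustration only, here is a minimal PyTorch sketch of that structure; all names, sizes, and the plain joint-training step are my assumptions, and the paper's actual method additionally updates the label-generation network through a meta-objective on primary-task performance.

```python
# Minimal sketch of the two-network idea in the abstract. All names,
# sizes, and the plain joint-training step are illustrative assumptions;
# the paper's actual method (MAXL) additionally updates the
# label-generation network through a meta-objective on the primary
# task's performance, which is omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    """Shared backbone with a primary-task head and an auxiliary head."""
    def __init__(self, in_dim=32, hidden=64, n_primary=10, n_aux=20):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.primary_head = nn.Linear(hidden, n_primary)
        self.aux_head = nn.Linear(hidden, n_aux)

    def forward(self, x):
        h = self.backbone(x)
        return self.primary_head(h), self.aux_head(h)

class LabelGenerator(nn.Module):
    """Predicts a soft auxiliary label per input, replacing manual labelling."""
    def __init__(self, in_dim=32, hidden=64, n_aux=20):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_aux))

    def forward(self, x):
        return F.softmax(self.net(x), dim=-1)

model, labeller = MultiTaskNet(), LabelGenerator()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 32)               # a toy batch
y = torch.randint(0, 10, (8,))       # primary labels (given)
aux_y = labeller(x)                  # auxiliary labels (generated, not manual)

p_logits, a_logits = model(x)
aux_loss = torch.mean(torch.sum(-aux_y * F.log_softmax(a_logits, dim=-1), dim=-1))
loss = F.cross_entropy(p_logits, y) + aux_loss
opt.zero_grad()
loss.backward()
opt.step()
```

Citations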
Auxiliary Learning by Implicit Differentiation
TLDR: A novel framework, AuxiLearn, is proposed that targets both challenges of designing useful auxiliary tasks and of combining auxiliary tasks into a single coherent loss, based on implicit differentiation.
A Novel Multi-Task Self-Supervised Representation Learning Paradigm
Self-supervised learning can be adopted to mine deep semantic information from visual data without large amounts of human-annotated supervision, by using a pretext task to pretrain a model. In this…
Learning to Generalize One Sample at a Time with Self-Supervision
TLDR: This paper proposes to use self-supervised learning to achieve domain generalization and adaptation, treating the learning of regularities from non-annotated data as an auxiliary task and casting the problem within a principled auxiliary learning framework.
Self-Supervised Prototypical Transfer Learning for Few-Shot Classification
TLDR: It is demonstrated that the self-supervised prototypical transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks from the mini-ImageNet dataset, and has performance comparable to supervised methods while requiring orders of magnitude fewer labels.
Image Change Captioning by Learning from an Auxiliary Task
We tackle the challenging task of image change captioning. The goal is to describe the subtle difference between two very similar images by generating a sentence caption. While the recent methods…
Adaptive Transfer Learning on Graph Neural Networks
TLDR: This work proposes a new transfer learning paradigm on GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task, significantly improving performance compared to state-of-the-art methods.
Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation
TLDR: This work proposes AuxSegNet, a novel weakly supervised multi-task framework that leverages saliency detection and multi-label image classification as auxiliary tasks to improve the primary task of semantic segmentation using only image-level ground-truth labels.
Test-Time Fast Adaptation for Dynamic Scene Deblurring via Meta-Auxiliary Learning
In this paper, we tackle the problem of dynamic scene deblurring. Most existing deep end-to-end learning approaches adopt the same generic model for all unseen test images. These solutions are…
SALT: Subspace Alignment as an Auxiliary Learning Task for Domain Adaptation
TLDR: The proposed approach represents a unique fusion of geometric and model-based alignment with gradients from a data-driven primary task, and is a simple framework that achieves results comparable to, and sometimes better than, the state of the art on multiple standard benchmarks.
Invenio: Discovering Hidden Relationships Between Tasks/Domains Using Structured Meta Learning
TLDR: Invenio is presented, a structured meta-learning algorithm that infers semantic similarities between a given set of tasks and provides insights into the complexity of transferring knowledge between different tasks, using challenging task and domain databases.

References

Showing 1–10 of 44 references
End-To-End Multi-Task Learning With Attention
TLDR: The proposed Multi-Task Attention Network (MTAN) consists of a single shared network containing a global feature pool, together with a soft-attention module for each task, which allows learning of task-specific, feature-level attention.
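The summary describes a shared feature pool with a per-task soft-attention module. Below is a minimal sketch of that idea; the shapes, names, and the 1x1-convolution mask are illustrative assumptions, not the published MTAN architecture.

```python
# Sketch of per-task soft attention over a shared feature pool, as
# described in the summary. Layer shapes and names are illustrative
# assumptions, not the published MTAN architecture.
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Learns an element-wise soft mask over shared features for one task."""
    def __init__(self, channels=64):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv2d(channels, channels, 1),
                                  nn.Sigmoid())

    def forward(self, shared_features):
        return self.mask(shared_features) * shared_features

shared = nn.Conv2d(3, 64, 3, padding=1)           # global feature pool (shared)
attn = {t: TaskAttention(64) for t in ("seg", "depth")}

x = torch.randn(2, 3, 32, 32)
features = shared(x)
task_feats = {t: a(features) for t, a in attn.items()}  # task-specific features
```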
Multi-task Self-Supervised Visual Learning
TLDR: The results show that deeper networks work better, and that combining tasks, even via a naïve multi-head architecture, always improves performance.
Auxiliary Tasks in Multi-task Learning
TLDR: The proposed deep multi-task CNN architecture was trained on various combinations of tasks using synMT, and the experiments confirmed that auxiliary tasks can indeed boost network performance, both in terms of final results and training time.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning…
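For context, here is a minimal first-order sketch of the MAML recipe: adapt a shared initialization with one inner gradient step per task, then update the initialization from the post-adaptation loss on held-out task data. The toy regression tasks and the first-order simplification are assumptions made to keep the example short.

```python
# Minimal first-order sketch of the MAML recipe on toy linear-regression
# tasks: one inner gradient step per task, then an outer update of the
# shared initialization from the post-adaptation (query) loss. The tasks
# and the first-order simplification are assumptions for brevity.
import torch

theta = torch.randn(2, requires_grad=True)     # shared initialization [w, b]
outer_opt = torch.optim.SGD([theta], lr=1e-2)
inner_lr = 0.1

def loss_fn(params, x, y):
    return ((params[0] * x + params[1] - y) ** 2).mean()

for step in range(100):
    outer_loss = 0.0
    for _ in range(4):                             # a meta-batch of toy tasks
        w_true = torch.randn(1)
        xs, xq = torch.randn(8), torch.randn(8)    # support / query inputs
        ys, yq = w_true * xs, w_true * xq
        g = torch.autograd.grad(loss_fn(theta, xs, ys), theta)[0]
        adapted = theta - inner_lr * g.detach()    # first-order: no grad through g
        outer_loss = outer_loss + loss_fn(adapted, xq, yq)
    outer_opt.zero_grad()
    outer_loss.backward()                          # updates the initialization
    outer_opt.step()
```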
Fine-Grained Visual Categorization using Meta-Learning Optimization with Sample Selection of Auxiliary Data
TLDR: A new deep FGVC model termed MetaFGNet is proposed, based on a novel regularized meta-learning objective that aims to guide the learning of network parameters so that they are optimal for adapting to the target FGVC task.
Meta-SGD: Learning to Learn Quickly for Few Shot Learning
TLDR: Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.
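A minimal sketch of the Meta-SGD idea from the summary: learn a per-parameter learning rate alongside the initialization, so that adaptation is a single inner step. The toy regression task is an assumption for illustration.

```python
# Sketch of the Meta-SGD idea: learn a per-parameter learning rate
# `alpha` alongside the initialization `theta`, so a learner adapts in a
# single inner step. The toy task is an assumption for brevity.
import torch

theta = torch.randn(2, requires_grad=True)          # meta-learned initialization
alpha = torch.full((2,), 0.1, requires_grad=True)   # meta-learned per-param lr
meta_opt = torch.optim.SGD([theta, alpha], lr=1e-2)

def loss_fn(p, x, y):
    return ((p[0] * x + p[1] - y) ** 2).mean()

for step in range(100):
    w_true = torch.randn(1)
    xs, xq = torch.randn(8), torch.randn(8)         # support / query inputs
    ys, yq = w_true * xs, w_true * xq
    g = torch.autograd.grad(loss_fn(theta, xs, ys), theta, create_graph=True)[0]
    adapted = theta - alpha * g                     # the single Meta-SGD step
    meta_opt.zero_grad()
    loss_fn(adapted, xq, yq).backward()             # updates theta and alpha
    meta_opt.step()
```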
Cross-Stitch Networks for Multi-task Learning
TLDR: This paper proposes a principled approach to learning shared representations in convolutional networks for multi-task learning, based on a new sharing unit, the "cross-stitch" unit, which combines the activations from multiple networks and can be trained end-to-end.
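A minimal sketch of a cross-stitch unit as described: a learned linear mixing of activations from two task networks, trained end-to-end. The scalar 2x2 mixing matrix and near-identity initialization are simplifying assumptions.

```python
# Sketch of a cross-stitch unit: a learned 2x2 linear combination of
# activations from two task networks. Initialization and shapes are
# illustrative assumptions.
import torch
import torch.nn as nn

class CrossStitch(nn.Module):
    def __init__(self):
        super().__init__()
        # Start near-identity: mostly keep own activations, share a little.
        self.weights = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                  [0.1, 0.9]]))

    def forward(self, xa, xb):
        ya = self.weights[0, 0] * xa + self.weights[0, 1] * xb
        yb = self.weights[1, 0] * xa + self.weights[1, 1] * xb
        return ya, yb

stitch = CrossStitch()
xa = torch.randn(4, 64, 16, 16)   # activations from task-A network
xb = torch.randn(4, 64, 16, 16)   # activations from task-B network
ya, yb = stitch(xa, xb)           # mixed activations, learned end-to-end
```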
Optimization as a Model for Few-Shot Learning
Matching Networks for One Shot Learning
TLDR: This work employs ideas from metric learning based on deep neural features, and from recent advances that augment neural networks with external memories, to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
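A minimal sketch of the matching idea from the summary: classify a query by a cosine-similarity-weighted vote over an embedded labelled support set, with no fine-tuning. The embedding network and shapes are toy assumptions.

```python
# Sketch of matching-style classification: embed support and query
# points, then take a similarity-weighted vote over support labels.
# The embedding network and shapes are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))

support_x = torch.randn(25, 32)              # 5-way, 5-shot support set
support_y = torch.arange(5).repeat_interleave(5)
query_x = torch.randn(3, 32)

s = F.normalize(embed(support_x), dim=-1)    # embed and L2-normalize
q = F.normalize(embed(query_x), dim=-1)
attn = F.softmax(q @ s.t(), dim=-1)          # cosine-similarity attention
one_hot = F.one_hot(support_y, 5).float()
probs = attn @ one_hot                       # attention-weighted label vote
pred = probs.argmax(dim=-1)                  # no fine-tuning needed
```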
Multitask Learning with Low-Level Auxiliary Tasks for Encoder-Decoder Based Speech Recognition
TLDR: This work hypothesizes that using intermediate representations as auxiliary supervision at lower levels of deep networks may be a good way of combining the advantages of end-to-end training with those of more traditional pipeline approaches.
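A minimal sketch of that hypothesis: attach auxiliary supervision for low-level targets (e.g. phonemes) to an intermediate encoder layer, alongside the main output head. The shapes, loss weight, and targets are toy assumptions.

```python
# Sketch of low-level auxiliary supervision at an intermediate layer:
# an auxiliary head on the lower encoder output is trained alongside
# the main head. Shapes, targets, and the 0.3 weight are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

lower = nn.Sequential(nn.Linear(40, 128), nn.ReLU())   # lower encoder layers
upper = nn.Sequential(nn.Linear(128, 128), nn.ReLU())  # upper encoder layers
aux_head = nn.Linear(128, 50)    # low-level targets (e.g. phoneme classes)
main_head = nn.Linear(128, 500)  # high-level targets (e.g. subword units)

x = torch.randn(8, 40)
h_low = lower(x)
h_high = upper(h_low)
loss = F.cross_entropy(main_head(h_high), torch.randint(0, 500, (8,))) \
     + 0.3 * F.cross_entropy(aux_head(h_low), torch.randint(0, 50, (8,)))
loss.backward()
```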