Using Task Descriptions in Lifelong Machine Learning for Improved Performance and Zero-Shot Transfer

@article{Isele2020UsingTD,
  title={Using Task Descriptions in Lifelong Machine Learning for Improved Performance and Zero-Shot Transfer},
  author={David Isele and Mohammad Rostami and Eric Eaton},
  journal={J. Artif. Intell. Res.},
  year={2020},
  volume={67},
  pages={673-704}
}
Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly from as little data as possible. To reduce this burden, we develop a lifelong learning method based on… 
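The method described here (TaDeLL) couples task descriptors with task models through shared dictionaries, so a new task's model can be predicted from its descriptor alone. Below is a minimal sketch of that zero-shot step, assuming pre-learned dictionaries and sklearn's LASSO solver; all names and dimensions are illustrative, not the paper's code.

# Zero-shot model prediction via coupled dictionaries (illustrative sketch).
import numpy as np
from sklearn.linear_model import Lasso

d, m, k = 20, 5, 10          # model dim, descriptor dim, dictionary size
L = np.random.randn(d, k)    # dictionary over task-model parameters (assumed pre-learned)
D = np.random.randn(m, k)    # coupled dictionary over task descriptors

def zero_shot_model(phi, alpha=0.1):
    """Predict a new task's model from its descriptor phi alone."""
    s = Lasso(alpha=alpha, fit_intercept=False).fit(D, phi).coef_  # sparse code from descriptor side
    return L @ s                                                   # decode model with coupled dictionary

theta_new = zero_shot_model(np.random.randn(m))

Because both dictionaries share the sparse code s, fitting s on the descriptor side and decoding on the model side is what enables transfer without any training data for the new task.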
ConTinTin: Continual Learning from Task Instructions
TLDR
This work defines a new learning paradigm, ConTinTin (Continual Learning from Task Instructions), in which a system learns a sequence of new tasks one by one, with each task explained by a piece of textual instruction.
Lifelong Domain Adaptation via Consolidated Internal Distribution
TLDR
An algorithm is developed to address unsupervised domain adaptation (UDA) in continual learning (CL) settings: it consolidates the learned internal distribution to improve model generalization on new domains and uses experience replay to overcome catastrophic forgetting.
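A compact sketch of the experience-replay component mentioned above, assuming an abstract model_step() update function; the buffer size and sampling scheme are our assumptions, not the paper's.

import random

buffer, BUFFER_MAX = [], 1000

def replay_update(model_step, new_batch, replay_k=8):
    """Take one update on new data mixed with a few stored past samples."""
    replayed = random.sample(buffer, min(replay_k, len(buffer)))
    model_step(list(new_batch) + replayed)  # one gradient step on the mix
    for x in new_batch:                     # grow the memory buffer
        if len(buffer) < BUFFER_MAX:
            buffer.append(x)

Mixing a handful of stored samples into every update keeps the consolidated distribution from drifting as new domains arrive.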
Zero-Shot Image Classification Using Coupled Dictionary Embedding
Cognitively Inspired Learning of Incremental Drifting Concepts
TLDR
A computational model is presented that enables a deep neural network to learn new concepts and expand its learned knowledge to new domains incrementally in a continual learning setting; drawing on the Parallel Distributed Processing theory, it encodes abstract concepts in an embedding space as multimodal distributions.
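To make the "multimodal distribution in an embedding space" concrete, here is a toy sketch using a Gaussian mixture as the concept distribution; the simulated embeddings and sklearn GMM are our stand-ins, not the paper's model.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 8))   # stand-in for encoder outputs of one concept
concept = GaussianMixture(n_components=3, random_state=0).fit(embeddings)

# log-likelihood of a new embedded point under the concept's distribution
score = concept.score_samples(rng.normal(size=(1, 8)))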
ACuTE: Automatic Curriculum Transfer from Simple to Complex Environments
TLDR
ACuTE (Automatic Curriculum Transfer from Simple to Complex Environments), a novel framework for the curriculum transfer problem, is presented; the approach is demonstrated to be independent of the learning algorithm used for curriculum generation and to be Sim2Real transferable to a real-world scenario using a physical robot.
Model-Based Novelty Adaptation for Open-World AI
TLDR
This paper introduces Hypothesis-Guided Model Revision over Multiple Aligned Representations (HYDRA), an approach to model-based novelty response that can both play the game and adapt to many types of novelty by making localized modifications to its domain theory.
PAC Imitation and Model-based Batch Learning of Contextual MDPs
TLDR
This work derives sample complexity bounds for direct policy learning (DPL), an imitation-learning-based approach that learns from expert trajectories, and shows that there exist model classes with sample complexity exponential in their statistical complexity.
Hyperparameter Analysis for Derivative Compressive Sampling
TLDR
This work studies the sensitivity of DCS with respect to its algorithmic hyperparameters using a brute-force search and deduces guidelines for setting hyperparameter values for improved signal recovery performance.
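An illustrative brute-force sweep of the kind the TLDR describes, with a synthetic error surface standing in for the actual DCS recovery pipeline:

import itertools
import numpy as np

def recovery_error(lam, iters, rng=np.random.default_rng(0)):
    # stand-in for running recovery and measuring reconstruction error
    return abs(np.log10(lam) + 2.0) + 100.0 / iters + rng.normal(0.0, 0.01)

grid = itertools.product([1e-3, 1e-2, 1e-1], [50, 100, 200])
best_lam, best_iters = min(grid, key=lambda p: recovery_error(*p))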
Learning Transferable Knowledge Through Embedding Spaces
TLDR
This work presents a framework for transferring knowledge between tasks through shared embedding spaces, automating a process that would otherwise be labor-intensive, time-consuming, and therefore expensive.
Detection and Continual Learning of Novel Face Presentation Attacks
TLDR
This paper enables a deep neural network to detect anomalies in observed input data points as potential new types of attacks by suppressing the network's confidence outside the training samples' distribution, and uses experience replay to update the model with knowledge about new types of attacks without forgetting previously learned attack types.
...

References

SHOWING 1-10 OF 60 REFERENCES
Using Task Features for Zero-Shot Knowledge Transfer in Lifelong Learning
TLDR
It is shown that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of dynamical control problems.
Towards Zero-Shot Autonomous Inter-Task Mapping through Object-Oriented Task Description
TLDR
An algorithm is proposed to autonomously estimate a Probabilistic Inter-TAsk Mapping (PITAM) across tasks described in an object-oriented manner, requiring less domain knowledge than a handcrafted inter-task mapping.
Autonomous Cross-Domain Knowledge Transfer in Lifelong Policy Gradient Reinforcement Learning
TLDR
The approach efficiently optimizes a shared repository of transferable knowledge and learns projection matrices that specialize that knowledge to different task domains; it can learn effectively from interleaved task domains and rapidly acquire high performance in new domains.
Learning Inter-Task Transferability in the Absence of Target Task Samples
TLDR
This paper proposes a framework for selecting source tasks in the absence of a known model or target task samples and uses meta-data associated with each task to learn the expected benefit of transfer given a source-target task pair.
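A minimal sketch of the idea of predicting transfer benefit from task meta-data; the feature sizes, the regressor, and pick_source() are illustrative assumptions, not the paper's method.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))   # meta-features of past (source, target) task pairs
y = rng.normal(size=100)        # observed transfer gain for each pair
predictor = RandomForestRegressor(random_state=0).fit(X, y)

def pick_source(target_meta, source_metas):
    """Rank candidate sources by predicted benefit for this target."""
    pairs = np.stack([np.concatenate([s, target_meta]) for s in source_metas])
    return int(np.argmax(predictor.predict(pairs)))

best = pick_source(rng.normal(size=3), [rng.normal(size=3) for _ in range(5)])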
Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation
TLDR
A visual-semantic mapping with better generalisation properties and a dynamic data re-weighting method to prioritise auxiliary data that are relevant to the target classes are introduced and applied to the challenging zero-shot action recognition problem.
Joint Dictionaries for Zero-Shot Learning
TLDR
This paper proposes to learn a visual feature dictionary that has semantically meaningful atoms, learned via joint dictionary learning for the visual domain and the attribute domain, while enforcing the same sparse coding for both dictionaries.
Is Learning The n-th Thing Any Easier Than Learning The First? (S. Thrun, NIPS 1995)
TLDR
It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks.
ELLA: An Efficient Lifelong Learning Algorithm
TLDR
The proposed Efficient Lifelong Learning Algorithm (ELLA) maintains a sparsely shared basis for all task models, transfers knowledge from the basis to learn each new task, and refines the basis over time to maximize performance across all tasks.
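The shared-basis idea can be stated compactly. Following the published ELLA formulation (notation ours), each task model is factored as \theta^{(t)} = L s^{(t)} and the basis L is optimized over all T tasks seen so far:

e_T(L) = \frac{1}{T} \sum_{t=1}^{T} \min_{s^{(t)}} \left\{ \hat{\ell}\big(L s^{(t)}\big) + \mu \lVert s^{(t)} \rVert_1 \right\} + \lambda \lVert L \rVert_F^2,

where \hat{\ell} is task t's empirical loss, \mu encourages sparse codes, and \lambda regularizes the basis; ELLA's contribution is an efficient online approximation of this objective.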
A Survey on Transfer Learning
TLDR
The relationships between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, are discussed.
Feature Learning and Transfer Performance Prediction for Video Reinforcement Learning Tasks via a Siamese Convolutional Neural Network
TLDR
This paper addresses the negative transfer problem with a deep learning method that predicts transfer performance (positive/negative transfer) between two reinforcement learning tasks, and shows the method's effectiveness and superiority over baseline methods.
...