Corpus ID: 222140630

The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings

@article{Meyerson2021TheTO,
  title={The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings},
  author={Elliot Meyerson and Risto Miikkulainen},
  journal={ArXiv},
  year={2021},
  volume={abs/2010.02354}
}
This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. This perspective leads to a machine learning framework in which seemingly unrelated tasks can be solved by a single model, by embedding their input and output variables into a shared space. An…
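To make the variable-embedding idea concrete, below is a minimal sketch of a model in this spirit, written in PyTorch. It is not the authors' architecture; the class name TravelingObserverSketch, the parameters (n_variables, embed_dim, hidden), and the mean-pooling encoder/decoder structure are all illustrative assumptions. Each variable, whether input or output, receives a learned location in a shared space; the model encodes (location, value) pairs for observed variables and decodes a prediction at any queried location.

import torch
import torch.nn as nn

class TravelingObserverSketch(nn.Module):
    """Illustrative sketch only: one model shared across tasks via variable embeddings."""
    def __init__(self, n_variables, embed_dim=8, hidden=64):
        super().__init__()
        # Every input and output variable across all tasks gets a learned
        # "location" in a shared continuous space.
        self.locations = nn.Embedding(n_variables, embed_dim)
        # Encode each (location, measured value) pair into a context vector.
        self.encoder = nn.Sequential(
            nn.Linear(embed_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Decode a prediction for any queried location from the pooled context.
        self.decoder = nn.Sequential(
            nn.Linear(hidden + embed_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, obs_ids, obs_values, query_ids):
        # obs_ids: (batch, n_obs) indices of the variables that were measured
        # obs_values: (batch, n_obs) measured values at those variables
        # query_ids: (batch, n_query) indices of the variables to predict
        obs_loc = self.locations(obs_ids)                               # (B, n_obs, D)
        pairs = torch.cat([obs_loc, obs_values.unsqueeze(-1)], dim=-1)
        context = self.encoder(pairs).mean(dim=1)                       # pool observations
        q_loc = self.locations(query_ids)                               # (B, n_query, D)
        ctx = context.unsqueeze(1).expand(-1, q_loc.size(1), -1)
        return self.decoder(torch.cat([ctx, q_loc], dim=-1)).squeeze(-1)

In a sketch like this, tasks are distinguished only by which variable indices they use, so data from seemingly unrelated tasks can in principle be trained through the same model.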

Citations

Evolution of neural networks
Improving Predictors via Combination Across Diverse Task Categories
  • Kwang In Kim
  • Computer Science
  • ICML
  • 2021
TLDR
This work aligns the heterogeneous domains of different predictors in a shared latent space to facilitate comparisons of predictors independently of the domains on which they were originally defined, and demonstrates that this approach often significantly improves the performance of the initial predictors.

References

SHOWING 1-10 OF 58 REFERENCES
Convex multi-task feature learning
TLDR
It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem, for which there is an iterative algorithm that converges to an optimal solution.
Distral: Robust multitask reinforcement learning
TLDR
This work proposes a new approach for joint training of multiple tasks, which it refers to as Distral (Distill & transfer learning), and shows that the proposed learning process is more robust and more stable, attributes that are critical in deep reinforcement learning.
Modular Universal Reparameterization: Deep Multi-task Learning Across Diverse Domains
TLDR
Deep multi-task learning is extended to the setting where there is no obvious overlap between task architectures, and it is confirmed that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future.
Learning multiple visual domains with residual adapters
TLDR
This paper develops a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains, and introduces the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to simultaneously capture ten very different visual domains and measures their ability to perform well uniformly across them.
A Unified Perspective on Multi-Domain and Multi-Task Learning
TLDR
This framework unifies MDL and MTL and encompasses various classic and recent MTL/MDL algorithms by interpreting them as different ways of constructing semantic descriptors; it also provides an alternative pipeline for zero-shot learning (ZSL).
One Model To Learn Them All
TLDR
It is shown that tasks with less data benefit substantially from joint training with other tasks, while performance on large tasks degrades only slightly if at all, and that adding a block to the model never hurts performance and in most cases improves it on all tasks.
Learning Task Grouping and Overlap in Multi-task Learning
TLDR
This work proposes a framework for multi-task learning that enables selectively sharing information across tasks, based on the assumption that task parameters within a group lie in a low-dimensional subspace, while allowing tasks in different groups to overlap with each other in one or more bases.
A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks
TLDR
A joint many-task model is presented, together with a strategy for successively growing its depth to solve increasingly complex tasks; a simple regularization term allows optimizing all model weights to improve one task's loss without exhibiting catastrophic interference with the other tasks.
Learning with Whom to Share in Multi-task Feature Learning
TLDR
This paper formulates the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining "with whom" each task should share, as a mixed integer program, and provides an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters.
Cross-Stitch Networks for Multi-task Learning
TLDR
This paper proposes a principled approach to learning shared representations in Convolutional Networks for multi-task learning, using a new sharing unit: the "cross-stitch" unit, which combines the activations from multiple networks and can be trained end-to-end.
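The cross-stitch mechanism described in the entry above is simple enough to sketch. The following is a minimal, hedged illustration, not the paper's code: the class name CrossStitchUnit, the scalar mixing weights, and the near-identity initialization are assumptions for clarity.

import torch
import torch.nn as nn

class CrossStitchUnit(nn.Module):
    """Illustrative sketch: learned linear mixing of two task networks' activations."""
    def __init__(self):
        super().__init__()
        # 2x2 mixing matrix, initialized near the identity so each task
        # starts out relying mostly on its own activations.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1], [0.1, 0.9]]))

    def forward(self, x_a, x_b):
        # x_a, x_b: same-shaped activations from task A's and task B's networks.
        out_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        out_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return out_a, out_b  # mixing weights are trained end-to-end with both networks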