Socially Supervised Representation Learning: The Role of Subjectivity in Learning Efficient Representations

Julius Taylor, Eleni Nisioti, and Clément Moulin-Frier. In: Adaptive Agents and Multi-Agent Systems.
Despite its rise as a prominent solution to the data inefficiency of today’s machine learning models, self-supervised learning has yet to be studied from a purely multi-agent perspective. In this work, we propose that aligning internal subjective representations, which naturally arise in a multi-agent setup where agents receive partial observations of the same underlying environmental state, can lead to more data-efficient representations. We propose that multi-agent environments, where agents… 
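The core idea can be illustrated with a minimal sketch (the linear encoders, dimensions, and noise model below are hypothetical, not taken from the paper): two agents encode partial observations of the same underlying state, and an alignment loss penalizes disagreement between their subjective representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear encoders for two agents (sizes are illustrative).
W_a = rng.normal(size=(4, 8))
W_b = rng.normal(size=(4, 8))

def encode(W, obs):
    return W @ obs

def alignment_loss(z_a, z_b):
    # Penalize disagreement between the agents' subjective
    # representations of the same underlying state.
    return float(np.mean((z_a - z_b) ** 2))

state = rng.normal(size=8)
# Each agent receives a partial (here: noisy) observation of the state.
obs_a = state + 0.1 * rng.normal(size=8)
obs_b = state + 0.1 * rng.normal(size=8)

loss = alignment_loss(encode(W_a, obs_a), encode(W_b, obs_b))
```

Minimizing such a loss across agents would push their internal codes toward a shared, observer-independent representation of the environment.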




Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style

This work introduces Causal3DIdent, a dataset of high-dimensional, visually complex images with rich causal dependencies, and uses it to study the effect of data augmentations performed in practice; numerical simulations with dependent latent variables are consistent with the theory.

Agent-Centric Representations for Multi-Agent Reinforcement Learning

This work studies two ways of incorporating an agent-centric inductive bias into an RL algorithm, and evaluates these approaches on the Google Research Football environment as well as DeepMind Lab 2D.

Learning to Communicate with Deep Multi-Agent Reinforcement Learning

By embracing deep neural networks, this work is able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability.

Pretraining Representations for Data-Efficient Reinforcement Learning

This work uses unlabeled data to pretrain an encoder which is then fine-tuned on a small amount of task-specific data, and employs a combination of latent dynamics modelling and unsupervised goal-conditioned RL to encourage representations that capture diverse aspects of the underlying MDP.

Representation Learning with Contrastive Predictive Coding

This work proposes a universal unsupervised learning approach to extract useful representations from high-dimensional data, which it calls Contrastive Predictive Coding, and demonstrates that the approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
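The contrastive objective underlying Contrastive Predictive Coding is the InfoNCE loss. A minimal NumPy sketch follows; the array shapes and the convention that the true pair sits in row 0 are illustrative assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def info_nce(anchor, candidates, temperature=0.1):
    """InfoNCE: cross-entropy of identifying the true pair among candidates.

    anchor: (d,) context representation; candidates: (n, d) with the
    true future sample in row 0 (a layout assumed by this sketch).
    """
    logits = candidates @ anchor / temperature
    logits -= logits.max()                        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
negatives = rng.normal(size=(7, 16))
# Positive sample close to the anchor, negatives drawn at random.
candidates = np.vstack([anchor + 0.01 * rng.normal(size=16), negatives])

loss = info_nce(anchor, candidates)
```

Because the positive candidate is nearly identical to the anchor, its logit dominates and the loss is close to zero.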

Model-Based Reinforcement Learning for Atari

Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm built on video prediction models, is described, and several model architectures are compared, including a novel architecture that yields the best results in this setting.

Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning

This work introduces Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning that performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks.
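BYOL's key mechanism, a slowly moving target network updated as an exponential moving average (EMA) of the online network, can be sketched as follows; the linear "networks" and decay value here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear online/target encoders (illustrative sizes).
online = rng.normal(size=(4, 8))
target = online.copy()
tau = 0.99  # EMA decay; BYOL uses values near 1

def ema_update(target, online, tau):
    # The target parameters track the online parameters slowly;
    # gradients are never propagated through the target branch.
    return tau * target + (1 - tau) * online

def byol_loss(z_online, z_target):
    # Normalized MSE between the online prediction and target projection.
    a = z_online / np.linalg.norm(z_online)
    b = z_target / np.linalg.norm(z_target)
    return float(2 - 2 * np.dot(a, b))

x1, x2 = rng.normal(size=8), rng.normal(size=8)  # two augmented views
loss = byol_loss(online @ x1, target @ x2)
target = ema_update(target, online, tau)
```

The slow target and stop-gradient are what let BYOL avoid representation collapse without any negative pairs.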

Revisiting Self-Supervised Visual Representation Learning

This study revisits numerous previously proposed self-supervised models, conducts a thorough large-scale study, and uncovers multiple crucial insights: standard recipes for CNN design do not always translate to self-supervised representation learning.

Shaping representations through communication: community size effect in artificial learning systems

This work introduces community-based autoencoders in which multiple encoders and decoders collectively learn representations by being randomly paired up on successive training iterations, finding that increasing community sizes reduce idiosyncrasies in the learned codes, resulting in representations that better encode concept categories and correlate with human feature norms.
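The random pairing mechanism can be sketched with tiny linear autoencoders; the dimensions, community size, and linear maps below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 3  # observation and code dimensions (illustrative)

# A community of linear encoders and decoders.
encoders = [rng.normal(size=(k, d)) for _ in range(3)]
decoders = [rng.normal(size=(d, k)) for _ in range(3)]

def reconstruction_loss(E, D, x):
    return float(np.mean((D @ (E @ x) - x) ** 2))

x = rng.normal(size=d)
losses = []
# Each iteration pairs a random encoder with a random decoder, so no
# encoder can rely on the idiosyncrasies of a single decoder.
for _ in range(4):
    E = encoders[rng.integers(len(encoders))]
    D = decoders[rng.integers(len(decoders))]
    losses.append(reconstruction_loss(E, D, x))
```

Training every encoder to be decodable by every decoder is what drives the codes toward a shared, less idiosyncratic representation.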

Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives

Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, manifold learning, and deep learning.