Socially Supervised Representation Learning: The Role of Subjectivity in Learning Efficient Representations
@inproceedings{Taylor2021SociallySR,
  title     = {Socially Supervised Representation Learning: The Role of Subjectivity in Learning Efficient Representations},
  author    = {Julius Taylor and Eleni Nisioti and Cl{\'e}ment Moulin-Frier},
  booktitle = {Adaptive Agents and Multi-Agent Systems},
  year      = {2021}
}
Despite its rise as a prominent solution to the data inefficiency of today’s machine learning models, self-supervised learning has yet to be studied from a purely multi-agent perspective. In this work, we propose that aligning internal subjective representations, which naturally arise in a multi-agent setup where agents receive partial observations of the same underlying environmental state, can lead to more data-efficient representations. We propose that multi-agent environments, where agents…
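To make the core idea concrete, below is a minimal, hypothetical sketch of cross-agent representation alignment in PyTorch: two agent encoders receive different partial views of the same underlying state, and an InfoNCE-style contrastive loss pulls their latent representations together. The encoder architecture, masking scheme, and choice of loss are illustrative assumptions for this sketch, not the paper's actual method.

```python
# Hypothetical sketch: aligning the "subjective" representations of two
# agents that observe partial views of the same underlying state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AgentEncoder(nn.Module):
    """Maps a partial observation to a latent representation."""
    def __init__(self, obs_dim: int, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def alignment_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                   temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: the two agents' encodings of the same state are
    positives; encodings of other states in the batch act as negatives."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))    # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Toy usage: each agent sees a randomly masked view of the same state.
obs_dim, batch = 16, 64
agent_a, agent_b = AgentEncoder(obs_dim), AgentEncoder(obs_dim)
state = torch.randn(batch, obs_dim)
view_a = state * (torch.rand(batch, obs_dim) > 0.3)  # partial observation A
view_b = state * (torch.rand(batch, obs_dim) > 0.3)  # partial observation B
loss = alignment_loss(agent_a(view_a), agent_b(view_b))
loss.backward()
```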