Corpus ID: 240070458

Learning to Ground Multi-Agent Communication with Autoencoders

@article{Lin2021LearningTG,
  title={Learning to Ground Multi-Agent Communication with Autoencoders},
  author={Toru Lin and Minyoung Huh and C. Stauffer and Ser Nam Lim and Phillip Isola},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.15349}
}
Communication requires having a common language, a lingua franca, between agents. This language could emerge via a consensus process, but it may require many generations of trial and error. Alternatively, the lingua franca can be given by the environment, where agents ground their language in representations of the observed world. We demonstrate a simple way to ground language in learned representations, which facilitates decentralized multi-agent communication and coordination. We find that a… 
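A minimal sketch of the grounding idea described above, assuming a PyTorch setup (the module names, dimensions, and architecture below are illustrative, not taken from the paper): each agent fits an autoencoder to its own observations, and the encoder's latent code is broadcast as that agent's message, so the message space is tied to representations of the observed world rather than negotiated purely through task reward.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObservationAutoencoder(nn.Module):
    """Illustrative autoencoder; its latent code doubles as the outgoing message."""
    def __init__(self, obs_dim: int = 64, msg_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, msg_dim))
        self.decoder = nn.Sequential(nn.Linear(msg_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim))

    def forward(self, obs: torch.Tensor):
        message = self.encoder(obs)            # grounded message = latent code
        reconstruction = self.decoder(message)
        return message, reconstruction

ae = ObservationAutoencoder()
obs = torch.randn(8, 64)                       # a batch of one agent's observations
message, recon = ae(obs)
# The reconstruction loss grounds the message space in observations,
# independently of the reinforcement-learning objective.
grounding_loss = F.mse_loss(recon, obs)
grounding_loss.backward()
```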

Citations

Learning to Ground Decentralized Multi-Agent Communication with Contrastive Learning

This work introduces an alternative perspective on the communicative messages sent between agents, treating them as different incomplete views of the environment state, and proposes a simple approach to induce the emergence of a common language by maximizing the mutual information between messages of a given trajectory in a self-supervised manner.
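One way to read this objective, sketched below under stated assumptions (the encoder outputs, batch layout, and temperature are illustrative and not taken from the paper), is as an InfoNCE-style contrastive loss: two messages drawn from the same trajectory form a positive pair, while messages from other trajectories in the batch act as negatives.

```python
import torch
import torch.nn.functional as F

def infonce_message_loss(msgs_a: torch.Tensor, msgs_b: torch.Tensor, temperature: float = 0.1):
    """msgs_a[i] and msgs_b[i] come from the same trajectory (positive pair);
    every other pairing in the batch serves as a negative."""
    a = F.normalize(msgs_a, dim=-1)
    b = F.normalize(msgs_b, dim=-1)
    logits = a @ b.t() / temperature           # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = infonce_message_loss(torch.randn(32, 16), torch.randn(32, 16))
```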

Intent-Grounded Compositional Communication through Mutual Information in Multi-Agent Teams

Information theory is used to introduce information-rich, variational compositional communication that adequately embeds referential information, together with a contrastive objective that grounds communication in intent-specific features.

Towards Human-Agent Communication via the Information Bottleneck Principle

This work demonstrates how fundamental principles believed to characterize human language evolution may inform emergent communication in artificial agents, and shows that VQ-VIB outperforms other discrete communication methods.

An Analysis of Discretization Methods for Communication Learning with Multi-Agent Reinforcement Learning

This paper compares several state-of-the-art discretization methods, as well as two methods not previously used for communication learning, and shows that no single method is best across all environments.
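As a point of reference for what such discretization methods look like, here is one common scheme, straight-through Gumbel-softmax, in a minimal sketch (this is only one of the methods such comparisons cover, and the vocabulary size below is arbitrary): messages are one-hot symbols in the forward pass while gradients flow through the continuous relaxation.

```python
import torch
import torch.nn.functional as F

def discretize_message(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Straight-through Gumbel-softmax: one-hot symbols forward, soft gradients backward."""
    return F.gumbel_softmax(logits, tau=tau, hard=True)

msg = discretize_message(torch.randn(8, 32))   # 8 agents, 32-symbol vocabulary
```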

Goal Consistency: An Effective Multi-Agent Cooperative Method for Multistage Tasks

Experimental results show that MAGIC significantly improves sample efficiency and facilitates cooperation among agents compared with state-of-the-art MARL algorithms in several challenging multistage tasks.

A Survey of Adaptive Multi-Agent Networks and Their Applications in Smart Cities

This survey presents existing techniques from the literature that can be used to implement adaptive multi-agent networks in smart cities, along with insights and directions for future research in this domain.

Towards True Lossless Sparse Communication in Multi-Agent Systems

This paper proposes a method for true lossless sparsity in communication via Information Maximizing Gated Sparse Multi-Agent Communication (IMGS-MAC), which naturally enables lossless sparse communication at lower budgets than prior art.
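An illustrative gating mechanism in the general spirit of sparse communication (this is not the paper's IMGS-MAC architecture; the module and head names are hypothetical): a learned per-agent gate decides at each step whether to transmit anything at all, with a straight-through estimator keeping the gate trainable.

```python
import torch
import torch.nn as nn

class GatedMessenger(nn.Module):
    """Generic gated messaging: a scalar gate per agent decides whether to send."""
    def __init__(self, hidden_dim: int = 32, msg_dim: int = 16):
        super().__init__()
        self.msg_head = nn.Linear(hidden_dim, msg_dim)
        self.gate_head = nn.Linear(hidden_dim, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        message = self.msg_head(hidden)
        gate_prob = torch.sigmoid(self.gate_head(hidden))
        hard_gate = (gate_prob > 0.5).float()
        gate = hard_gate + gate_prob - gate_prob.detach()   # straight-through gate
        return gate * message                                # zero message = stay silent

sent = GatedMessenger()(torch.randn(4, 32))    # messages for 4 agents, some gated off
```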

References

Showing 1-10 of 55 references

Biases for Emergent Communication in Multi-agent Reinforcement Learning

This work introduces inductive biases for positive signalling and positive listening, which ease the learning problem in emergent communication, and applies these methods to a more extended environment, showing that agents with these inductive biases achieve better performance.

Emergence of Communication in an Interactive World with Consistent Speakers

A new model and training algorithm are proposed that utilize the structure of a learned representation space to produce more consistent speakers in the initial phases of training, which stabilizes learning and increases context-independence compared to policy gradient and other competitive baselines.

Emergent Linguistic Phenomena in Multi-Agent Communication Games

It is concluded that intricate properties of language evolution need not depend on complex evolved linguistic capabilities, but can emerge from simple social exchanges between perceptually-enabled agents playing communication games.

Learning to Communicate with Deep Multi-Agent Reinforcement Learning

By embracing deep neural networks, this work is able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability.

TarMAC: Targeted Multi-Agent Communication

This work proposes a targeted communication architecture for multi-agent reinforcement learning, where agents learn both what messages to send and whom to address them to while performing cooperative tasks in partially observable environments, and augments this with a multi-round communication approach.
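A rough sketch of signature-based targeting in this spirit, reduced to a single round (the tensor shapes and function name are illustrative, not TarMAC's exact formulation): each sender emits a key (signature) and a value (message content), and each receiver soft-attends over senders with its own query, so incoming messages are weighted by relevance rather than uniformly pooled.

```python
import torch
import torch.nn.functional as F

def targeted_aggregate(queries: torch.Tensor, keys: torch.Tensor, values: torch.Tensor):
    """queries: (n_receivers, d_k); keys: (n_senders, d_k); values: (n_senders, d_v).
    Returns one attention-weighted aggregated message per receiver."""
    scores = queries @ keys.t() / keys.size(-1) ** 0.5   # (n_receivers, n_senders)
    attn = F.softmax(scores, dim=-1)                     # whom each receiver listens to
    return attn @ values                                 # (n_receivers, d_v)

aggregated = targeted_aggregate(torch.randn(4, 8), torch.randn(4, 8), torch.randn(4, 16))
```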

Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input

It is found that the degree of structure in the input data affects the nature of the emergent protocols, corroborating the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.

Learning Latent Representations to Influence Multi-Agent Interaction

This work proposes a reinforcement-learning-based framework for learning latent representations of an agent's policy, where the ego agent identifies the relationship between its behavior and the other agent's future strategy and leverages these latent dynamics to influence the other agent, purposely guiding it towards policies suitable for co-adaptation.

Emergent Communication in a Multi-Modal, Multi-Step Referential Game

A novel multi-modal, multi-step referential game is proposed, in which the sender and receiver have access to distinct modalities of an object and their information exchange is bidirectional and of arbitrary duration.

Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks

Empirical results on two multi-agent learning problems based on well-known riddles are presented, demonstrating that DDRQN can successfully solve such tasks and discover elegant communication protocols to do so; this is the first time deep reinforcement learning has succeeded in learning communication protocols.

Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments

An adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination is presented.
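The core of this centralized-training, decentralized-execution recipe, in an illustrative sketch (layer sizes and names are mine, not the paper's): each agent acts from its own observation, but the critic used during training conditions on every agent's observation and action.

```python
import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    """Q(o_1..o_N, a_1..a_N): used only during training; actors stay decentralized."""
    def __init__(self, n_agents: int, obs_dim: int, act_dim: int):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.q = nn.Sequential(nn.Linear(joint_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, all_obs: torch.Tensor, all_acts: torch.Tensor) -> torch.Tensor:
        # all_obs: (batch, n_agents, obs_dim); all_acts: (batch, n_agents, act_dim)
        joint = torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=-1)
        return self.q(joint)

critic = CentralizedCritic(n_agents=3, obs_dim=10, act_dim=4)
q_value = critic(torch.randn(8, 3, 10), torch.randn(8, 3, 4))
```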
...