Corpus ID: 209386673

Artificial Agents Learn Flexible Visual Representations by Playing a Hiding Game

Luca Weihs, Aniruddha Kembhavi, Kiana Ehsani, Sarah Pratt, Winson Han, Alvaro Herrasti, Eric Kolve, Dustin Schwenk, Roozbeh Mottaghi, and Ali Farhadi
The ubiquity of embodied gameplay, observed in a wide variety of animal species including turtles and ravens, has led researchers to question what advantages play provides to the animals engaged in it. Mounting evidence suggests that play is critical in developing the neural flexibility needed for creative problem solving and socialization, and that it can improve the plasticity of the medial prefrontal cortex. Comparatively little is known regarding the impact of gameplay upon embodied artificial agents. While…


AllenAct: A Framework for Embodied AI Research

AllenAct is introduced: a modular and flexible learning framework designed around the unique requirements of Embodied AI research, providing first-class support for a growing collection of embodied environments, tasks, and algorithms.

Visual Perspective Taking for Opponent Behavior Modeling

It is suggested that visual behavior modeling and perspective taking skills will play a critical role in the ability of physical robots to fully integrate into real-world multi-agent activities.

SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments

This work proposes that such a struggle to achieve and preserve order might offer a principle for the emergence of useful behaviors in artificial agents, and formalizes this idea into an unsupervised reinforcement learning method called surprise minimizing reinforcement learning (SMiRL).
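The core of the SMiRL idea is to fit a density model to the states the agent has visited and reward it for remaining in high-probability (unsurprising) states. A minimal sketch of that reward, assuming a diagonal Gaussian density model over state vectors (the class and method names here are illustrative, not the authors' code):

```python
import numpy as np

class SurpriseMinimizer:
    """Sketch of surprise-minimizing intrinsic reward: fit a diagonal
    Gaussian to visited states and reward high log-likelihood states."""

    def __init__(self, state_dim):
        self.n = 0
        self.mean = np.zeros(state_dim)
        self.m2 = np.ones(state_dim)  # running sum of squared deviations

    def update(self, state):
        # Welford's online update of the running mean and variance.
        self.n += 1
        delta = state - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (state - self.mean)

    def reward(self, state):
        # Intrinsic reward = log-likelihood of the state under the model;
        # higher for familiar states, lower for surprising ones.
        var = self.m2 / max(self.n, 1)
        return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                            - 0.5 * (state - self.mean) ** 2 / var))
```

After many visits to the same state, that state yields a higher reward than a far-away novel one, which is exactly the pressure toward preserving order that the summary describes.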

Audio-Visual Embodied Navigation

This work develops a multi-modal deep reinforcement learning pipeline to train navigation policies end-to-end from a stream of egocentric audio-visual observations, allowing the agent to discover elements of the geometry of the physical space indicated by the reverberating audio and detect and follow sound-emitting targets.

What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions

Experiments show that the self-supervised representation that encodes interaction and attention cues outperforms a visual-only state-of-the-art method MoCo on a variety of target tasks: scene classification (semantic), action recognition (temporal), depth estimation (geometric), dynamics prediction (physics) and walkable surface estimation (affordance).

Observation of the Evolution of Hide and Seek AI-Final Report

The purpose of this project is to observe the evolution of two artificial agents, a ‘Seeker’ and a ‘Hider’, as they play a simplified version of the game Hide and Seek, in order to better understand how machine-learned searching and hiding patterns develop.

A Cordial Sync: Going Beyond Marginal Policies for Multi-Agent Embodied Tasks

The novel task FurnMove is introduced, in which agents work together to move a piece of furniture through a living room to a goal, and SYNC-policies (synchronize your actions coherently) and CORDIAL (coordination loss) are introduced.

Multi-Agent Embodied Visual Semantic Navigation With Scene Prior Knowledge

A hierarchical decision framework based on semantic mapping, scene prior knowledge, and a communication mechanism is developed to solve visual semantic navigation, a challenging task that requires agents to learn reasonable collaboration strategies for efficient exploration under communication-bandwidth restrictions.

Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V

BLSM first sets bone lengths and joint angles to specify the skeleton, then specifies identity-specific surface variation, and finally bundles them together through linear blend skinning. This allows out-of-the-box integration with standard graphics packages like Unity, facilitating full-body AR effects and image-driven character animation.



Human-level control through deep reinforcement learning

This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

Visual Hide and Seek

This work trains embodied agents to play Visual Hide and Seek, where a prey must navigate in a simulated environment to avoid capture by a predator, and quantitatively analyzes how agent weaknesses, such as slower speed, affect the learned policy.

Children and robots learning to play hide and seek

It is proposed that children are able to learn how to play hide and seek by learning the features and relations of objects and using that information to play a credible game of Hide and Seek.

Mastering the game of Go without human knowledge

An algorithm based solely on reinforcement learning is introduced, without human data, guidance or domain knowledge beyond game rules, that achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Curiosity-Driven Exploration by Self-Supervised Prediction

This work formulates curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. This formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and ignores the aspects of the environment that cannot affect the agent.
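The intrinsic reward described above is the forward-model prediction error in a learned feature space. A minimal sketch under simplifying assumptions: the real method learns the feature encoder via an inverse-dynamics objective and trains the forward model online, whereas here both are fixed random linear maps purely to show the reward computation (all names and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; a fixed random projection stands in for the
# learned feature encoder phi, and a fixed linear map for the forward model.
obs_dim, feat_dim, n_actions = 16, 8, 4
encoder = rng.normal(size=(obs_dim, feat_dim))
forward_w = rng.normal(size=(feat_dim + n_actions, feat_dim)) * 0.1

def phi(obs):
    # Feature embedding of an observation.
    return obs @ encoder

def curiosity_reward(obs, action, next_obs):
    """Intrinsic reward = squared error of the forward model predicting
    phi(next_obs) from phi(obs) and a one-hot action."""
    a = np.eye(n_actions)[action]
    pred = np.concatenate([phi(obs), a]) @ forward_w
    return 0.5 * float(np.sum((pred - phi(next_obs)) ** 2))
```

Transitions the model predicts well yield little reward, so the agent is pushed toward states whose dynamics it has not yet mastered.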

Hindsight Experience Replay

A novel technique is presented which allows sample-efficient learning from rewards that are sparse and binary, avoiding the need for complicated reward engineering; it may be seen as a form of implicit curriculum.
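The mechanism behind this implicit curriculum is goal relabeling: transitions from a failed episode are stored again with the goal replaced by a state the agent actually reached, so that sparse binary rewards still produce successful examples. A sketch of the "future" relabeling strategy, assuming simple tuple-valued transitions (the names here are illustrative, not the paper's code):

```python
import random

def her_relabel(trajectory, k=4, seed=0):
    """Hindsight Experience Replay sketch. Each transition is
    (state, action, next_state, goal); binary reward is 1 iff
    next_state equals the goal. For each transition we also store
    k copies whose goal is an achieved state from later in the
    episode, turning failures into successes."""
    rng = random.Random(seed)
    replay = []
    for t, (s, a, s_next, goal) in enumerate(trajectory):
        # Original transition with its (likely zero) sparse reward.
        replay.append((s, a, s_next, goal, int(s_next == goal)))
        for _ in range(k):
            # "Future" strategy: sample an achieved state from t onwards.
            future = rng.randrange(t, len(trajectory))
            new_goal = trajectory[future][2]
            replay.append((s, a, s_next, new_goal, int(s_next == new_goal)))
    return replay
```

Even when the original goal is never reached, the relabeled copies contain reward-1 transitions, giving the learner a dense training signal from the same experience.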

The Developmental Progression of Understanding of Mind during a Hiding Game.

Observing preschoolers engaged in a playful hiding game revealed that children's understanding of mind not only increased with age but also developed sequentially, which suggests that mothers may tailor the content of their utterances to the child's growing expertise.

Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

This paper generalises the approach into a single AlphaZero algorithm that achieved, tabula rasa, superhuman performance in many challenging domains, convincingly defeating a world-champion program in each case.

Games and the Development of Perspective Taking

It is widely acknowledged that perspective taking is fundamental to the development of the self, the development of the individual’s ability to interact meaningfully with other people, and to the