Publications
Actor-Attention-Critic for Multi-Agent Reinforcement Learning
TLDR
This work presents an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism to select relevant information for each agent at every timestep, enabling more effective and scalable learning in complex multi-agent environments than recent approaches.
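The shared attention step described above can be illustrated with a short sketch. The following is a hypothetical PyTorch implementation of a centralized critic in which each agent attends over the other agents' encoded observation-action pairs; the class name, dimensions, and network shapes are invented for illustration and this is not the paper's reference code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCritic(nn.Module):
    """Sketch of a centralized critic where each agent attends over the
    encoded observation-action pairs of the other agents."""

    def __init__(self, n_agents, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Linear(obs_dim + act_dim, hidden) for _ in range(n_agents)])
        # Query/key/value projections are shared across agents.
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        self.q_heads = nn.ModuleList(
            [nn.Linear(2 * hidden, act_dim) for _ in range(n_agents)])

    def forward(self, obs, acts):
        # obs, acts: per-agent lists of [batch, obs_dim] / [batch, act_dim] tensors
        e = [F.relu(enc(torch.cat([o, a], dim=-1)))
             for enc, o, a in zip(self.encoders, obs, acts)]
        q_values = []
        for i, e_i in enumerate(e):
            others = [e_j for j, e_j in enumerate(e) if j != i]
            keys = torch.stack([self.key(x) for x in others], dim=1)    # [B, N-1, H]
            vals = torch.stack([self.value(x) for x in others], dim=1)  # [B, N-1, H]
            query = self.query(e_i).unsqueeze(1)                        # [B, 1, H]
            attn = F.softmax(query @ keys.transpose(1, 2)
                             / keys.shape[-1] ** 0.5, dim=-1)           # [B, 1, N-1]
            context = (attn @ vals).squeeze(1)                          # [B, H]
            q_values.append(self.q_heads[i](torch.cat([e_i, context], dim=-1)))
        return q_values  # one [batch, act_dim] Q-vector per agent
```

In this sketch the attention projections are shared while each agent keeps its own encoder and Q-head, so the same mechanism can be applied to any number of agents while still producing per-agent value estimates.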
Wearable Eye-tracking for Research: Automated dynamic gaze mapping and accuracy/precision comparisons across devices
TLDR
An automated analysis pipeline for mapping gaze data from an egocentric coordinate system to a fixed reference coordinate system allows researchers to study aggregate viewing behavior on a 2D planar target stimulus without restricting the mobility of participants.
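In practice, such a mapping is commonly implemented by estimating a planar homography between each scene-camera frame and a reference image of the target stimulus. The sketch below uses OpenCV for this and is purely illustrative; the function name, feature settings, and thresholds are assumptions, not the pipeline described in the paper.

```python
import cv2
import numpy as np

def map_gaze_to_reference(scene_frame, reference_img, gaze_xy):
    """Map a gaze point from scene-camera (egocentric) pixel coordinates into
    the fixed frame of a planar reference image, using ORB feature matches and
    a RANSAC homography. Returns None if no reliable mapping is found."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_s, des_s = orb.detectAndCompute(scene_frame, None)
    kp_r, des_r = orb.detectAndCompute(reference_img, None)
    if des_s is None or des_r is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_s, des_r), key=lambda m: m.distance)[:200]
    if len(matches) < 8:
        return None
    src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    gaze = np.float32([[gaze_xy]])           # shape (1, 1, 2)
    mapped = cv2.perspectiveTransform(gaze, H)
    return tuple(mapped[0, 0])               # (x, y) in reference-image pixels
```

Running this per frame yields gaze positions in a common reference frame, which can then be aggregated across participants regardless of where each participant was standing.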
Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning
TLDR
It is argued that exploration in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to the regions of the state space they explore, and intrinsic rewards are proposed that let agents coordinate their exploration while maximizing extrinsic returns.
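One simple way to express this coordination idea is a count-based intrinsic bonus computed against the whole team's visitation counts, so that an agent is not rewarded for re-exploring regions a teammate has already covered. The toy sketch below uses invented names and is not the reward definition from the paper.

```python
from collections import defaultdict

class TeamNoveltyBonus:
    """Toy count-based intrinsic reward: an agent receives a bonus only for
    states the whole team has rarely visited, pushing agents toward different
    regions of the state space."""

    def __init__(self, n_agents, beta=0.1):
        self.counts = [defaultdict(int) for _ in range(n_agents)]
        self.beta = beta

    def bonus(self, agent_id, state_key):
        # Novelty is measured against every agent's visitation counts, not just
        # the acting agent's own history.
        team_count = sum(c[state_key] for c in self.counts)
        self.counts[agent_id][state_key] += 1
        return self.beta / (1.0 + team_count) ** 0.5
```

During training, each agent's learning signal would combine the environment reward with this bonus, e.g. r_i = r_ext + bonus(i, s).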
A Goal-Based Movement Model for Continuous Multi-Agent Tasks
Despite increasing attention paid to the need for fast, scalable methods to analyze next-generation neuroscience data, comparatively little attention has been paid to the development of similar …
Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning
TLDR
This work proposes to use value function factorization with random subsets of entities in each factor as an auxiliary objective, in order to disentangle value predictions from irrelevant entities; results suggest that such an approach helps agents learn more effectively in multi-agent settings by discovering common trajectories across episodes within sub-groups of agents/entities.
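A rough sketch of how such an auxiliary objective might look, assuming the model already produces a full value estimate and two sub-group estimates for a randomly sampled entity partition; the function names, mask construction, and loss form are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def random_entity_groups(batch_size, n_entities, device="cpu"):
    """Sample a random binary partition of entities into two sub-groups for
    each sample in the batch (a stand-in for random entity-wise masks)."""
    mask = torch.randint(0, 2, (batch_size, n_entities), device=device).bool()
    return mask, ~mask

def auxiliary_factored_loss(q_full, q_group_a, q_group_b, td_target):
    """Auxiliary objective: the sum of the sub-group value estimates is
    regressed toward the same TD target as the full estimate, encouraging
    value predictions that ignore entities outside the sampled sub-group."""
    main = F.mse_loss(q_full, td_target)
    aux = F.mse_loss(q_group_a + q_group_b, td_target)
    return main + aux
```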
AI-QMIX: Attention and Imagination for Dynamic Multi-Agent Reinforcement Learning
TLDR
This work extends QMIX for dynamic MARL in two ways, proposing a method that can learn sub-group relationships and how they can be combined, ultimately improving knowledge sharing and generalization across scenarios.
When MAML Can Adapt Fast and How to Assist When It Cannot
TLDR
This work finds that MAML adapts better with a deep architecture even if the tasks need only a shallow one (and thus no representation learning is needed), and that upper layers enable fast adaptation because they are meta-learned to perform an adaptive gradient update when generalizing to new tasks.
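For context, the adaptation step being analyzed is MAML's inner loop, sketched minimally below (assumes PyTorch ≥ 2.0 for torch.func.functional_call; the function and variable names are illustrative, not the paper's code).

```python
import torch
from torch.func import functional_call

def maml_inner_adapt(model, loss_fn, support_x, support_y, inner_lr=0.01, steps=1):
    """One or more inner-loop gradient steps of MAML on a single task's support
    set, returning adapted parameters without modifying the model in place."""
    params = {k: v for k, v in model.named_parameters()}
    for _ in range(steps):
        preds = functional_call(model, params, (support_x,))
        loss = loss_fn(preds, support_y)
        # create_graph=True keeps the graph so the outer loop can differentiate
        # through the adaptation step (the higher-order MAML update).
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: v - inner_lr * g for (k, v), g in zip(params.items(), grads)}
    return params

# Outer loop (per task): evaluate the adapted parameters on the query set and
# backpropagate that loss into the original (meta) parameters, e.g.
#   adapted = maml_inner_adapt(model, torch.nn.functional.mse_loss, xs, ys)
#   query_loss = torch.nn.functional.mse_loss(
#       functional_call(model, adapted, (xq,)), yq)
#   query_loss.backward()
```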
Toward Sim-to-Real Directional Semantic Grasping
TLDR
This work addresses the problem of directional semantic grasping using a double deep Q-network that learns to map downsampled RGB input images from a wrist-mounted camera to Q-values, which are then translated into Cartesian robot control commands via the cross-entropy method (CEM).
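The CEM step mentioned above works roughly as follows: sample candidate Cartesian actions, score them with the learned Q-function, refit a Gaussian to the best candidates, and repeat. The sketch below is generic; q_fn and the hyperparameters are placeholders rather than the paper's settings.

```python
import numpy as np

def cem_action(q_fn, act_dim, iters=3, pop=64, elite_frac=0.1, init_std=0.5):
    """Cross-entropy method over a continuous action space: q_fn is assumed to
    map a batch of candidate actions [pop, act_dim] to scalar Q-values [pop]."""
    mean = np.zeros(act_dim)
    std = np.full(act_dim, init_std)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = np.random.normal(mean, std, size=(pop, act_dim))
        scores = np.asarray(q_fn(samples))
        elites = samples[np.argsort(scores)[-n_elite:]]     # highest-Q candidates
        mean = elites.mean(axis=0)
        std = elites.std(axis=0) + 1e-6
    return mean  # the action ultimately issued as a Cartesian control command
```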
Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning
TLDR
This work begins with an experimental analysis of MAML, finding that deep models are crucial for its success, even given sets of simple tasks where a linear model would suffice on any individual task.