Publications
Combining Language and Vision with a Multimodal Skip-gram Model
TLDR: We extend the SKIP-GRAM model of Mikolov et al. (2013a), which constructs vector-based word representations by learning to predict the linguistic contexts in which target words occur in a corpus.
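The core mechanics behind this line of work can be sketched in a few lines of code. Below is a minimal, self-contained sketch (plain numpy, toy corpus, hypothetical "visual" vectors for a couple of words) of a skip-gram objective with negative sampling plus an extra term that pulls visually grounded word vectors toward their image features; it illustrates the general idea only and is not the paper's MMSKIP-GRAM formulation.

```python
# Minimal sketch: skip-gram with negative sampling + a toy visual term.
# Toy data and hyperparameters; not the paper's actual objective.
import numpy as np

rng = np.random.default_rng(0)

corpus = "the cute little cat sat on the mat while the dog slept".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, D, VIS_D = len(vocab), 16, 10

W_in = rng.normal(scale=0.1, size=(V, D))    # target ("input") word vectors
W_out = rng.normal(scale=0.1, size=(V, D))   # context ("output") word vectors
M = rng.normal(scale=0.1, size=(D, VIS_D))   # linear map into visual space

# Hypothetical image-feature vectors for the words we pretend to have images for.
visual = {w2i["cat"]: rng.normal(size=VIS_D),
          w2i["dog"]: rng.normal(size=VIS_D)}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, n_neg, lam = 0.05, 2, 3, 0.1
for epoch in range(50):
    for pos, word in enumerate(corpus):
        t = w2i[word]
        lo, hi = max(0, pos - window), min(len(corpus), pos + window + 1)
        for cpos in range(lo, hi):
            if cpos == pos:
                continue
            c = w2i[corpus[cpos]]
            # Positive pair: increase the score of the observed context word.
            v_t, v_c = W_in[t].copy(), W_out[c].copy()
            g = sigmoid(v_t @ v_c) - 1.0
            W_in[t] -= lr * g * v_c
            W_out[c] -= lr * g * v_t
            # Negative samples: decrease the score of random words.
            for n in rng.integers(0, V, size=n_neg):
                v_t, v_n = W_in[t].copy(), W_out[n].copy()
                g = sigmoid(v_t @ v_n)
                W_in[t] -= lr * g * v_n
                W_out[n] -= lr * g * v_t
        # Visual term: pull the mapped word vector toward its image features
        # (a simple squared-error stand-in for the paper's visual objective).
        if t in visual:
            diff = W_in[t] @ M - visual[t]      # (VIS_D,)
            grad_w = M @ diff                   # (D,)
            grad_M = np.outer(W_in[t], diff)    # (D, VIS_D)
            W_in[t] -= lr * lam * grad_w
            M -= lr * lam * grad_M
```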
A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning
TLDR: We propose an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection.
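To make the interplay of best responses and meta-strategies concrete, here is a toy double-oracle-style sketch on rock-paper-scissors: the restricted meta-game is solved approximately with fictitious play, and each player's exact best response to the opponent's meta-strategy is added to its policy pool. The deep-RL best responses and the meta-solvers studied in the paper are not reproduced; this only illustrates the loop structure.

```python
# Toy double-oracle loop on a matrix game; a stand-in for the general
# best-response / meta-strategy cycle, not the paper's algorithm.
import numpy as np

A = np.array([[0., -1., 1.],    # row player's payoff: rock, paper, scissors
              [1., 0., -1.],
              [-1., 1., 0.]])

def fictitious_play(G, iters=2000):
    """Approximate equilibrium of the zero-sum restricted (meta) game G."""
    n, m = G.shape
    row_counts, col_counts = np.zeros(n), np.zeros(m)
    row_counts[0] = col_counts[0] = 1
    for _ in range(iters):
        row_counts[np.argmax(G @ (col_counts / col_counts.sum()))] += 1
        col_counts[np.argmin((row_counts / row_counts.sum()) @ G)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_pool, col_pool = [0], [0]           # both players start with "rock" only
for _ in range(5):
    G = A[np.ix_(row_pool, col_pool)]   # restricted meta-game
    p, q = fictitious_play(G)
    # Best responses against the opponent's meta-strategy over the full game.
    col_mix = np.zeros(3); col_mix[col_pool] = q
    row_mix = np.zeros(3); row_mix[row_pool] = p
    br_row = int(np.argmax(A @ col_mix))
    br_col = int(np.argmin(row_mix @ A))
    if br_row not in row_pool: row_pool.append(br_row)
    if br_col not in col_pool: col_pool.append(br_col)

print(sorted(row_pool), sorted(col_pool))  # pools grow to cover rock, paper, scissors
```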
Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input
TLDR: We extend previous work, in which agents were trained in symbolic environments, by developing agents that learn from raw pixel data, a more challenging and realistic input representation.
Multi-Agent Cooperation and the Emergence of (Natural) Language
TLDR: We propose a framework for language learning that relies on multi-agent communication.
The LAMBADA dataset: Word prediction requiring a broad discourse context
TLDR: We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task.
Compositional-ly Derived Representations of Morphologically Complex Words in Distributional Semantics
TLDR: We adapt compositional methods originally developed for phrases to the task of deriving the distributional meaning of morphologically complex words from their parts.
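As a concrete, hypothetical illustration of treating an affix as a function, the sketch below estimates a matrix for a derivational affix by least squares from a handful of stem/derived-word vector pairs and applies it to an unseen stem. The vectors and word pairs are made up, and the paper's actual models and evaluation are much richer than this.

```python
# Affix-as-matrix composition sketch: learn "-er" as a linear map from stem
# vectors to derived-word vectors, then apply it to an unseen stem.
import numpy as np

rng = np.random.default_rng(1)
D = 8

# Hypothetical distributional vectors for stems and their derived forms.
stems = {"teach": rng.normal(size=D), "build": rng.normal(size=D),
         "paint": rng.normal(size=D)}
derived = {"teacher": rng.normal(size=D), "builder": rng.normal(size=D),
           "painter": rng.normal(size=D)}

X = np.stack([stems[s] for s in ("teach", "build", "paint")])          # stems
Y = np.stack([derived[d] for d in ("teacher", "builder", "painter")])  # derived

# Least-squares estimate of the affix matrix A such that stem @ A ~ derived.
A, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Compose a representation for an unseen derived form, e.g. "singer",
# from a (also hypothetical) stem vector for "sing".
sing = rng.normal(size=D)
singer_hat = sing @ A
print(singer_hat.round(2))
```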
Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning
TLDR: In this paper, we explore some general properties, both theoretical and empirical, of the cross-space mapping function, and we build on them to propose better methods to estimate it.
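A baseline version of such a cross-space mapping is easy to write down. The sketch below (random toy data) fits a ridge-regression map from image features to word vectors on seen classes and labels a new image by nearest neighbour in word space; this is the kind of least-squares mapping the paper analyses and improves on (e.g. with respect to hubness), and the paper's proposed estimators are not implemented here.

```python
# Cross-space mapping sketch for zero-shot labelling: ridge regression from
# visual space to word space, then nearest-neighbour lookup by cosine.
import numpy as np

rng = np.random.default_rng(2)
VIS_D, TXT_D, n_train = 20, 12, 50

X = rng.normal(size=(n_train, VIS_D))   # image features of seen-class examples
W = rng.normal(size=(n_train, TXT_D))   # word vectors of their labels

# Ridge regression: M = (X^T X + lambda I)^{-1} X^T W
lam = 1.0
M = np.linalg.solve(X.T @ X + lam * np.eye(VIS_D), X.T @ W)

# Candidate label vocabulary, including unseen classes (hypothetical names).
labels = ["zebra", "okapi", "tapir", "lynx"]
label_vecs = rng.normal(size=(len(labels), TXT_D))

def zero_shot_label(image_vec):
    """Map an image into word space and return the closest label by cosine."""
    q = image_vec @ M
    sims = (label_vecs @ q) / (np.linalg.norm(label_vecs, axis=1) * np.linalg.norm(q))
    return labels[int(np.argmax(sims))]

print(zero_shot_label(rng.normal(size=VIS_D)))
```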
Jointly optimizing word representations for lexical and sentential tasks with the C-PHRASE model
TLDR: We introduce C-PHRASE, a distributional semantic model that learns word representations by optimizing context prediction for phrases at all levels in a syntactic tree, from single words to full sentences.
Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world
TLDR: We present a simple approach to cross-modal vector-based semantics for the task of zero-shot learning, in which an image of a previously unseen object is mapped to a linguistic representation denoting its word.
Multimodal Word Meaning Induction From Minimal Exposure to Natural Text.
TLDR: We investigate whether minimal distributional evidence from very short passages suffices to trigger successful word learning in subjects, testing their linguistic and visual intuitions about the concepts associated with new words.