Publications
Composition in Distributional Models of Semantics
TLDR
Vector-based models of word meaning have become increasingly popular in cognitive science.
Vector-based Models of Semantic Composition
TLDR
This paper proposes a framework for representing the meaning of phrases and sentences in vector space.
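For context, the composition models evaluated in this line of work include simple additive and element-wise multiplicative functions over word vectors. The sketch below is a minimal illustration with made-up toy vectors, not the papers' actual models or data:

```python
import numpy as np

# Toy word vectors; a real distributional model would derive these from
# corpus co-occurrence statistics, with far higher dimensionality.
u = np.array([0.2, 1.1, 0.4])  # e.g. vector for "practical"
v = np.array([0.9, 0.3, 1.2])  # e.g. vector for "difficulty"

def compose_additive(u, v):
    """Additive model: the phrase vector is the sum of its parts."""
    return u + v

def compose_multiplicative(u, v):
    """Multiplicative model: element-wise product of the parts."""
    return u * v

print(compose_additive(u, v))        # [1.1 1.4 1.6]
print(compose_multiplicative(u, v))  # [0.18 0.33 0.48]
```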
Language Models Based on Semantic Composition
TLDR
We propose a novel statistical language model to capture long-range semantic dependencies.
UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF)
TLDR
In this paper we describe our 2nd place FEVER shared-task system that achieved a FEVER score of 62.52% on the provisional test set (without additional human evaluation), and 65.41% on the development set.
Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness
TLDR
We identify three factors - insensitivity, polarity and unseen pairs - and compare their impact on three SNLI models under a variety of conditions.
Syntactic and Semantic Factors in Processing Difficulty: An Integrated Measure
TLDR
In this paper we analyze reading times in terms of a single predictive measure which integrates a model of semantic composition with an incremental parser and a language model.
Automated Fact Checking in the News Room
TLDR
We present an automated fact-checking platform which, given a claim, retrieves relevant textual evidence from a document collection, predicts whether each piece of evidence supports or refutes the claim, and returns a final verdict.
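The pipeline described above (retrieve evidence, classify each piece as supporting or refuting, aggregate into a verdict) can be sketched roughly as follows; all names and the overlap-based retrieval are hypothetical placeholders, not the platform's actual components:

```python
from collections import Counter

def retrieve_evidence(claim, documents, top_k=5):
    """Naive retrieval: rank sentences by word overlap with the claim."""
    claim_words = set(claim.lower().split())
    sentences = [s for doc in documents for s in doc.split(".") if s.strip()]
    scored = sorted(sentences,
                    key=lambda s: len(claim_words & set(s.lower().split())),
                    reverse=True)
    return scored[:top_k]

def classify_stance(claim, evidence):
    """Placeholder for a trained model predicting SUPPORTS or REFUTES."""
    return "SUPPORTS"  # a real system would use an NLI-style classifier here

def fact_check(claim, documents):
    """Retrieve evidence, label each piece, and aggregate into a verdict."""
    evidence = retrieve_evidence(claim, documents)
    labels = Counter(classify_stance(claim, e) for e in evidence)
    verdict = labels.most_common(1)[0][0] if labels else "NOT ENOUGH INFO"
    return verdict, evidence
```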
The SUMMA Platform Prototype
TLDR
We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring.
Parser Adaptation to the Biomedical Domain without Re-Training
TLDR
We present a distributional approach to the problem of inducing parameters for unseen words in probabilistic parsers using the information about syntactic structure that is implicit in raw text.
Extrapolation in NLP
TLDR
We argue that extrapolation to examples outside the training space will often be easier for models that capture global structures, rather than just maximise their local fit to the training data.