Publications
The PASCAL Recognising Textual Entailment Challenge
This paper presents the Third PASCAL Recognising Textual Entailment Challenge (RTE-3), providing an overview of the dataset creation methodology and the submitted systems.
Improving Distributional Similarity with Lessons Learned from Word Embeddings
We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves.
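One such hyperparameter the paper transfers from word2vec to traditional count-based methods is context-distribution smoothing: raising context counts to a power (typically 0.75) before normalizing, which dampens PMI's bias toward rare contexts. A minimal pure-Python sketch of PPMI with this smoothing (function name and toy data are our own, not from the paper):

```python
from collections import Counter
from math import log

def ppmi_matrix(pairs, cds=0.75):
    """Positive PMI over (word, context) co-occurrence pairs, with
    context-distribution smoothing: context counts are raised to the
    power `cds` before normalizing, reducing PMI's preference for
    rare contexts. `cds=1.0` recovers plain PPMI."""
    wc = Counter(pairs)                      # joint (word, context) counts
    w = Counter(word for word, _ in pairs)   # word marginals
    c = Counter(ctx for _, ctx in pairs)     # context marginals
    total = sum(wc.values())
    c_smooth = {k: v ** cds for k, v in c.items()}  # smoothed context counts
    z = sum(c_smooth.values())
    ppmi = {}
    for (word, ctx), n in wc.items():
        pmi = log((n / total) / ((w[word] / total) * (c_smooth[ctx] / z)))
        ppmi[(word, ctx)] = max(0.0, pmi)    # clip negatives to zero
    return ppmi
```

This is a sketch under the paper's count-based setting; the full study also varies window size, subsampling, and the PMI shift.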
The Sixth PASCAL Recognizing Textual Entailment Challenge
This paper presents the Sixth Recognizing Textual Entailment Challenge (RTE-6).
Improving Hypernymy Detection with an Integrated Path-based and Distributional Method
We present HypeNET, an integrated path-based and distributional method for hypernymy detection, whose improved path representation alone achieves results comparable to distributional methods.
context2vec: Learning Generic Context Embedding with Bidirectional LSTM
We present a neural model for efficiently learning a generic context embedding function from large corpora, using bidirectional LSTM.
The Second PASCAL Recognising Textual Entailment Challenge
This paper describes the Second PASCAL Recognising Textual Entailment Challenge (RTE-2). We describe the RTE-2 dataset and give an overview of the submissions to the challenge.
Directional distributional similarity for lexical inference
We identify desired properties of directional (asymmetric) similarity measures for lexical inference, and specify a particular measure based on Average Precision that addresses these properties.
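The Average-Precision idea can be sketched as an inclusion score: rank the contexts of the (presumably narrower) term and ask how well the top-ranked ones are covered by the broader term's contexts, precision-at-rank style. A minimal illustration (function name and data are hypothetical; this is not the paper's exact formula):

```python
def ap_inc(u_contexts, v_contexts):
    """Average-Precision-style inclusion of u's contexts in v's.
    `u_contexts`: u's contexts ranked by salience (most salient first);
    `v_contexts`: the set of v's contexts. Asymmetric by design:
    ap_inc(u, v) != ap_inc(v, u) in general, so a narrower term can
    score high against a broader one but not vice versa."""
    hits, precisions = 0, []
    for rank, ctx in enumerate(u_contexts, start=1):
        if ctx in v_contexts:
            hits += 1
            precisions.append(hits / rank)  # precision at this rank
    return sum(precisions) / len(u_contexts) if u_contexts else 0.0
```

For example, a specific term whose top contexts are all covered by a general term scores near 1 in that direction, while the reverse direction is penalized for the general term's extra contexts.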
Supervised Open Information Extraction
We present data and methods that enable a supervised learning approach to Open Information Extraction (Open IE).
Do Supervised Distributional Methods Really Learn Lexical Inference Relations?
Distributional representations of words have recently been used in supervised settings for recognizing lexical inference relations between word pairs, such as hypernymy and entailment; we show that these supervised methods do not actually learn a relation between the two words.