context2vec: Learning Generic Context Embedding with Bidirectional LSTM
Context representations are central to various NLP tasks, such as word sense disambiguation, named entity recognition, coreference resolution, and many more. In this work we present a neural model …
  • 253 citations • 34 highly influential • Open Access
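For orientation, here is a minimal sketch of a context2vec-style encoder, assuming the architecture summarized above (a forward LSTM over the left context, a backward LSTM over the right context, and an MLP merging the two states into a context vector). The class and parameter names are illustrative, not the authors' code:

```python
# Hedged sketch of a context2vec-style context encoder (assumed
# architecture, not the published implementation).
import torch
import torch.nn as nn

class Context2VecSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.left_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.right_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # MLP that merges the two directional states into one context vector.
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )
        # Separate target-word embeddings that context vectors are matched
        # against (e.g., with negative sampling) during training.
        self.target_embed = nn.Embedding(vocab_size, embed_dim)

    def forward(self, left_ids, right_ids_reversed):
        # left_ids: tokens before the target; right_ids_reversed: tokens
        # after the target, reversed so the LSTM reads toward the target.
        _, (h_left, _) = self.left_lstm(self.embed(left_ids))
        _, (h_right, _) = self.right_lstm(self.embed(right_ids_reversed))
        context = torch.cat([h_left[-1], h_right[-1]], dim=-1)
        return self.mlp(context)
```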
A Simple Word Embedding Model for Lexical Substitution
The lexical substitution task requires identifying meaning-preserving substitutes for a target word instance in a given sentential context. Since its introduction in SemEval-2007, various models …
  • 85 citations • 22 highly influential • Open Access
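As a rough illustration of the kind of measure such a model enables, the sketch below scores a candidate substitute by averaging its embedding similarity to the target with its similarities to the context words. The function names and exact weighting are my approximation, not necessarily the paper's final formulation:

```python
# Hedged sketch of an additive substitute-scoring measure over word
# embeddings (an approximation, not the paper's exact metric).
import numpy as np

def cos(u, v):
    """Cosine similarity between two dense word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def add_score(s_vec, t_vec, context_vecs):
    """Score substitute s for target t in context C:
    (cos(s, t) + sum_c cos(s, c)) / (|C| + 1)."""
    total = cos(s_vec, t_vec) + sum(cos(s_vec, c) for c in context_vecs)
    return total / (len(context_vecs) + 1)
```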
Modeling Word Meaning in Context with Substitute Vectors
Context representations are a key element in distributional models of word meaning. In contrast to typical representations based on neighboring words, a recently proposed approach suggests to …
  • 37 citations • 11 highly influential • Open Access
The Role of Context Types and Dimensionality in Learning Word Embeddings
We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. Our …
  • 93 citations • 10 highly influential • Open Access
Dotted interval graphs and high throughput genotyping
We introduce a generalization of interval graphs, which we call dotted interval graphs (DIG). A dotted interval graph is an intersection graph of arithmetic progressions (= dotted …
  • 13 citations • 3 highly influential • Open Access
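To make the definition concrete: two vertices of a dotted interval graph are adjacent exactly when their arithmetic progressions share a point. A minimal sketch of that adjacency test (my own illustration, not code from the paper):

```python
def dotted_intervals_intersect(a1, d1, n1, a2, d2, n2):
    """True iff the progressions {a1 + i*d1 : 0 <= i <= n1} and
    {a2 + j*d2 : 0 <= j <= n2} share a point.  Brute force suffices
    for illustration; a gcd/range argument gives the same answer faster."""
    points = {a1 + i * d1 for i in range(n1 + 1)}
    return any(a2 + j * d2 in points for j in range(n2 + 1))

# {0, 3, 6, 9} and {4, 6, 8} meet at 6, so the two vertices are adjacent.
print(dotted_intervals_intersect(0, 3, 3, 4, 2, 2))  # True
```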
A Two Level Model for Context Sensitive Inference Rules
Automatic acquisition of inference rules for predicates has been commonly addressed by computing distributional similarity between vectors of argument words, operating at the word space level. A …
  • 20 citations • 2 highly influential • Open Access
Automatic Generation of Challenging Distractors Using Context-Sensitive Inference Rules
Automatically generating challenging distractors for multiple-choice gap-fill items is still an unsolved problem. We propose to employ context-sensitive lexical inference rules in order to generate …
  • 26 citations • 1 highly influential • Open Access
Probabilistic Modeling of Joint-context in Distributional Similarity
Most traditional distributional similarity models fail to capture syntagmatic patterns that group together multiple word features within the same joint context. In this work we introduce a novel …
  • 13 citations • 1 highly influential • Open Access
A Simple Language Model based on PMI Matrix Approximations
In this study, we introduce a new approach for learning language models by training them to estimate word-context pointwise mutual information (PMI), and then deriving the desired conditional …
  • 9 citations • 1 highly influential • Open Access
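The identity behind this approach is standard (the paper's specific estimator is not reproduced here): once word-context PMI is available, the conditional distribution follows directly, with a renormalization when the PMI values are only approximate.

```latex
\mathrm{PMI}(w, c) = \log \frac{P(w, c)}{P(w)\,P(c)}
\;\;\Longrightarrow\;\;
P(w \mid c) = \frac{P(w, c)}{P(c)} = P(w)\, e^{\mathrm{PMI}(w, c)},
\qquad
\hat{P}(w \mid c) \propto P(w)\, e^{\widehat{\mathrm{PMI}}(w, c)}.
```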
Using Lexical Expansion to Learn Inference Rules from Sparse Data
Automatic acquisition of inference rules for predicates is widely addressed by computing distributional similarity scores between vectors of argument words. In this scheme, prior work typically …
  • 5 citations • 1 highly influential • Open Access