Publications
Improving Hypernymy Detection with an Integrated Path-based and Distributional Method
TLDR
An improved path-based algorithm is suggested, in which dependency paths are encoded using a recurrent neural network; it achieves results comparable to distributional methods.
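The core idea, as summarized above, is to encode each dependency path connecting a candidate term pair with an RNN and to combine the pooled path representation with the terms' distributional vectors before classifying. Below is a minimal PyTorch sketch of that combination; the class name, dimensions, single-id edge encoding, and mean pooling are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: encode dependency paths with an LSTM and concatenate the pooled path
# vector with the two terms' word vectors for hypernymy classification.
# Illustrative names and shapes only (each path edge is reduced to one id here).
import torch
import torch.nn as nn

class PathBasedHypernymyClassifier(nn.Module):
    def __init__(self, num_edge_types, edge_dim=50, path_dim=100, word_dim=300):
        super().__init__()
        self.edge_emb = nn.Embedding(num_edge_types, edge_dim)
        self.path_lstm = nn.LSTM(edge_dim, path_dim, batch_first=True)
        self.classifier = nn.Linear(2 * word_dim + path_dim, 2)

    def forward(self, x_vec, y_vec, paths):
        # paths: (num_paths, path_len) edge ids for the dependency paths
        # connecting candidate terms x and y in the corpus.
        _, (h, _) = self.path_lstm(self.edge_emb(paths))
        path_repr = h[-1].mean(dim=0)              # average over all paths
        features = torch.cat([x_vec, y_vec, path_repr], dim=-1)
        return self.classifier(features)           # logits: hypernym vs. not
```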
Breaking NLI Systems with Sentences that Require Simple Lexical Inferences
TLDR
A new NLI test set is created that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge, demonstrating that these systems are limited in their generalization ability.
Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection
TLDR
Comparison to the state-of-the-art supervised methods shows that while supervised methods generally outperform the unsupervised ones, the former are sensitive to the distribution of training instances, hurting their reliability.
Unsupervised Commonsense Question Answering with Self-Talk
TLDR
An unsupervised framework based on self-talk is presented as a novel alternative for multiple-choice commonsense tasks, inspired by inquiry-based discovery learning; it improves performance on several benchmarks and competes with models that obtain knowledge from external KBs.
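Roughly, self-talk prompts a language model for clarification questions and answers and appends them to the context before scoring the answer choices. The sketch below assumes a Hugging Face causal LM; the model name (gpt2), prompt templates, and scoring-by-loss heuristic are illustrative, not the paper's exact prompts or models.

```python
# Sketch of the self-talk idea: elicit a clarification question and answer from
# a causal LM, enrich the context with the answer, then pick the choice the LM
# finds most plausible (lowest loss). Prompts and model are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def generate(prompt, max_new_tokens=20):
    ids = tok(prompt, return_tensors="pt").input_ids
    out = lm.generate(ids, max_new_tokens=max_new_tokens, do_sample=True,
                      top_p=0.9, pad_token_id=tok.eos_token_id)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

def lm_loss(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return lm(ids, labels=ids).loss.item()

def self_talk_answer(context, question, choices):
    clar_q = generate(f"{context} What is the definition of")
    clar_a = generate(f"{context} What is the definition of{clar_q} It is")
    enriched = f"{context} {clar_a}"
    # Choose the answer that is most likely given the enriched context.
    return min(choices, key=lambda c: lm_loss(f"{enriched} {question} {c}"))
```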
Revisiting Joint Modeling of Cross-document Entity and Event Coreference Resolution
TLDR
This work jointly models entity and event coreference, and proposes a neural architecture for cross-document coreference resolution that represents each mention by its lexical span, its surrounding context, and its relation to entity (event) mentions via predicate-argument structures.
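A rough sketch of such a pairwise mention scorer follows: each mention is represented by a span embedding, a context embedding, and an embedding of its linked entity or event arguments, and mention pairs are scored with a small MLP. The dimensions, the element-wise-product feature, and the class name are assumptions for illustration, not the paper's exact model.

```python
# Sketch: score whether two cross-document mentions corefer, using span,
# context, and predicate-argument embeddings. Shapes are illustrative.
import torch
import torch.nn as nn

class PairwiseCorefScorer(nn.Module):
    def __init__(self, dim=1024, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * (3 * dim), hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def mention_repr(self, span, context, args):
        # span: pooled token vectors of the mention; context: sentence vector;
        # args: pooled vectors of linked entity/event argument mentions.
        return torch.cat([span, context, args], dim=-1)

    def forward(self, m1, m2):
        r1, r2 = self.mention_repr(*m1), self.mention_repr(*m2)
        # Pairwise features: both mentions plus their element-wise product.
        return self.mlp(torch.cat([r1, r2, r1 * r2], dim=-1))
```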
Path-based vs. Distributional Information in Recognizing Lexical Semantic Relations
TLDR
An integrated neural method for hypernymy detection is extended, and it is shown that the path-based information source always contributes to the classification; the cases in which it mostly complements the distributional information are identified.
SemEval-2018 Task 9: Hypernym Discovery
TLDR
This paper puts forward this task as a complementary benchmark for modeling hypernymy, a problem which has traditionally been cast as a binary classification task taking a pair of candidate words as input.
Social Chemistry 101: Learning to Reason about Social and Moral Norms
TLDR
A new conceptual formalism is introduced to study people's everyday social norms and moral judgments over a rich spectrum of real-life situations described in natural language, along with a model framework, Neural Norm Transformer, that learns and generalizes Social-Chem-101 to successfully reason about previously unseen situations, generating relevant (and potentially novel) attribute-aware social rules-of-thumb.
Paraphrasing vs Coreferring: Two Sides of the Same Coin
TLDR
This work used annotations from an event coreference dataset as distant supervision to re-score heuristically-extracted predicate paraphrases, and used the same re-ranking features as additional inputs to a state-of-the-art event coreference resolution model, which yielded modest but consistent improvements to the model's performance.
Thinking Like a Skeptic: Defeasible Inference in Natural Language
TLDR
Based on Defeasible NLI, both a classification and a generation task for defeasible inference are developed, and it is demonstrated that the generation task is much more challenging.