Publications
Reasoning about Entailment with Neural Attention
TLDR
This paper proposes a neural model that reads two sentences to determine entailment using long short-term memory units, extends the model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases, and presents a qualitative analysis of the attention weights the model produces.
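The word-by-word attention can be sketched in a few lines. Below is a simplified, illustrative PyTorch version, where the dimensions and names are assumptions; the paper's full model also feeds the previous attention representation r_{t-1} into each scoring step:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Simplified sketch of word-by-word attention over an encoded premise.
# Hyperparameters (k = 64) and tensor names are illustrative choices.
class WordByWordAttention(nn.Module):
    def __init__(self, k=64):
        super().__init__()
        self.W_y = nn.Linear(k, k, bias=False)  # projects premise states
        self.W_h = nn.Linear(k, k, bias=False)  # projects hypothesis state
        self.w = nn.Linear(k, 1, bias=False)    # scoring vector

    def forward(self, Y, h_t):
        # Y: (L, k) premise LSTM outputs; h_t: (k,) hypothesis state at step t
        M = torch.tanh(self.W_y(Y) + self.W_h(h_t))      # (L, k)
        alpha = F.softmax(self.w(M).squeeze(-1), dim=0)  # (L,) attention weights
        r_t = alpha @ Y  # (k,) attention-weighted premise representation
        return r_t, alpha

Y = torch.randn(10, 64)   # encoded 10-word premise
h_t = torch.randn(64)     # current hypothesis word state
r_t, alpha = WordByWordAttention()(Y, h_t)
```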
Language Models as Knowledge Bases?
TLDR
An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
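The probing setup behind this analysis is simple to reproduce in spirit: pose a fact as a cloze statement and let a pretrained masked LM fill the blank, with no fine-tuning. A minimal sketch using the Hugging Face fill-mask pipeline; the model choice and example fact are illustrative:

```python
from transformers import pipeline

# Query a pretrained masked LM with a cloze statement and inspect its
# top completions (LAMA-style probing, no fine-tuning involved).
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```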
Stance Detection with Bidirectional Conditional Encoding
Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton as "positive", "negative" or "neutral". Previous work has assumed that either …
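The conditional encoding idea is easy to sketch: encode the target first, then initialise the LSTM that reads the text with the target's final state. A minimal unidirectional PyTorch sketch, with sizes chosen for illustration; the paper runs this in both directions and combines the resulting states:

```python
import torch
import torch.nn as nn

# Conditional encoding: the target's final LSTM state initialises the
# LSTM that reads the tweet. Embedding/hidden sizes are illustrative.
class ConditionalEncoder(nn.Module):
    def __init__(self, emb=100, hid=64):
        super().__init__()
        self.target_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.text_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.classify = nn.Linear(hid, 3)  # positive / negative / neutral

    def forward(self, target_emb, text_emb):
        # target_emb: (B, T1, emb); text_emb: (B, T2, emb)
        _, state = self.target_lstm(target_emb)          # target's (h_n, c_n)
        out, (h_n, _) = self.text_lstm(text_emb, state)  # condition on target
        return self.classify(h_n[-1])                    # logits over 3 stances

logits = ConditionalEncoder()(torch.randn(2, 4, 100), torch.randn(2, 20, 100))
```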
e-SNLI: Natural Language Inference with Natural Language Explanations
TLDR
The Stanford Natural Language Inference dataset is extended with an additional layer of human-annotated natural language explanations of the entailment relations, which can be used for various goals, such as obtaining full sentence justifications of a model’s decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.
End-to-end Differentiable Proving
TLDR
It is demonstrated that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.
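The differentiable core is soft unification: symbols are compared by embedding similarity rather than exact match, so proof success scores are differentiable and gradients can flow into the symbol representations. A minimal sketch, using an RBF-style kernel as an illustrative similarity; names and dimensions are assumptions:

```python
import torch

# Soft unification: instead of requiring two symbols to match exactly,
# compare their embeddings with an RBF-style kernel. The score is in (0, 1]
# and equals 1 iff the embeddings coincide.
def soft_unify(theta_a, theta_b, mu=1.0):
    return torch.exp(-mu * torch.sum((theta_a - theta_b) ** 2))

grandpa = torch.randn(32, requires_grad=True)
grandfather = torch.randn(32, requires_grad=True)
score = soft_unify(grandpa, grandfather)
score.backward()  # gradients reach both symbol embeddings
```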
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
TLDR
This paper presents a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation -- and finds that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
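The RAG pattern itself, retrieve passages, condition the generator on each, and weight outputs by retrieval probability, can be sketched with toy stand-ins. Here `embed` and `generate` are placeholders, not the paper's actual dual encoder or seq2seq generator:

```python
import numpy as np

# Toy sketch of the RAG pattern: a dense retriever scores passages against
# the query, and per-passage generations are weighted by the retrieval
# distribution p(z | x) (the marginalisation in RAG). All names illustrative.
def embed(text):
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(16)  # stand-in for a learned dense encoder

def generate(query, passage):
    return f"answer drawn from: {passage!r}"  # stand-in for a seq2seq model

passages = ["RAG combines parametric and non-parametric memory.",
            "LSTMs process sequences one token at a time."]
query = "What does RAG combine?"

scores = np.array([embed(query) @ embed(p) for p in passages])
probs = np.exp(scores - scores.max())
probs /= probs.sum()                      # retrieval distribution p(z | x)
for p_z, passage in sorted(zip(probs, passages), reverse=True):
    print(f"{p_z:.2f}  {generate(query, passage)}")
```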
ChemSpot: a hybrid system for chemical named entity recognition
TLDR
This paper presents ChemSpot, a named entity recognition (NER) tool for identifying mentions of chemicals in natural language texts, including trivial names, drugs, abbreviations, molecular formulas and International Union of Pure and Applied Chemistry (IUPAC) entities.
Interpretation of Natural Language Rules in Conversational Machine Reading
TLDR
This paper formalises the task, develops a crowd-sourcing strategy to collect 37k task instances based on real-world rules and crowd-generated questions and scenarios, and assesses its difficulty by evaluating the performance of rule-based and machine-learning baselines.
Frustratingly Short Attention Spans in Neural Language Modeling
TLDR
This paper proposes a neural language model with a key-value attention mechanism that outputs separate representations for the key and value of a differentiable memory, as well as for encoding the next-word distribution; the model outperforms existing memory-augmented neural language models on two corpora.
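The key idea, splitting each memory slot into a key used for scoring and a value used for the context vector, fits in a few lines. A minimal sketch with illustrative dimensions; the paper's full model adds a third "predict" part for the next-word distribution:

```python
import torch
import torch.nn.functional as F

# Key-value attention over a short memory window: each past hidden state is
# split into a key (scores attention) and a value (builds the context).
hidden = torch.randn(5, 2 * 64)          # 5 past states, split into key|value
keys, values = hidden.split(64, dim=-1)  # (5, 64) each
query = torch.randn(64)                  # key part of the current state
alpha = F.softmax(keys @ query, dim=0)   # attention over the memory window
context = alpha @ values                 # (64,) value-based context vector
```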
DiCE: The Infinitely Differentiable Monte-Carlo Estimator
TLDR
DiCE is introduced: it provides a single objective that can be differentiated repeatedly, generating correct gradient estimators of any order in stochastic computation graphs (SCGs), and it is used to propose and evaluate a novel approach for multi-agent learning.
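The construction rests on the MagicBox operator, exp(tau - stop_gradient(tau)), which evaluates to 1 in the forward pass yet survives repeated differentiation. A minimal PyTorch sketch with stand-in log-probabilities and returns:

```python
import torch

# DiCE MagicBox: forward value is 1, but d/dtau magic_box(tau) =
# magic_box(tau) * d(tau), so applying it to the summed log-probabilities of
# stochastic nodes yields correct gradient estimators of any order.
def magic_box(tau):
    return torch.exp(tau - tau.detach())

logp = torch.randn(3, requires_grad=True)   # stand-in log-probs of samples
reward = torch.tensor([1.0, -0.5, 2.0])     # stand-in per-sample returns
objective = (magic_box(logp) * reward).sum()
grad, = torch.autograd.grad(objective, logp, create_graph=True)
grad2, = torch.autograd.grad(grad.sum(), logp)  # second-order estimator
print(grad, grad2)
```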