Smoothing Entailment Graphs with Language Models

Nick McKenna and Mark Steedman
The diversity and Zipfian frequency distribution of natural language predicates in corpora lead to sparsity when learning Entailment Graphs (EGs). As symbolic models for natural language inference, EGs cannot recover if a novel premise or hypothesis is missing at test-time. In this paper we approach the problem of vertex sparsity by introducing a new method of graph smoothing, using a Language Model to find the nearest approximations of missing predicates. We improve recall by 25.1 and 16.3 absolute…
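The smoothing idea described above can be sketched as a nearest-neighbour lookup over predicate embeddings: an out-of-graph predicate is mapped to its closest in-graph predicate by cosine similarity. The hand-set vectors and the `smooth`/`embed` names below are illustrative stand-ins (a real system would embed predicates with a pretrained Language Model), not the paper's implementation:

```python
import math

# Toy stand-in for LM predicate embeddings. A real system would obtain
# these from a pretrained Language Model; the values here are invented.
GRAPH_EMBEDDINGS = {
    "purchase": [0.9, 0.1, 0.0],
    "acquire":  [0.7, 0.4, 0.1],
    "visit":    [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def smooth(predicate, embed, graph_embeddings):
    """Map an out-of-graph predicate to its nearest in-graph neighbour.
    In-graph predicates pass through unchanged."""
    if predicate in graph_embeddings:
        return predicate
    query = embed(predicate)
    return max(graph_embeddings, key=lambda p: cosine(query, graph_embeddings[p]))

# "buy" is missing from the graph; its (toy) embedding is close to "purchase".
embed = lambda p: {"buy": [0.88, 0.15, 0.02]}[p]
print(smooth("buy", embed, GRAPH_EMBEDDINGS))  # prints "purchase"
```

Once an approximation is found, inference proceeds over the existing graph edges as if the original predicate had been present.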

Learning Typed Entailment Graphs with Global Soft Constraints

This paper presents a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph, and shows large improvements over local similarity scores on two entailment data sets.

Multivalent Entailment Graphs for Question Answering

It is shown that directional entailment is more helpful for inference than non-directional similarity on questions of fine-grained semantics, and that drawing on evidence across valencies answers more questions than using only same-valency evidence.

Efficient Global Learning of Entailment Graphs

This article presents methods for learning transitive graphs that contain tens of thousands of nodes, where nodes represent predicates and edges correspond to entailment rules (termed entailment graphs), and demonstrates that these methods for the first time scale to large graphs containing 20,000 nodes and more than 100,000 edges.

Cross-lingual Inference with A Chinese Entailment Graph

This paper presents the first pipeline for building Chinese entailment graphs, which involves a novel high-recall open relation extraction (ORE) method and the first Chinese fine-grained entity typing dataset under the FIGER type ontology.

Entailment Graph Learning with Textual Entailment and Soft Transitivity

Typed entailment graphs try to learn the entailment relations between predicates from text and model them as edges between predicate nodes. The construction of entailment graphs usually suffers from…

Duality of Link Prediction and Entailment Graph Induction

This paper proposes an entailment score that exploits the new facts discovered by the link prediction model, and then form entailment graphs between relations, which are then used to predict improved link prediction scores.

Combined Distributional and Logical Semantics

This work introduces a new approach to semantics which combines the benefits of distributional and formal logical semantics; it outperforms a variety of existing approaches on a wide-coverage question answering task and demonstrates the ability to make complex multi-sentence inferences involving quantifiers on the FraCaS suite.

Distributional Inclusion Hypothesis for Tensor-based Composition

This paper focuses on inclusion properties of tensors; its main contribution is a theoretical and experimental analysis of how feature inclusion works in different concrete models of verb tensors.
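The feature-inclusion idea can be illustrated, in its simplest non-tensor form, as set inclusion over context features: under the Distributional Inclusion Hypothesis, if a narrower predicate entails a broader one, the narrower predicate's contexts should be contained in the broader one's. The predicates and context sets below are invented for illustration and do not come from any corpus:

```python
def dih_entails(features_premise, features_hypothesis):
    """Distributional Inclusion Hypothesis check: the premise's observed
    context features should be a subset of the hypothesis's."""
    return features_premise <= features_hypothesis  # set inclusion

# Toy context-feature sets (illustrative, not from any corpus).
contexts = {
    "sprint": {"run-fast", "race", "move"},
    "move":   {"run-fast", "race", "move", "walk", "travel"},
}
print(dih_entails(contexts["sprint"], contexts["move"]))  # prints True
```

The tensor-based models analysed in the paper generalise this subset test to graded inclusion between composed verb representations; the sketch above only shows the underlying hypothesis.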

Global Learning of Focused Entailment Graphs

A graph structure over predicates is defined that represents entailment relations as directed edges, and a global transitivity constraint on the graph is used to learn the optimal set of edges, by formulating the optimization problem as an Integer Linear Program.
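The global optimization described above can be sketched, under toy assumptions, by brute-force enumeration instead of an actual ILP solver: choose the edge set with the highest total local score among all transitively closed sets. The predicates and scores below are invented, and enumeration is only feasible at this toy scale; real systems formulate the same objective as an Integer Linear Program:

```python
from itertools import combinations

# Toy local entailment scores between three predicates (invented values;
# negative scores penalise adding an unlikely edge).
score = {("buy", "acquire"): 0.8, ("acquire", "own"): 0.7,
         ("buy", "own"): -0.2, ("acquire", "buy"): -0.5,
         ("own", "buy"): -0.9, ("own", "acquire"): -0.6}

def transitive(edges):
    """True if the directed edge set is transitively closed."""
    return all((a, c) in edges
               for a, b in edges for b2, c in edges
               if b == b2 and a != c)

def best_transitive_graph(score):
    """Brute-force stand-in for the ILP: maximise total score over all
    transitively closed edge sets."""
    candidates = list(score)
    best, best_val = frozenset(), 0.0
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            edges = frozenset(subset)
            if transitive(edges):
                val = sum(score[e] for e in edges)
                if val > best_val:
                    best, best_val = edges, val
    return best

print(sorted(best_transitive_graph(score)))
# prints [('acquire', 'own'), ('buy', 'acquire'), ('buy', 'own')]
```

Note how the low-scoring edge buy→own is included anyway: the transitivity constraint forces it once buy→acquire and acquire→own are both selected, and the combined score still beats any smaller transitive set.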

Deep Contextualized Word Representations

A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to mix different types of semi-supervision signals.