MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity

@article{Hu2020MonaLogAL,
  title={MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity},
  author={Hai Hu and Qi Chen and Kyle Richardson and Atreyee Mukherjee and Lawrence S. Moss and Sandra K{\"u}bler},
  journal={ArXiv},
  year={2020},
  volume={abs/1910.08772}
}
We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus. In contrast to existing logic-based approaches, our system is intentionally designed to be as lightweight as possible, and operates using a small set of well-known (surface-level) monotonicity facts about quantifiers, lexical items and token-level polarity information. Despite its simplicity, we find our approach to be competitive with…
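To make the abstract's idea concrete, below is a minimal Python sketch of monotonicity-driven entailment generation in the spirit of MonaLog. It is not the authors' implementation: the tiny lexicon, the quantifier polarity table, and all names are illustrative assumptions for a toy "Quantifier Noun Verb" fragment. A quantifier marks each argument position as upward or downward monotone; in an upward position a word may be replaced by something more general, in a downward position by something more specific, and each such replacement yields an entailed sentence.

# A minimal sketch of monotonicity-based entailment generation.
# NOT MonaLog itself: the lexicon, polarity table, and names are
# illustrative assumptions for a toy "Quantifier Noun Verb" fragment.

# Hyponym -> hypernym pairs: "dog" entails "animal" as a predicate.
MORE_GENERAL = {"dog": "animal", "waltz": "dance"}
MORE_SPECIFIC = {v: k for k, v in MORE_GENERAL.items()}

# Polarity a quantifier assigns to its (restrictor, scope) arguments.
# "up" = upward monotone (replace with something more general),
# "down" = downward monotone (replace with something more specific).
QUANTIFIERS = {
    "every": ("down", "up"),
    "some": ("up", "up"),
    "no": ("down", "down"),
}

def entailments(sentence):
    """Generate sentences entailed by 'Quantifier Noun Verb' via single
    polarity-respecting word replacements."""
    q, noun, verb = sentence.lower().split()
    noun_pol, verb_pol = QUANTIFIERS[q]
    out = []
    for word, pol, rebuild in (
        (noun, noun_pol, lambda w: " ".join([q, w, verb])),
        (verb, verb_pol, lambda w: " ".join([q, noun, w])),
    ):
        table = MORE_GENERAL if pol == "up" else MORE_SPECIFIC
        if word in table:
            out.append(rebuild(table[word]))
    return out

print(entailments("every animal waltz"))  # ['every dog waltz', 'every animal dance']
print(entailments("no dog dance"))        # ['no dog waltz']

The full system additionally projects polarity through parse trees and searches over chains of replacements; this sketch shows only the core monotonicity step.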

Citations

Supporting Context Monotonicity Abstractions in Neural NLI Models

TLDR
This work reframes the problem of context monotonicity classification to make it compatible with transformer-based pre-trained NLI models, adds this task to the training pipeline, and introduces a sound and complete simplified monotonicity logic formalism which describes the treatment of contexts as abstract units.

Flexible Operations for Natural Language Deduction

TLDR
This paper uses a BART-based model to generate the result of applying a particular logical operation to one or more premise statements, and builds a largely automated pipeline for scraping and constructing suitable training examples from Wikipedia, which are then paraphrased to give the models the ability to handle lexical variation.

Flexible Generation of Natural Language Deductions

TLDR
This paper describes ParaPattern, a method for building models that generate deductive inferences from diverse natural language inputs without direct human supervision; it achieves 85% validity on examples of the ‘substitution’ operation from EntailmentBank without using any in-domain training data.

Learning as Abduction: Trainable Natural Logic Theorem Prover for Natural Language Inference

TLDR
This work models learning from data as abduction by reversing a theorem-proving procedure to abduce semantic relations that serve as the best explanation for the gold label of an inference problem, and implements the learning method in a tableau theorem prover for natural language.

Probing Natural Language Inference Models through Semantic Fragments

TLDR
This work proposes the use of semantic fragments—systematically generated datasets that each target a different semantic phenomenon—for probing, and efficiently improving, such capabilities of linguistic models.

NeuralLog: Natural Language Inference with Joint Neural and Logical Reasoning

TLDR
This work proposes an inference framework called NeuralLog, which utilizes both a monotonicity-based logical inference engine and a neural network language model for phrase alignment, and shows that the joint logic and neural inference system improves accuracy on the NLI task and can achieve state-of-the-art accuracy on the SICK and MED datasets.

Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation

TLDR
It is found that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure, suggesting that the BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level.

Natural Language Inference using Neural Network and Tableau Method

TLDR
This paper proposes a method to integrate a neural NLI model and a tableau proof system, the latter of which explains the reasoning process behind the natural language inference task.

Logical Inference for Counting on Semi-structured Tables

TLDR
This work proposes a logical inference system for reasoning between semi-structured tables and texts, and shows that it performs inference requiring numerical understanding between tables and texts more robustly than current neural approaches.

Probing Linguistic Information For Logical Inference In Pre-trained Language Models

TLDR
This work proposes a methodology for probing pre-trained language model representations for the knowledge that logical inference systems require but such representations often lack, and demonstrates language models' potential as semantic and background knowledge bases for supporting symbolic inference methods.

References

Showing 1-10 of 41 references

Modeling Semantic Containment and Exclusion in Natural Language Inference

TLDR
This work proposes an approach to natural language inference based on a model of natural logic that identifies valid inferences by their lexical and syntactic features, without full semantic interpretation, and incorporates both semantic exclusion and implicativity.

Natural language inference

TLDR
This dissertation explores a range of approaches to NLI, beginning with robust but approximate methods and proceeding to progressively more precise ones, and greatly extends past work in natural logic to incorporate both semantic exclusion and implicativity.

An extended model of natural logic

TLDR
A model of natural language inference is proposed which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation, extending past work in natural logic by incorporating both semantic exclusion and implicativity.

Probing Natural Language Inference Models through Semantic Fragments

TLDR
This work proposes the use of semantic fragments—systematically generated datasets that each target a different semantic phenomenon—for probing, and efficiently improving, such capabilities of linguistic models.

A Tableau Prover for Natural Logic and Language

TLDR
A theorem prover is designed for Natural Logic, a logic whose terms resemble natural language expressions; it is based on an analytic tableau method and employs syntactically and semantically motivated schematic rules.

A large annotated corpus for learning natural language inference

TLDR
The Stanford Natural Language Inference corpus is introduced, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.

Natural Language Inference with Monotonicity

TLDR
This paper describes a working system which performs natural language inference using polarity-marked parse trees, and the kind of inference performed is essentially “logical”, but it goes beyond what is representable in first-order logic.

Recognising Textual Entailment with Logical Inference

TLDR
This work incorporates model building, a technique borrowed from automated reasoning, shows that it is a useful, robust method for approximating entailment, and uses machine learning to combine these deep semantic analysis techniques with simple shallow word overlap.

LangPro: Natural Language Theorem Prover

TLDR
LangPro is an automated theorem prover for natural language that can prove semantic relations between a set of premises and a hypothesis, and achieves high results comparable to the state of the art.

Representing Meaning with a Combination of Logical and Distributional Models

TLDR
This article adopts a hybrid approach that combines logical and distributional semantics using probabilistic logic, specifically Markov Logic Networks, and releases a lexical entailment data set of 10,213 rules extracted from the SICK data set, a valuable resource for evaluating lexical entailment systems.