Corpus ID: 4301951

Textual Entailment with Structured Attentions and Composition

@inproceedings{Zhao2016TextualEW,
  title={Textual Entailment with Structured Attentions and Composition},
  author={Kai Zhao and Liang Huang and M. Ma},
  booktitle={COLING},
  year={2016}
}
Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rocktäschel et al., 2015; Wang and Jiang, 2015) achieve state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which is…
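The soft word-alignment attention summarized above can be sketched in a few lines. The following is a minimal, hypothetical illustration in the spirit of the word-by-word attention of Rocktäschel et al. (2015): for each hypothesis word, a softmax over dot-product scores against the premise words yields an alignment distribution and an attended premise summary. The function name, the dot-product scoring, and the toy dimensions are assumptions made for illustration, not the exact formulation of any model cited here.

import numpy as np

def soft_alignment(premise, hypothesis):
    # premise:    (m, d) array of premise word representations
    # hypothesis: (n, d) array of hypothesis word representations
    # Returns (n, m) attention weights and (n, d) attended premise contexts.
    scores = hypothesis @ premise.T                # (n, m) alignment scores
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over premise words
    contexts = weights @ premise                   # soft-alignment summaries
    return weights, contexts

# Toy usage: 4 premise words, 3 hypothesis words, 8-dimensional vectors.
rng = np.random.default_rng(0)
w, c = soft_alignment(rng.normal(size=(4, 8)), rng.normal(size=(3, 8)))
print(w.shape, c.shape)  # (3, 4) (3, 8)

The limitation the abstract points to is visible in this sketch: the alignment is computed word-by-word, with no notion of syntactic structure or recursive composition over subtrees.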
Citations

SciTaiL: A Textual Entailment Dataset from Science Question Answering
TLDR
A new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem, is presented, and it is demonstrated that one can improve accuracy on SciTaiL by 5% using a new neural model that exploits linguistic structure.
Incorporating Contextual and Syntactic Structures Improves Semantic Similarity Modeling
TLDR
Extensive evaluations on eight benchmark datasets show that incorporating structural information contributes to consistent improvements over strong baselines in semantic modeling, and that the model is one of the best according to a recent reproducibility study.
Inducing Alignment Structure with Gated Graph Attention Networks for Sentence Matching
TLDR
Experimental results demonstrate that the proposed graph-based approach for sentence matching achieves state-of-the-art performance on two datasets across the tasks of natural language inference and paraphrase identification, and discussions show that the model can learn meaningful graph structure, indicating its superiority in interpretability.
DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference
TLDR
A novel dependent reading bidirectional LSTM network (DR-BiLSTM) is proposed to efficiently model the relationship between a premise and a hypothesis during encoding and inference in the natural language inference (NLI) task.
Structured Alignment Networks for Matching Sentences
TLDR
This work introduces a model of structured alignments between sentences, showing how to compare two sentences by matching their latent structures, and finds that modeling latent tree structures results in superior performance.
Tag-Enhanced Tree-Structured Neural Networks for Implicit Discourse Relation Classification
TLDR
This work employs the Tree-LSTM and Tree-GRU models, which are based on the tree structure, to encode the arguments in a relation, and further leverages the constituent tags to control the semantic composition process in these tree-structured neural networks.
Improving Natural Language Inference Using External Knowledge in the Science Questions Domain
TLDR
A combination of techniques is presented that harnesses knowledge graphs to improve performance on the NLI problem in the science questions domain, achieving new state-of-the-art performance on the SciTail science questions dataset.
Structured Attention Networks
TLDR
This work shows that structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard soft-selection approach, such as attending to partial segmentations or to subtrees.
Redundancy Localization for the Conversationalization of Unstructured Responses
TLDR
A new task, redundancy localization, is proposed to pinpoint semantic overlap between text passages; the approach demonstrates superior performance compared to a state-of-the-art entailment model and yields encouraging results when applied to a real-world dialogue.
Attentive Tree-structured Network for Monotonicity Reasoning
TLDR
An attentive tree-structured neural network is presented, consisting of a tree-based long short-term memory network (Tree-LSTM) with soft attention, designed to model the syntactic parse-tree information from the sentence pair of a reasoning task.

References

Showing 1–10 of 29 references
Reasoning about Entailment with Neural Attention
TLDR
This paper proposes a neural model that reads two sentences to determine entailment using long short-term memory units and extends this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases, and presents a qualitative analysis of attention weights produced by this model.
Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
TLDR
A Sentiment Treebank that includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences is presented, posing new challenges for sentiment compositionality, and the Recursive Neural Tensor Network is introduced.
Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering
TLDR
This work captures the alignment by using a novel probabilistic model that models tree-edit operations on dependency parse trees and treats alignments as structured latent variables, and offers a principled framework for incorporating complex linguistic features.
A Latent Discriminative Model for Compositional Entailment Relation Recognition using Natural Logic
TLDR
This paper proposes a latent discriminative model that unifies a statistical framework and a theory of Natural Logic to capture complex interactions between linguistic phenomena and suggests that alignments can be detrimental to performance if used in a manner that prevents the learning of globally optimal alignments.
Learning Natural Language Inference with LSTM
TLDR
A special long short-term memory (LSTM) architecture for NLI is proposed that remembers important mismatches critical for predicting the contradiction or neutral relationship label, achieving an accuracy of 86.1% and outperforming the state of the art.
A large annotated corpus for learning natural language inference
TLDR
The Stanford Natural Language Inference corpus is introduced, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
A machine learning approach to textual entailment recognition
Designing models for learning textual entailment recognizers from annotated examples is not an easy task, as it requires modeling the semantic relations and interactions involved between two pairs of…
Structural Representations for Learning Relations between Pairs of Texts
TLDR
This paper defines syntactic and semantic structures representing the text pairs and then applies graph and tree kernels to them for automatically engineering features in Support Vector Machines, achieving the highest accuracy in two different and important related tasks, i.e., Paraphrase Identification and Textual Entailment Recognition.
An extended model of natural logic
TLDR
A model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation, is proposed, extending past work in natural logic by incorporating both semantic exclusion and implicativity.
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
TLDR
The Tree-LSTM, a generalization of LSTMs to tree-structured network topologies, is introduced and outperforms all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences and sentiment classification.