The Third PASCAL Recognizing Textual Entailment Challenge

Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, William B. Dolan


A transformation-driven approach for recognizing textual entailment
This work presents an approach based on syntactic transformations and machine learning techniques, designed to fit a new type of available data set that is larger but less complex than the data sets used in the past.
Entailment-based Fully Automatic Technique for Evaluation of Summaries
A fully automatic technique for evaluating text summaries without preparing gold-standard summaries, based on a combination of lexical entailment, lexical distance, chunk, named-entity, and syntactic textual entailment (TE) modules.
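The lexical entailment module mentioned above can be illustrated with a toy word-overlap baseline (a minimal sketch for illustration only; the function name and threshold are invented and this is not the paper's actual module):

```python
def lexical_overlap_entailment(text: str, hypothesis: str,
                               threshold: float = 0.8) -> str:
    """Toy lexical baseline: predict entailment when most hypothesis
    tokens also occur in the text."""
    text_tokens = set(text.lower().split())
    hyp_tokens = set(hypothesis.lower().split())
    if not hyp_tokens:
        return "entailment"  # an empty hypothesis is trivially entailed
    overlap = len(hyp_tokens & text_tokens) / len(hyp_tokens)
    return "entailment" if overlap >= threshold else "no-entailment"
```

For example, `lexical_overlap_entailment("The cat sat on the mat", "the cat sat")` returns `"entailment"` because every hypothesis token appears in the text; real modules of this kind add stemming, synonymy, and weighting on top of this idea.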
What do We Know about Conversation Participants: Experiments on Conversation Entailment
The challenges of conversation entailment are described based on the collected data, and a probabilistic framework that incorporates conversation context into entailment prediction is presented.
Understanding by Understanding Not: Modeling Negation in Language Models
By training BERT with an unlikelihood objective based on negated generic sentences from a raw-text corpus, this work reduces the mean top-1 error rate to 4% on the negated LAMA dataset.
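The unlikelihood objective referred to above can be sketched in its simplest scalar form, -log(1 - p), which penalizes the model for assigning high probability p to an undesired (e.g. negated) continuation (a minimal sketch of the standard unlikelihood loss, not the paper's exact training setup):

```python
import math

def unlikelihood_loss(p_negative: float, eps: float = 1e-8) -> float:
    """Unlikelihood loss for one undesired token/statement: near zero
    when the model assigns it low probability, large as p approaches 1."""
    return -math.log(max(1.0 - p_negative, eps))
```

The loss is 0 when the undesired probability is 0 and grows without bound as it approaches 1; in practice it is summed over negative tokens and combined with the usual likelihood objective on positive text.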
FENAS: Flexible and Expressive Neural Architecture Search
This work proposes a novel architecture search algorithm called Flexible and Expressive Neural Architecture Search (FENAS), with a more flexible and expressive search space than ENAS in terms of activation functions, input edges, and atomic operations.
Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
This work proposes Conpono, an inter-sentence objective for pretraining language models that models discourse coherence and the distance between sentences, and shows that Conpono yields absolute gains of 2%-6% even on tasks that do not explicitly evaluate discourse: textual entailment, commonsense reasoning, and reading comprehension.
Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches
This paper aims to provide an overview of existing tasks and benchmarks, knowledge resources, and learning and inference approaches toward commonsense reasoning for natural language understanding to support a better understanding of the state of the art, its limitations, and future challenges.
Unifying Question Answering and Text Classification via Span Extraction
A unified span-extraction approach leads to superior or comparable performance in multi-task learning, low-data, and supplementary supervised pretraining experiments on several text classification and question answering benchmarks.
Sentiment-Stance-Specificity (SSS) Dataset: Identifying Support-based Entailment among Opinions
A set of rules based on three components (sentiment, stance, and specificity) is proposed to automatically predict support-based entailment from opinions in hotel reviews using a distant supervision approach.
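A rule combination over sentiment, stance, and specificity can be illustrated with a hypothetical sketch (the field names and the specific rules below are invented for illustration and are not the paper's actual rule set):

```python
def supports(opinion_a: dict, opinion_b: dict) -> bool:
    """Toy support-based entailment: opinion A supports opinion B when
    they share stance and sentiment and A is at least as specific."""
    return (opinion_a["stance"] == opinion_b["stance"]
            and opinion_a["sentiment"] == opinion_b["sentiment"]
            and opinion_a["specificity"] >= opinion_b["specificity"])
```

For example, a detailed positive pro-hotel opinion would support a more general one with the same stance and sentiment, but not an opinion of opposite polarity.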
Analyzable Legal Yes/No Question Answering System using Linguistic Structures
A yes/no question answering system for the legal domain that uses linguistic analysis to find correspondences between predicates and arguments in problem sentences and knowledge-source sentences; results show that precise linguistic analyses are effective even without a big-data machine learning approach.