Higher-order Lexical Semantic Models for Non-factoid Answer Reranking

@article{Fried2015HigherorderLS,
  title={Higher-order Lexical Semantic Models for Non-factoid Answer Reranking},
  author={Daniel Fried and Peter Alexander Jansen and Gus Hahn-Powell and Mihai Surdeanu and Peter Clark},
  journal={Transactions of the Association for Computational Linguistics},
  year={2015},
  volume={3},
  pages={197--210}
}
Lexical semantic models provide robust performance for question answering, but, in general, can only capitalize on direct evidence seen during training. For example, monolingual alignment models acquire term alignment probabilities from semi-structured data such as question-answer pairs; neural network language models learn term embeddings from unstructured text. All this knowledge is then used to estimate the semantic similarity between question and answer candidates. We introduce a higher… 
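
To make the abstract's distinction concrete, here is a minimal sketch of how direct term associations can be extended to higher-order ones by chaining them through intermediate terms. The toy alignment matrix, the damping scheme, and the `qa_similarity` helper are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of "higher-order" lexical association, assuming a toy
# direct-alignment matrix; an illustration, not the paper's implementation.
import numpy as np

vocab = ["rain", "cloud", "water", "evaporation"]
idx = {w: i for i, w in enumerate(vocab)}

# Hypothetical first-order alignment probabilities P(target | source),
# e.g. estimated from question-answer pairs. Rows sum to 1.
direct = np.array([
    [0.70, 0.20, 0.10, 0.00],   # rain
    [0.30, 0.50, 0.10, 0.10],   # cloud
    [0.20, 0.10, 0.50, 0.20],   # water
    [0.00, 0.10, 0.40, 0.50],   # evaporation
])

def higher_order(matrix, hops=2, damping=0.5):
    """Combine 1..hops-step associations: one matrix power per hop,
    down-weighting longer association chains by `damping`."""
    combined = np.zeros_like(matrix)
    power = np.eye(matrix.shape[0])
    for k in range(1, hops + 1):
        power = power @ matrix
        combined += (damping ** (k - 1)) * power
    # Renormalize rows so the result is again a distribution.
    return combined / combined.sum(axis=1, keepdims=True)

def qa_similarity(question, answer, assoc):
    """Average association between each question term and its best answer term."""
    scores = [max(assoc[idx[q], idx[a]] for a in answer) for q in question]
    return sum(scores) / len(scores)

assoc = higher_order(direct, hops=2)
print(qa_similarity(["rain"], ["evaporation", "water"], assoc))
```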

Creating Causal Embeddings for Question Answering with Minimal Supervision

This work argues that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings, and implements causality as a use case.

Analyzing Linguistic Features for Answer Re-Ranking of Why-Questions

This paper addresses why-type non-factoid questions by exploring lexico-syntactic, semantic, and contextual query-dependent features, some of which are based on deep learning frameworks, to estimate the probability of an answer candidate being relevant to the question.

Spinning Straw into Gold: Using Free Text to Train Monolingual Alignment Models for Non-factoid Question Answering

It is shown that these alignment models trained directly from discourse structures imposed on free text improve performance considerably over an information retrieval baseline and a neural network language model trained on the same data.
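
As a rough illustration of the idea named in this summary, the sketch below mines artificial alignment training pairs from free text by splitting sentences at discourse markers. The marker list and splitting rule are assumptions for illustration, not the paper's exact procedure.

```python
# Toy sketch of mining artificial alignment pairs from free text using
# discourse markers; the marker set and splitting rule are assumptions.
import re

MARKERS = ["because", "so that", "therefore"]  # hypothetical marker set

def mine_pairs(sentences):
    """Split each sentence at its first discourse marker, yielding
    (premise, consequence)-style pairs usable as alignment training data."""
    pairs = []
    for sent in sentences:
        for marker in MARKERS:
            parts = re.split(rf"\b{re.escape(marker)}\b", sent, maxsplit=1)
            if len(parts) == 2:
                pairs.append((parts[0].strip(), parts[1].strip()))
                break
    return pairs

text = ["Plants wilt because they lack water.",
        "The ground is wet, therefore it rained."]
print(mine_pairs(text))
```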

Multi-hop Inference for Sentence-level TextGraphs: How Challenging is Meaningfully Combining Information for Science Question Answering?

This work empirically characterizes the difficulty of building or traversing a graph of sentences connected by lexical overlap, by evaluating chance sentence aggregation quality through 9,784 manually-annotated judgements across knowledge graphs built from three free-text corpora.
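
A minimal sketch of the structure this work evaluates, assuming a toy corpus and stopword list: sentences become nodes, with an edge wherever two sentences share a content word.

```python
# Toy sentence graph connected by lexical overlap; corpus and stopword
# list are stand-ins, not the paper's data.
STOP = {"the", "a", "is", "of", "to", "when"}

def tokens(sentence):
    return {w.lower().strip(".,") for w in sentence.split()} - STOP

def lexical_overlap_graph(sentences):
    """Return adjacency as {i: set(j)} with an edge wherever sentences
    i and j share at least one content word."""
    toks = [tokens(s) for s in sentences]
    adj = {i: set() for i in range(len(sentences))}
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if toks[i] & toks[j]:
                adj[i].add(j)
                adj[j].add(i)
    return adj

corpus = ["Water evaporates when heated.",
          "Evaporation turns water into vapor.",
          "Plants need sunlight."]
print(lexical_overlap_graph(corpus))
```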

Extracting Common Inference Patterns from Semi-Structured Explanations

This work presents a prototype tool for identifying common inference patterns from corpora of semi-structured explanations, and uses it to successfully extract 67 inference patterns from a “matter” subset of standardized elementary science exam questions that span scientific and world knowledge.

TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration

The Shared Task on Multi-Hop Inference for Explanation Regeneration tasks participants with regenerating detailed gold explanations for standardized elementary science exam questions by selecting facts from a knowledge base of semi-structured tables.

QASC: A Dataset for Question Answering via Sentence Composition

This work presents a multi-hop reasoning dataset, Question Answering via Sentence Composition (QASC), that requires retrieving facts from a large corpus and composing them to answer a multiple-choice question, and provides annotation for supporting facts as well as their composition.

Sanity Check: A Strong Alignment and Information Retrieval Baseline for Question Answering

This work proposes an unsupervised, simple, and fast alignment and information retrieval baseline that incorporates two novel contributions: a one-to-many alignment between query and document terms, and negative alignment as a proxy for discriminative information.
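
A loose interpretation of those two contributions in runnable form: each query term aligns to its top-k document terms (one-to-many), while document terms a query term barely aligns to contribute a small penalty (a crude stand-in for negative alignment). The scoring form, `top_k`, and the toy alignment table are assumptions, not the published baseline.

```python
# Loose sketch of an alignment-plus-IR scorer; all values are hypothetical.
ALIGN = {  # hypothetical P(doc_term | query_term)
    ("rain", "precipitation"): 0.6,
    ("rain", "cloud"): 0.3,
    ("rain", "desert"): 0.01,
}

def align(q, d):
    return ALIGN.get((q, d), 0.05)  # small smoothing default

def score(query_terms, doc_terms, top_k=2, neg_threshold=0.02, neg_weight=0.5):
    total = 0.0
    for q in query_terms:
        probs = sorted((align(q, d) for d in doc_terms), reverse=True)
        total += sum(probs[:top_k])              # one-to-many: best k doc terms
        # negative alignment: count doc terms the query term actively disprefers
        total -= neg_weight * sum(p < neg_threshold for p in probs)
    return total

print(score(["rain"], ["precipitation", "cloud", "desert"]))
```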

Framing QA as Building and Ranking Intersentence Answer Justifications

A question answering approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct is proposed, and it is shown that information aggregation is key to addressing the information need in complex questions.

Ranking Facts for Explaining Answers to Elementary Science Questions

Considering automated reasoning for elementary science question answering, this work addresses the novel task of generating explanations for answers from human-authored facts using a practically scalable framework of feature-rich support vector machines leveraging domain-targeted, hand-crafted features.
...

References


Learning to Rank Answers to Non-Factoid Questions from Web Collections

This work shows that it is possible to exploit existing large collections of question–answer pairs to extract such features and train ranking models which combine them effectively, providing some of the most compelling evidence to date that complex linguistic features such as word senses and semantic roles can have a significant impact on large-scale information retrieval tasks.

Discourse Complements Lexical Semantics for Non-factoid Answer Reranking

We propose a robust answer reranking model for non-factoid questions that integrates lexical semantics with discourse information, driven by two representations of discourse: a shallow representation centered around discourse markers, and a deep one based on Rhetorical Structure Theory.

Question Answering Using Enhanced Lexical Semantic Models

This work focuses on improving the performance using models of lexical semantic resources and shows that these systems can be consistently and significantly improved with rich lexical semantics information, regardless of the choice of learning algorithms.

Automatic question answering using the web: Beyond the Factoid

A Question Answering (QA) system that goes beyond answering factoid questions is described and evaluated, by comparing the performance of baseline algorithms against the proposed algorithms for various modules in the QA system.

Selectional Preferences for Semantic Role Classification

This paper demonstrates that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone, and explores a range of models based on WordNet and distributional-similarity SPs.

Statistical Machine Translation for Query Expansion in Answer Retrieval

We present an approach to query expansion in answer retrieval that uses Statistical Machine Translation (SMT) techniques to bridge the lexical gap between questions and answers.
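
As a sketch of what translation-based expansion looks like in practice, the snippet below expands query terms with their most probable answer-side translations from a (here made-up) translation table; a real system would train such a table on question-answer pairs.

```python
# Toy translation-based query expansion; the table is a made-up stand-in
# for a model trained on question-answer pairs.
TRANSLATIONS = {
    "herbs": [("remedy", 0.4), ("plant", 0.3), ("medicine", 0.2)],
    "sleeplessness": [("insomnia", 0.6), ("sleep", 0.3)],
}

def expand(query_terms, per_term=2, min_prob=0.25):
    """Append up to `per_term` sufficiently probable translations per term."""
    expanded = list(query_terms)
    for term in query_terms:
        for trans, prob in TRANSLATIONS.get(term, [])[:per_term]:
            if prob >= min_prob:
                expanded.append(trans)
    return expanded

print(expand(["herbs", "sleeplessness"]))
# ['herbs', 'sleeplessness', 'remedy', 'plant', 'insomnia', 'sleep']
```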

Bridging the lexical chasm: statistical approaches to answer-finding

It is shown that the task of “answer-finding” differs from both document retrieval and traditional question answering, presenting challenges different from those found in these problems.

Learning Parameters in Entity Relationship Graphs from Ranking Preferences

A unified model for ranking in ER graphs is presented, and an algorithm to learn the parameters of the model is proposed, which can satisfy training preferences and generalize to test preferences, and estimate meaningful model parameters that represent the relative importance of ER types.

Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors

An extensive evaluation of context-predicting models with classic, count-vector-based distributional semantic approaches, on a wide range of lexical semantics tasks and across many parameter settings shows that the buzz around these models is fully justified.
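
For concreteness, here is the “count” side of that comparison in a few lines: a windowed co-occurrence vector plus cosine similarity. The predict side would swap in learned embeddings (e.g. skip-gram vectors) behind the same interface. Corpus and window size are toy choices.

```python
# Count-based distributional vectors: windowed co-occurrence + cosine.
from collections import Counter
from math import sqrt

corpus = "the cat sat on the mat the dog sat on the rug".split()

def count_vector(target, window=2):
    """Bag of context words within `window` positions of each occurrence
    of `target` in the corpus."""
    vec = Counter()
    for i, w in enumerate(corpus):
        if w == target:
            lo, hi = max(0, i - window), i + window + 1
            vec.update(c for c in corpus[lo:hi] if c != target)
    return vec

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

print(cosine(count_vector("cat"), count_vector("dog")))
```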

Relational retrieval using a combination of path-constrained random walks

A novel learnable proximity measure is described which instead uses one weight per edge label sequence: proximity is defined by a weighted combination of simple “path experts”, each corresponding to following a particular sequence of labeled edges.
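
The “path expert” idea lends itself to a compact sketch: each edge-label sequence gets one weight, and node proximity is the weighted sum of random-walk probabilities along those sequences. The graph, paths, and weights below are hypothetical.

```python
# Toy "path experts": one weight per edge-label sequence; proximity is
# their weighted combination. Graph and weights are hypothetical.
GRAPH = {  # node -> edge label -> neighbors
    "paper1": {"cites": ["paper2", "paper3"]},
    "paper2": {"cites": ["paper3"], "written_by": ["author1"]},
    "paper3": {"written_by": ["author1"]},
}

def walk_prob(node, path, target):
    """P(reaching target from node by following the edge labels in `path`),
    choosing uniformly among matching edges at each step."""
    if not path:
        return 1.0 if node == target else 0.0
    neighbors = GRAPH.get(node, {}).get(path[0], [])
    if not neighbors:
        return 0.0
    return sum(walk_prob(n, path[1:], target) for n in neighbors) / len(neighbors)

PATH_WEIGHTS = {("cites", "written_by"): 0.7,
                ("cites", "cites", "written_by"): 0.3}

def proximity(source, target):
    return sum(w * walk_prob(source, list(p), target)
               for p, w in PATH_WEIGHTS.items())

print(proximity("paper1", "author1"))  # 0.7 * 1.0 + 0.3 * 0.5 = 0.85
```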