Creating Causal Embeddings for Question Answering with Minimal Supervision

@inproceedings{Sharp2016CreatingCE,
  title={Creating Causal Embeddings for Question Answering with Minimal Supervision},
  author={Rebecca Sharp and Mihai Surdeanu and Peter Alexander Jansen and Peter Clark and Michael Hammond},
  booktitle={EMNLP},
  year={2016}
}
A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps… 
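The contrast the abstract draws — general-purpose relatedness versus task-specific (here, causal) relatedness — can be illustrated with a minimal answer-reranking sketch. The embedding tables and words below are hand-made stand-ins for illustration only, not the paper's actual trained vectors:

```python
from math import sqrt

# Toy embedding tables (hypothetical). In the paper, causal embeddings are
# trained on cause-effect pairs extracted with minimal supervision; here both
# tables are hand-crafted purely to show how reranking changes with the space.
GENERAL = {
    "smoking": [1.0, 0.2], "cancer": [0.9, 0.3], "cigarette": [0.95, 0.25],
}
CAUSAL = {
    "smoking": [0.1, 1.0], "cancer": [0.2, 0.95], "cigarette": [0.9, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_answers(question_word, answers, table):
    """Rank candidate answer words by embedding similarity to the question word."""
    return sorted(answers,
                  key=lambda a: cosine(table[question_word], table[a]),
                  reverse=True)

# Under general similarity, "cigarette" looks closest to "smoking";
# under the causal table, the effect "cancer" is ranked first instead.
print(rank_answers("smoking", ["cancer", "cigarette"], GENERAL))
print(rank_answers("smoking", ["cancer", "cigarette"], CAUSAL))
```

The point of the sketch: for a causal question like "What does smoking cause?", a general-purpose space rewards lexical association (cigarette), while a task-specific space can reward the causally related answer (cancer).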

Citations

Answering Binary Causal Questions: A Transfer Learning Based Approach

TLDR
A transfer learning-based approach is proposed which fine-tunes pretrained transformer-based language models on a small dataset of cause-effect pairs to detect causality and answer binary causal questions.

Enhancing Multiple-Choice Question Answering with Causal Knowledge

TLDR
Novel strategies for the representation of causal knowledge are presented and the empirical results demonstrate the efficacy of augmenting pretrained models with external causal knowledge for multiple-choice causal question answering.

Answering Binary Causal Questions Through Large-Scale Text Mining: An Evaluation Using Cause-Effect Pairs from Human Experts

TLDR
The goal is to analyze the ability of an AI agent built using state-of-the-art unsupervised methods in answering causal questions derived from collections of cause-effect pairs from human experts.

Learning Faithful Representations of Causal Graphs

TLDR
By incorporating the faithfulness property of contextual embeddings to capture geometric distance-based properties of directed acyclic causal graphs, this paper learns text embeddings that are 31.3% more faithful to human-validated causal graphs and achieve 21.1% better Precision-Recall AUC in a link prediction fine-tuning task.

Semi-Distantly Supervised Neural Model for Generating Compact Answers to Open-Domain Why Questions

TLDR
This work aims at generating non-redundant compact answers to why-questions from answer passages retrieved from a very large web data corpus by an existing open-domain why-question answering system, using a novel neural network obtained by extending existing summarization methods.

How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks

TLDR
It is proposed that word representation evaluation should focus on data efficiency and simple supervised tasks, where the amount of available data is varied and scores of a supervised model are reported for each subset (as is commonly done in transfer learning).

Exploiting Background Knowledge in Compact Answer Generation for Why-Questions

TLDR
A novel neural summarizer combines a recurrent neural network-based encoder-decoder model with stacked convolutional neural networks, and is designed to effectively exploit background knowledge, in this case a set of causal relations extracted from a large web data archive.

Open-Domain Why-Question Answering with Adversarial Learning to Encode Answer Texts

TLDR
This paper uses the proposed "Adversarial networks for Generating compact-answer Representation" (AGR) to generate from a passage a vector representation of the non-redundant reason sought by a why-question, and exploits that representation to judge whether the passage actually answers the why-question.

Lightly-supervised Representation Learning with Global Interpretability

We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited

Transformer-based Natural Language Generation for Question-Answering

TLDR
This work aims at generating a concise answer for a given question using an unsupervised approach that does not require annotated data and shows very promising results.

References

Showing 1–10 of 46 references

Higher-order Lexical Semantic Models for Non-factoid Answer Reranking

TLDR
This work introduces a higher-order formalism that allows all these lexical semantic models to chain direct evidence to construct indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associations.

Automatic question answering using the web: Beyond the Factoid

TLDR
A Question Answering (QA) system that goes beyond answering factoid questions is described and evaluated, by comparing the performance of baseline algorithms against the proposed algorithms for various modules in the QA system.

Joint Relational Embeddings for Knowledge-based Question Answering

TLDR
This paper proposes a novel embedding-based approach that maps natural language questions into logical forms for knowledge-base QA by leveraging semantic associations between lexical representations and KB properties in the latent space.

Learning to Rank Answers to Non-Factoid Questions from Web Collections

TLDR
This work shows that it is possible to exploit existing large collections of question–answer pairs to extract such features and train ranking models which combine them effectively, providing some of the most compelling evidence to date that complex linguistic features such as word senses and semantic roles can have a significant impact on large-scale information retrieval tasks.

Why-Question Answering using Intra- and Inter-Sentential Causal Relations

TLDR
This is the first work that uses both intra- and inter-sentential causal relations for why-QA, and a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation proposed by Hashimoto et al. (2012).

Question Answering Using Enhanced Lexical Semantic Models

TLDR
This work focuses on improving performance using models built from lexical semantic resources, and shows that these systems can be consistently and significantly improved with rich lexical semantic information, regardless of the choice of learning algorithm.

Question Answering with Subgraph Embeddings

TLDR
A system which learns to answer questions on a broad range of topics from a knowledge base using few hand-crafted features, using low-dimensional embeddings of words and knowledge base constituents to score natural language questions against candidate answers.

Bridging the lexical chasm: statistical approaches to answer-finding

TLDR
It is shown that the task of "answer-finding" differs from both document retrieval and traditional question answering, presenting challenges different from those found in these problems.

Discourse Complements Lexical Semantics for Non-factoid Answer Reranking

We propose a robust answer reranking model for non-factoid questions that integrates lexical semantics with discourse information, driven by two representations of discourse: a shallow representation

Deep Unordered Composition Rivals Syntactic Methods for Text Classification

TLDR
This work presents a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time.