Creating Causal Embeddings for Question Answering with Minimal Supervision
- Rebecca Sharp, M. Surdeanu, Peter Alexander Jansen, Peter Clark, Michael Hammond
- Computer Science, EMNLP
- 1 September 2016
This work argues that a better approach is to look for answers that are related to the question in a way that matches its information need, which may be captured through task-specific embeddings, and implements causality as a use case.
Deep Affix Features Improve Neural Named Entity Recognizers
A practical model for named entity recognition (NER) that combines word- and character-level information with a learned representation of each word's prefixes and suffixes is proposed, and achieves state-of-the-art results on the CoNLL 2002 Spanish and Dutch and CoNLL 2003 German NER datasets.
Framing QA as Building and Ranking Intersentence Answer Justifications
A question answering approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct is proposed, and it is shown that information aggregation is key to addressing the information need in complex questions.
MathAlign: Linking Formula Identifiers to their Contextual Natural Language Descriptions
- M. Alexeeva, Rebecca Sharp, M. A. Valenzuela-Escarcega, Jennifer Kadowaki, A. Pyarelal, C. Morrison
- Computer Science, LREC
- 1 May 2020
A rule-based approach is proposed for this task, which extracts LaTeX representations of formula identifiers and links them to their in-text descriptions, given only the original PDF and the location of the formula of interest.
Spinning Straw into Gold: Using Free Text to Train Monolingual Alignment Models for Non-factoid Question Answering
It is shown that these alignment models trained directly from discourse structures imposed on free text improve performance considerably over an information retrieval baseline and a neural network language model trained on the same data.
On the Importance of Delexicalization for Fact Verification
This work investigates the importance that a model assigns to various aspects of data while learning and making predictions, specifically, in a recognizing textual entailment (RTE) task, and finds that most of the weights are assigned to noun phrases.
Sanity Check: A Strong Alignment and Information Retrieval Baseline for Question Answering
An unsupervised, simple, and fast alignment and information retrieval baseline is proposed that incorporates two novel contributions: a one-to-many alignment between query and document terms, and negative alignment as a proxy for discriminative information.
The phonetic specificity of contrastive hyperarticulation in natural speech
Eidos, INDRA, & Delphi: From Free Text to Executable Causal Models
This paper introduces an approach that builds executable probabilistic models from raw free text using Eidos, an open-domain machine reading system designed to extract causal relations from natural language, together with INDRA and Delphi.
Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification
- Rebecca Sharp, M. Surdeanu, Peter Alexander Jansen, M. A. Valenzuela-Escarcega, Peter Clark, Michael Hammond
- Computer Science, CoNLL
- 1 August 2017
A neural network architecture for QA that reranks answer justifications as an intermediate (and human-interpretable) step in answer selection and shows that with this end-to-end approach it is able to significantly improve upon a strong IR baseline in both justification ranking and answer selection.