Spinning Straw into Gold: Using Free Text to Train Monolingual Alignment Models for Non-factoid Question Answering

@inproceedings{Sharp2015SpinningSI,
  title={Spinning Straw into Gold: Using Free Text to Train Monolingual Alignment Models for Non-factoid Question Answering},
  author={Rebecca Sharp and Peter Jansen and Mihai Surdeanu and Peter Clark},
  booktitle={HLT-NAACL},
  year={2015}
}
Monolingual alignment models have been shown to boost the performance of question answering systems by "bridging the lexical chasm" between questions and answers. The main limitation of these approaches is that they require semi-structured training data in the form of question-answer pairs, which is difficult to obtain in specialized domains or low-resource languages. We propose two inexpensive methods for training alignment models solely using free text, by generating artificial question-answer pairs.
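As an illustration of the idea described in the abstract, here is a minimal sketch (not the authors' implementation) of training an alignment model from free text alone: adjacent sentences are treated as artificial question-answer pairs, and a toy IBM Model 1-style translation table is estimated with a few EM iterations. The function names, the adjacent-sentence pairing heuristic, the tokenization, and the example sentences are all assumptions made for illustration.

# Illustrative sketch (not the paper's code): build artificial question-answer
# pairs from adjacent sentences in free text, then train a toy IBM Model 1-style
# monolingual alignment model over them.
from collections import defaultdict

def artificial_qa_pairs(sentences):
    """Shallow heuristic: pair each sentence ('question' side) with the
    sentence that follows it ('answer' side)."""
    tokenized = [s.lower().replace(".", "").replace(",", "").split() for s in sentences]
    return [(tokenized[i], tokenized[i + 1]) for i in range(len(tokenized) - 1)]

def train_alignment(pairs, iterations=10):
    """Toy IBM Model 1 EM trainer: estimates P(answer_word | question_word)."""
    q_vocab = {w for q, _ in pairs for w in q}
    uniform = 1.0 / max(len(q_vocab), 1)
    t = defaultdict(lambda: uniform)  # translation probabilities, uniform start
    for _ in range(iterations):
        counts = defaultdict(float)
        totals = defaultdict(float)
        for q_words, a_words in pairs:
            for a_w in a_words:
                norm = sum(t[(a_w, q_w)] for q_w in q_words)
                for q_w in q_words:
                    frac = t[(a_w, q_w)] / norm
                    counts[(a_w, q_w)] += frac
                    totals[q_w] += frac
        for (a_w, q_w), c in counts.items():
            t[(a_w, q_w)] = c / totals[q_w]
    return dict(t)

if __name__ == "__main__":
    free_text = [
        "Ice floats because it is less dense than water.",
        "Freezing expands the volume of water, lowering its density.",
        "Lower density objects rise to the surface of denser liquids.",
    ]
    model = train_alignment(artificial_qa_pairs(free_text))
    # How strongly 'density' on the answer side aligns with 'dense' on the question side.
    print(model.get(("density", "dense"), 0.0))

In practice the pairs would be generated from a large free-text corpus rather than a toy example, and the learned translation probabilities would serve as alignment features between questions and candidate answers.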
