Creating Causal Embeddings for Question Answering with Minimal Supervision

@article{Sharp2016CreatingCE,
  title={Creating Causal Embeddings for Question Answering with Minimal Supervision},
  author={Rebecca Sharp and Mihai Surdeanu and Peter Jansen and Peter Clark and Michael Hammond},
  journal={ArXiv},
  year={2016},
  volume={abs/1609.08097}
}
A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps…
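
As a rough illustration of the baseline the abstract describes (not the authors' implementation), the sketch below scores a candidate answer by the cosine similarity of averaged word embeddings; the embedding table and helper names are hypothetical, and the paper's proposal amounts to swapping in task-specific (causal) embeddings for the general-purpose ones.

import numpy as np

def embed(text, embeddings, dim=300):
    # Average the vectors of all in-vocabulary tokens (zero vector if none).
    vectors = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def relatedness(question, answer, embeddings):
    # Cosine similarity between the averaged question and answer representations.
    q, a = embed(question, embeddings), embed(answer, embeddings)
    denom = np.linalg.norm(q) * np.linalg.norm(a)
    return float(q @ a / denom) if denom else 0.0

# `embeddings` is a hypothetical dict mapping words to vectors; using causal
# (task-specific) embeddings instead of general-purpose ones changes only
# the table passed in, not the scoring code.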

Citations

Publications citing this paper (selected from 19 indexed citations):

Distributed Representation of Words in Cause and Effect Spaces


Weakly Supervised Multilingual Causality Extraction from Wikipedia


Finding Component State Transition Model Elements Using Neural Networks: An Empirical Study


References

Publications referenced by this paper (selected from 44 references):

Higher-order Lexical Semantic Models for Non-factoid Answer Reranking


F. Chollet. Keras. https://github.com/fchollet/keras, 2015.