Semi-supervised Question Retrieval with Gated Convolutions

@inproceedings{Lei2015SemisupervisedQR,
  title={Semi-supervised Question Retrieval with Gated Convolutions},
  author={Tao Lei and Hrishikesh Joshi and Regina Barzilay and T. Jaakkola and K. Tymoshenko and Alessandro Moschitti and Llu{\'i}s M{\`a}rquez i Villodre},
  booktitle={North American Chapter of the Association for Computational Linguistics},
  year={2016}
}
Question answering forums are rapidly growing in size with no effective automated ability to refer to and reuse answers already available for previously posted questions. Key Method: We design a recurrent and convolutional model (gated convolution) to effectively map questions to their semantic representations. The models are pre-trained within an encoder-decoder framework (from body to title) on the basis of the entire raw corpus, and fine-tuned discriminatively from limited annotations. Our evaluation…
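
As a rough illustration of the gated-convolution idea, the NumPy sketch below mixes a running state with each token's projection through an input-dependent gate and mean-pools the hidden states into a question vector. It is a minimal single-layer sketch under assumed parameter names and toy dimensions, not the paper's exact formulation; the paper stacks such layers and pre-trains them in a body-to-title encoder-decoder before discriminative fine-tuning.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_conv_encode(X, W, Wg, b, bg):
    """Encode a sentence X (seq_len x d_in) into a single vector.

    Each step mixes the running state with the current token's projection,
    using an input-dependent gate lam_t in (0, 1) -- a minimal stand-in for
    the adaptive gated decay described in the paper (assumed form).
    """
    d_out = W.shape[1]
    c = np.zeros(d_out)
    states = []
    for x_t in X:
        lam = sigmoid(x_t @ Wg + bg)                # gate from the current token
        c = lam * c + (1.0 - lam) * (x_t @ W + b)   # gated running convolution
        states.append(np.tanh(c))
    # mean-pool hidden states into the question representation
    return np.mean(states, axis=0)

# toy usage: 5 tokens with 8-dim embeddings, 16-dim hidden state
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
W, Wg = rng.normal(size=(8, 16)) * 0.1, rng.normal(size=(8, 16)) * 0.1
b, bg = np.zeros(16), np.zeros(16)
q_vec = gated_conv_encode(X, W, Wg, b, bg)
print(q_vec.shape)  # (16,)
```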

Citations

Joint Multitask Learning for Community Question Answering Using Task-Specific Embeddings

This work uses deep neural networks to learn meaningful task-specific embeddings, which it incorporates into a conditional random field model for the multitask setting, performing joint learning over a complex graph structure.

One-shot Learning for Question-Answering in Gaokao History Challenge

This work proposes a hybrid neural model for the deep question-answering task from history examinations that employs a cooperative gated neural network to retrieve answers with the assistance of extra labels given by a Neural Turing Machine labeler.

Multi-Stage Pretraining for Low-Resource Domain Adaptation

Transfer learning techniques are particularly useful in NLP tasks where a sizable amount of high-quality annotated data is difficult to obtain. Current approaches directly adapt a pre-trained …

Question Retrieval for Community-based Question Answering via Heterogeneous Network Integration Learning

A novel framework named HNIL is proposed which encodes not only the question contents but also the askers' social interactions to enhance the question embedding performance, and applies a random walk based learning method with a recurrent neural network to match the similarities between the asker's question and historical questions proposed by other users.
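
A minimal sketch of the random-walk side of such heterogeneous network learning follows; the toy graph, node names, and DeepWalk-style sampling are illustrative assumptions rather than the paper's exact procedure. Walks alternate between question and asker nodes so the resulting sequences expose social links as context for an embedding or recurrent model.

```python
import random

def random_walks(graph, walk_len=10, walks_per_node=5, seed=0):
    """Sample fixed-length random walks over a heterogeneous graph.

    `graph` maps a node (question or asker id) to its neighbours; walks mix
    question and asker nodes, so the sampled sequences can be fed to an
    embedding or sequence model downstream.
    """
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len and graph[walk[-1]]:
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

# toy heterogeneous graph: questions q* linked to the askers u* who posted them
toy_graph = {
    "q1": ["u1"], "q2": ["u1", "u2"], "q3": ["u2"],
    "u1": ["q1", "q2"], "u2": ["q2", "q3"],
}
print(random_walks(toy_graph, walk_len=4, walks_per_node=1))
```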

ABiRCNN with neural tensor network for answer selection

An attention-based bidirectional gated convolution with a neural tensor network (ABiRCNN+NTN), which improves the representations of both questions and answers, models their interactions with a neural tensor network, and achieves new state-of-the-art results on the InsuranceQA dataset.
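
For reference, here is a minimal NumPy sketch of the neural tensor network scoring component in its standard form (a bilinear tensor plus a linear layer); the dimensions and the choice of tanh are assumptions, and the encoder producing the question and answer vectors is omitted.

```python
import numpy as np

def ntn_score(q, a, W, V, b, u):
    """Neural tensor network matching score between two sentence vectors.

    q, a : (d,) question / answer representations (e.g. from a sentence encoder)
    W    : (k, d, d) tensor of bilinear slices
    V    : (k, 2d) standard-layer weights, b : (k,), u : (k,)
    """
    bilinear = np.einsum("i,kij,j->k", q, W, a)      # q^T W_[1..k] a
    standard = V @ np.concatenate([q, a]) + b        # V [q; a] + b
    return u @ np.tanh(bilinear + standard)

rng = np.random.default_rng(0)
d, k = 16, 4
q, a = rng.normal(size=d), rng.normal(size=d)
W = rng.normal(size=(k, d, d)) * 0.1
V = rng.normal(size=(k, 2 * d)) * 0.1
b, u = np.zeros(k), rng.normal(size=k)
print(ntn_score(q, a, W, V, b, u))
```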

Principle-to-Program: Neural Methods for Similar Question Retrieval in Online Communities

Hands-on proposal will introduce each concept from end user and technique and present state of the art methods and a walkthrough of programs executed on Jupyter notebook using real-world datasets demonstrating principles introduced.

Semi-supervised Extractive Question Summarization Using Question-Answer Pairs

This paper proposes a framework to examine how to use such unlabeled paired data from the viewpoint of training methods and shows that multi-task training performs well with undersampling and distant supervision.

Neural Duplicate Question Detection without Labeled Training Data

This work proposes two novel methods, weak supervision using the title and body of a question and the automatic generation of duplicate questions, and shows that both can achieve improved performance even though they do not require any labeled data.
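
A toy sketch of the title/body weak-supervision idea follows; the field names and the simple negative-sampling scheme are assumptions made for illustration. Each post's title and body form a positive (duplicate-like) pair, and mismatched title/body combinations serve as negatives.

```python
def weak_duplicate_pairs(posts):
    """Build weakly labeled 'duplicate' pairs from unlabeled forum posts.

    The title and the body of the same post are treated as a positive pair,
    while a title paired with another post's body serves as a negative.
    """
    positives, negatives = [], []
    for i, post in enumerate(posts):
        positives.append((post["title"], post["body"], 1))
        other = posts[(i + 1) % len(posts)]
        negatives.append((post["title"], other["body"], 0))
    return positives + negatives

posts = [
    {"title": "How do I reset my router?",
     "body": "My router stopped responding after a firmware update..."},
    {"title": "Python list comprehension syntax",
     "body": "I am confused about nested comprehensions..."},
]
for title, body, label in weak_duplicate_pairs(posts):
    print(label, title, "<->", body[:30])
```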

Semi-Supervised Extractive Question Summarizer Using Question-Answer Pairs and its Learning Methods

We treat extractive summarization for questions. Neural extractive summarizers often require a large amount of labeled training data. Obtaining such labels is difficult, especially for user-generated content, such …

A question-entailment approach to question answering

A novel QA approach based on Recognizing Question Entailment (RQE), which exceeds the best results on the medical task with a 29.8% increase over the best official score and highlights the effectiveness of combining IR and RQE for future QA efforts.
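
A minimal sketch of combining IR retrieval scores with RQE (entailment) scores when reranking candidate questions; the linear interpolation, the weight alpha, and the assumption that both scores are normalized are illustrative choices, not the paper's method.

```python
def rerank_with_rqe(candidates, alpha=0.6):
    """Rerank IR-retrieved candidates by mixing IR and entailment (RQE) scores.

    candidates: dicts with 'question', 'ir_score', and 'rqe_score'
    (both assumed to lie in [0, 1]); alpha weights the IR component.
    """
    scored = [(alpha * c["ir_score"] + (1 - alpha) * c["rqe_score"], c["question"])
              for c in candidates]
    return [q for _, q in sorted(scored, reverse=True)]

candidates = [
    {"question": "What are the side effects of ibuprofen?", "ir_score": 0.9, "rqe_score": 0.4},
    {"question": "Can ibuprofen cause stomach pain?", "ir_score": 0.7, "rqe_score": 0.9},
]
print(rerank_with_rqe(candidates))
```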
...

References

SHOWING 1-10 OF 37 REFERENCES

LSTM-based Deep Learning Models for non-factoid answer selection

A general deep learning framework is applied to the answer selection task; it does not depend on manually defined features or linguistic tools, and is extended in two directions to define a more composite representation for questions and answers.
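
A compact NumPy sketch of the basic QA-LSTM setup described here: question and candidate answers are encoded with shared LSTM weights and ranked by cosine similarity. Using a single LSTM instead of a bidirectional one, mean pooling, and the toy dimensions are simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(X, Wx, Wh, b):
    """Run a single LSTM over token embeddings X (seq_len x d) and mean-pool.

    Wx: (d, 4h), Wh: (h, 4h), b: (4h,) -- gates ordered [input, forget, output, cell].
    """
    h_dim = Wh.shape[0]
    h, c, outputs = np.zeros(h_dim), np.zeros(h_dim), []
    for x_t in X:
        z = x_t @ Wx + h @ Wh + b
        i = sigmoid(z[:h_dim])
        f = sigmoid(z[h_dim:2 * h_dim])
        o = sigmoid(z[2 * h_dim:3 * h_dim])
        g = np.tanh(z[3 * h_dim:])
        c = f * c + i * g
        h = o * np.tanh(c)
        outputs.append(h)
    return np.mean(outputs, axis=0)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

# rank candidate answers for a question by cosine similarity of LSTM encodings
rng = np.random.default_rng(0)
d, h = 8, 12
Wx, Wh, b = rng.normal(size=(d, 4 * h)) * 0.1, rng.normal(size=(h, 4 * h)) * 0.1, np.zeros(4 * h)
question = rng.normal(size=(6, d))
answers = [rng.normal(size=(n, d)) for n in (5, 9, 7)]
q_vec = lstm_encode(question, Wx, Wh, b)
scores = [cosine(q_vec, lstm_encode(a, Wx, Wh, b)) for a in answers]
print(np.argsort(scores)[::-1])  # candidate answer indices, best first
```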

Learning Continuous Word Embedding with Metadata for Question Retrieval in Community Question Answering

This paper proposes to learn continuous word embeddings with category metadata from cQA pages for question retrieval, using the Fisher kernel framework to deal with the variable number of word embedding vectors per question.

Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks

This paper presents a convolutional neural network architecture for reranking pairs of short texts, which learns the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data.
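
The sketch below illustrates the general recipe (convolution plus max pooling per text, a learned bilinear similarity, and a joint scoring layer); the filter width, the dimensions, and the exact way the pieces are combined are assumptions rather than the paper's architecture.

```python
import numpy as np

def conv_maxpool(X, filters, width=3):
    """1-D convolution over token embeddings X (seq_len x d) + max pooling.

    filters: (n_filters, width * d). Returns an (n_filters,) text vector.
    """
    seq_len, d = X.shape
    windows = np.stack([X[i:i + width].reshape(-1) for i in range(seq_len - width + 1)])
    return np.max(np.tanh(windows @ filters.T), axis=0)

def pair_score(q_vec, a_vec, M, w, b):
    """Supervised matching: bilinear similarity q^T M a plus a joint scoring layer."""
    sim = q_vec @ M @ a_vec
    joint = np.concatenate([q_vec, [sim], a_vec])
    return joint @ w + b

rng = np.random.default_rng(0)
d, n_filters = 8, 10
filters = rng.normal(size=(n_filters, 3 * d)) * 0.1
M = rng.normal(size=(n_filters, n_filters)) * 0.1
w, b = rng.normal(size=2 * n_filters + 1) * 0.1, 0.0
q = conv_maxpool(rng.normal(size=(7, d)), filters)
a = conv_maxpool(rng.normal(size=(9, d)), filters)
print(pair_score(q, a, M, w, b))
```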

A Convolutional Neural Network for Modelling Sentences

A convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) is described that is adopted for the semantic modelling of sentences and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations.
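
The distinctive operation in the DCNN is (dynamic) k-max pooling, sketched below in NumPy. The interpolation formula for k follows the commonly cited DCNN recipe; the toy shapes are assumptions.

```python
import math
import numpy as np

def kmax_pool(X, k):
    """k-max pooling: keep the k largest values per feature column, in order.

    X: (seq_len, n_features). Returns (k, n_features).
    """
    idx = np.argsort(X, axis=0)[-k:, :]    # indices of the k largest per column
    idx = np.sort(idx, axis=0)             # restore original word order
    return np.take_along_axis(X, idx, axis=0)

def dynamic_k(layer, total_layers, seq_len, k_top=3):
    """Layer-dependent k: interpolates from the sentence length down to k_top."""
    return max(k_top, math.ceil((total_layers - layer) / total_layers * seq_len))

X = np.random.default_rng(0).normal(size=(9, 4))    # 9 tokens, 4 feature maps
k = dynamic_k(layer=1, total_layers=2, seq_len=9)   # k = max(3, ceil(9/2)) = 5
print(kmax_pool(X, k).shape)                         # (5, 4)
```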

A Neural Attention Model for Abstractive Sentence Summarization

This work proposes a fully data-driven approach to abstractive sentence summarization by utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.
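
A minimal NumPy sketch of the local attention component: the model attends over input word embeddings conditioned on the recently generated summary words. The bilinear scoring form and the context construction are assumptions, and the output softmax over the vocabulary is omitted.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attention_context(X, y_context, P):
    """Attention-weighted encoding of the input, conditioned on the summary so far.

    X: (n, d) input word embeddings; y_context: (d,) embedding of the recent
    summary words; P: (d, d) parameter matrix. The next summary word would be
    predicted from this context (output layer omitted here).
    """
    scores = X @ (P @ y_context)       # one score per input word
    alpha = softmax(scores)            # attention distribution over the input
    return alpha @ X, alpha            # context vector and the weights

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))            # 6 input words, 8-dim embeddings
y_ctx = rng.normal(size=8)             # e.g. average of the last few generated words
P = rng.normal(size=(8, 8)) * 0.1
ctx, alpha = attention_context(X, y_ctx, P)
print(alpha.round(3), ctx.shape)
```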

A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering

The proposed method uses a stacked bidirectional Long Short-Term Memory network to sequentially read words from question and answer sentences and then output their relevance scores; it outperforms previous work that requires syntactic features and external knowledge resources.

Word Embedding Based Correlation Model for Question/Answer Matching

A Word Embedding based Correlation (WEC) model is proposed by integrating the advantages of both the translation model and word embeddings; it can score the co-occurrence probability of words in Q&A pairs and leverage the continuity and smoothness of continuous-space word representations.
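
A toy sketch of a word-level correlation score in the spirit of WEC; the bilinear form and the mean aggregation are assumptions for illustration, whereas the actual model estimates co-occurrence probabilities of question and answer words.

```python
import numpy as np

def wec_style_score(q_vecs, a_vecs, M):
    """Word-embedding correlation score between a question and an answer.

    For every (question word, answer word) pair, a bilinear form w_q^T M w_a
    approximates how likely the two words are to co-occur in Q&A pairs;
    word-level scores are averaged into a sentence-level matching score.
    """
    pair_scores = q_vecs @ M @ a_vecs.T        # (|q|, |a|) correlation matrix
    return pair_scores.mean()

rng = np.random.default_rng(0)
d = 8
q_vecs = rng.normal(size=(5, d))               # embeddings of question words
a_vecs = rng.normal(size=(7, d))               # embeddings of answer words
M = rng.normal(size=(d, d)) * 0.1
print(wec_style_score(q_vecs, a_vecs, M))
```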

Molding CNNs for text: non-linear, non-consecutive convolutions

This work revises the temporal convolution operation in CNNs to better adapt it to text processing by appealing to tensor algebra and using low-rank n-gram tensors to directly exploit interactions between words already at the convolution stage.
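
A minimal sketch of a non-consecutive (decayed) bigram convolution of this kind, using a running sum so that all skipped-word pairs are covered in linear time. The parameter names, the decay value, and the final aggregation are assumptions, not the paper's exact formulation.

```python
import numpy as np

def nonconsecutive_bigram_conv(X, W1, W2, lam=0.7):
    """Non-consecutive bigram convolution with distance decay (minimal sketch).

    Every (possibly non-adjacent) word pair contributes a bigram feature
    weighted by lam**gap, accumulated via a running sum so the cost stays
    linear in the sentence length.
    """
    run = np.zeros(W1.shape[1])     # decayed sum of earlier first-word projections
    feats = []
    for x_t in X:
        feats.append(run * (x_t @ W2))      # pair earlier words with the current one
        run = lam * run + (x_t @ W1)        # decay, then add the current word
    return np.tanh(np.sum(feats, axis=0))   # aggregate into a sentence vector

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
W1, W2 = rng.normal(size=(8, 16)) * 0.1, rng.normal(size=(8, 16)) * 0.1
print(nonconsecutive_bigram_conv(X, W1, W2).shape)   # (16,)
```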

Applying deep learning to answer selection: A study and an open task

A general deep learning framework is applied to address the non-factoid question answering task; it demonstrates superior performance compared to the baseline methods, and various technologies give further improvements.

Improving Question Retrieval in Community Question Answering Using World Knowledge

This work proposes a way to build a concept thesaurus based on the semantic relations extracted from the world knowledge of Wikipedia and develops a unified framework to leverage these semantic relations in order to enhance the question similarity in the concept space.
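
A toy sketch of measuring question similarity in a concept space derived from such a thesaurus; the thesaurus entries and the Jaccard measure are illustrative assumptions, while the paper integrates the Wikipedia-derived relations into a fuller retrieval framework.

```python
def concept_expand(tokens, thesaurus):
    """Map question words to concepts using a (toy) Wikipedia-derived thesaurus."""
    concepts = set()
    for tok in tokens:
        concepts.update(thesaurus.get(tok, {tok}))
    return concepts

def concept_similarity(q1, q2, thesaurus):
    """Jaccard similarity of two questions in the expanded concept space."""
    c1, c2 = concept_expand(q1, thesaurus), concept_expand(q2, thesaurus)
    return len(c1 & c2) / len(c1 | c2)

# hypothetical thesaurus entries: surface words mapped to shared concepts
thesaurus = {"laptop": {"computer"}, "notebook": {"computer"}}
print(concept_similarity(["laptop", "battery"], ["notebook", "battery"], thesaurus))  # 1.0
```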