SemEval-2017 Task 3: Community Question Answering

@inproceedings{Nakov2017SemEval2017T3,
  title={SemEval-2017 Task 3: Community Question Answering},
  author={Preslav Nakov and Doris Hoogeveen and Llu{\'i}s M{\`a}rquez i Villodre and Alessandro Moschitti and Hamdy Mubarak and Timothy Baldwin and Karin M. Verspoor},
  booktitle={Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)},
  year={2017}
}
We describe SemEval-2017 Task 3 on Community Question Answering. This year, we reran the four subtasks from SemEval-2016: (A) Question-Comment Similarity, (B) Question-Question Similarity, (C) Question-External Comment Similarity, and (D) Reranking the correct answers for a new question in Arabic, providing all the data from 2015 and 2016 for training, and fresh data for testing. Additionally, we added a new Subtask E in order to enable experimentation with Multi-domain Question Duplicate…
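
Subtasks A, B, and C are reranking problems, and the task's official ranking measure is Mean Average Precision (MAP) over the system-ranked candidate lists. As a minimal sketch of how such rankings are scored (an illustration, not the organizers' scorer), in Python:

def average_precision(ranked_labels):
    """AP for one question: ranked_labels holds 0/1 relevance flags
    in the order the system ranked the candidates."""
    hits, precision_sum = 0, 0.0
    for i, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            precision_sum += hits / i
    return precision_sum / hits if hits else 0.0

def mean_average_precision(all_ranked_labels):
    """MAP: mean of per-question AP over the whole test set."""
    return sum(map(average_precision, all_ranked_labels)) / len(all_ranked_labels)

# Two toy questions, each with a system-ranked list of candidates.
print(mean_average_precision([[1, 0, 1], [0, 1]]))  # ~0.667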

Citations

ECNU at SemEval-2017 Task 3: Using Traditional and Deep Learning Methods to Address Community Question Answering Task

This paper describes the systems submitted to Task 3 (Community Question Answering) of SemEval-2017, which contains three subtasks on English corpora: Subtask A (Question-Comment Similarity), Subtask B (Question-Question Similarity), and Subtask C (Question-External Comment Similarity).

NLM_NIH at SemEval-2017 Task 3: from Question Entailment to Question Similarity for Community Question Answering

TLDR
The authors' feature-based system for Recognizing Question Entailment (RQE) was adapted to the question similarity task and outperformed the best system of the 2016 challenge in all measures.

KeLP at SemEval-2017 Task 3: Learning Pairwise Patterns in Community Question Answering

TLDR
The KeLP system participating in the SemEval-2017 Community Question Answering (cQA) task shows that the proposed framework, which has minor variations among the three subtasks, is extremely flexible and effective in tackling learning tasks defined on sentence pairs.

MoRS at SemEval-2017 Task 3: Easy to use SVM in Ranking Tasks

TLDR
This paper describes the system, dubbed MoRS (Modular Ranking System, pronounced ‘Morse’), which participated in Task 3 of SemEval-2017, a task that consisted of reordering a set of comments according to their usefulness in answering the question in the thread.

TakeLab-QA at SemEval-2017 Task 3: Classification Experiments for Answer Retrieval in Community QA

This paper presents the TakeLab-QA entry to SemEval-2017 Task 3, a question-comment re-ranking problem, using a classification-based approach that includes two supervised learning…

IIT-UHH at SemEval-2017 Task 3: Exploring Multiple Features for Community Question Answering and Implicit Dialogue Identification

TLDR
A Support Vector Machine (SVM) based system that makes use of textual, domain-specific, word-embedding and topic-modeling features, and a novel method for dialogue chain identification in comment threads is proposed.
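
As a rough illustration of this kind of feature-combination pipeline (a sketch with random stand-in features, not the IIT-UHH system itself), the heterogeneous feature groups can be concatenated per question-comment pair and fed to a linear SVM whose decision scores induce the comment ranking:

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical pre-computed feature groups, one row per question-comment pair
# (group names follow the paper's description; contents are random stand-ins).
textual   = rng.random((200, 5))     # textual-overlap features
domain    = rng.random((200, 3))     # domain-specific features
embedding = rng.random((200, 50))    # word-embedding features
topics    = rng.random((200, 10))    # topic-model features
labels    = rng.integers(0, 2, 200)  # 1 = "Good" comment, 0 = otherwise

X = np.hstack([textual, domain, embedding, topics])
clf = LinearSVC(C=1.0).fit(X, labels)

# Rank one thread's comments (here, the first 10 rows) by SVM decision score.
scores = clf.decision_function(X[:10])
ranking = np.argsort(-scores)  # best-first order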

bunji at SemEval-2017 Task 3: Combination of Neural Similarity Features and Comment Plausibility Features

TLDR
A text-ranking system developed by the bunji team for SemEval-2017 Task 3 (Community Question Answering), Subtasks A and C, is proposed; it combines neural similarity features with hand-crafted comment plausibility features, and the relationship between comments is modeled using a conditional random field.
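
A minimal sketch of the sequence-labeling idea (using the sklearn-crfsuite package with invented feature names, not the bunji implementation): each thread is treated as a sequence of comments, so the CRF lets the label of one comment depend on its neighbours.

import sklearn_crfsuite  # pip install sklearn-crfsuite

# One thread = one sequence; each comment gets a feature dict mixing a
# (hypothetical) neural similarity score with plausibility cues.
threads = [
    [{"neural_sim": 0.91, "has_url": True, "length": 42},
     {"neural_sim": 0.12, "has_url": False, "length": 7},
     {"neural_sim": 0.55, "has_url": False, "length": 19}],
]
labels = [["Good", "Bad", "Good"]]  # gold label per comment

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(threads, labels)
print(crf.predict(threads))  # per-comment labels for each thread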

EICA Team at SemEval-2017 Task 3: Semantic and Metadata-based Features for Community Question Answering

TLDR
A system for SemEval-2017 Task 3 on Community Question Answering that combines a rich set of semantic and metadata features; the most effective of these turned out to be the metadata features and the semantic vectors trained on QatarLiving data.

Beihang-MSRA at SemEval-2017 Task 3: A Ranking System with Neural Matching Features for Community Question Answering

TLDR
This paper develops a ranking system capable of capturing semantic relations between text pairs with little word overlap, and introduces several neural network-based matching features that enable the system to measure text similarity beyond the lexical level.
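
One simple instance of a matching feature that goes beyond lexical overlap (a toy sketch with made-up 2-dimensional vectors, not the Beihang-MSRA models) is the cosine similarity of averaged word embeddings, which can be high even when two texts share no words:

import numpy as np

# Toy vectors; real systems would load pre-trained embeddings instead.
vectors = {
    "car": np.array([0.9, 0.1]), "automobile": np.array([0.85, 0.15]),
    "buy": np.array([0.2, 0.8]), "purchase": np.array([0.25, 0.75]),
}

def avg_vector(tokens):
    return np.mean([vectors[t] for t in tokens if t in vectors], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Zero word overlap, yet near-perfect embedding similarity.
print(cosine(avg_vector(["buy", "car"]), avg_vector(["purchase", "automobile"])))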

Talla at SemEval-2017 Task 3: Identifying Similar Questions Through Paraphrase Detection

TLDR
This paper describes an approach to the SemEval-2017 shared task of determining question-question similarity in a community question-answering setting: both syntactic and semantic similarity features were extracted between candidate questions, and a random forest classifier was trained to predict whether the candidate questions were paraphrases of each other.
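
A minimal sketch of that setup (with random stand-in features, not Talla's actual feature set): a random forest is trained on pairwise similarity features, and its predicted paraphrase probability can double as the question-question similarity score.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical pairwise features between two candidate questions, e.g. token
# overlap, edit distance, embedding cosine (random stand-ins here).
X = rng.random((500, 8))     # one row per question pair
y = rng.integers(0, 2, 500)  # 1 = paraphrase, 0 = not

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The predicted paraphrase probability serves as a similarity score
# for ranking candidate questions against the original question.
similarity = clf.predict_proba(X[:5])[:, 1]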
...

References


SemEval-2016 Task 3: Community Question Answering

TLDR
This paper describes SemEval-2016 Task 3 on Community Question Answering, which was offered in English and Arabic; the best systems achieved official scores of 79.19, 76.70, 55.41, and 45.83 in Subtasks A, B, C, and D, respectively.

SemEval-2015 Task 3: Answer Selection in Community Question Answering

TLDR
SemEval-2015 Task 3 on Answer Selection in cQA included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES/NO question with yes, no, or unsure, based on the list of all answers.

ECNU at SemEval-2017 Task 3: Using Traditional and Deep Learning Methods to Address Community Question Answering Task

This paper describes the systems submitted to Task 3 (Community Question Answering) of SemEval-2017, which contains three subtasks on English corpora: Subtask A (Question-Comment Similarity), Subtask B (Question-Question Similarity), and Subtask C (Question-External Comment Similarity).

ECNU at SemEval-2016 Task 3: Exploring Traditional Method and Deep Learning Method for Question Retrieval and Answer Ranking in Community Question Answering

TLDR
This paper proposes two novel methods to improve semantic similarity estimation between question-question pairs by integrating the rank information of question-comment pairs, and implements a two-step strategy to select similar questions and filter out unrelated comments with respect to the original question.

NLM_NIH at SemEval-2017 Task 3: from Question Entailment to Question Similarity for Community Question Answering

TLDR
The authors' feature-based system for Recognizing Question Entailment (RQE) was adapted to the question similarity task and outperformed the best system of the 2016 challenge in all measures.

Overfitting at SemEval-2016 Task 3: Detecting Semantically Similar Questions in Community Question Answering Forums with Word Embeddings

TLDR
This paper presents an approach for estimating question-question similarity on the English dataset of SemEval-2016 Task 3, Subtask B, using a 2-layer feed-forward neural network over averaged word-embedding vectors to predict the semantic similarity score of two questions.
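
A minimal PyTorch sketch of that architecture (the hidden size, embedding dimension, and loss are assumptions, not the system's reported configuration):

import torch
import torch.nn as nn

EMB_DIM = 300  # assumed word-embedding size

# Input: concatenated averaged word-embedding vectors of the two questions.
model = nn.Sequential(
    nn.Linear(2 * EMB_DIM, 128),  # hidden layer
    nn.ReLU(),
    nn.Linear(128, 1),            # scalar similarity score
    nn.Sigmoid(),
)

q1_avg = torch.randn(32, EMB_DIM)  # batch of averaged embeddings
q2_avg = torch.randn(32, EMB_DIM)
target = torch.rand(32, 1)         # gold similarity scores in [0, 1]

pred = model(torch.cat([q1_avg, q2_avg], dim=1))
loss = nn.functional.mse_loss(pred, target)
loss.backward()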

KeLP at SemEval-2017 Task 3: Learning Pairwise Patterns in Community Question Answering

TLDR
The KeLP system participating in the SemEval-2017 Community Question Answering (cQA) task shows that the proposed framework, which has minor variations among the three subtasks, is extremely flexible and effective in tackling learning tasks defined on sentence pairs.

MoRS at SemEval-2017 Task 3: Easy to use SVM in Ranking Tasks

TLDR
This paper describes the system, dubbed MoRS (Modular Ranking System, pronounced ‘Morse’), which participated in Task 3 of SemEval-2017, a task that consisted of reordering a set of comments according to their usefulness in answering the question in the thread.

TakeLab-QA at SemEval-2017 Task 3: Classification Experiments for Answer Retrieval in Community QA

This paper presents the TakeLab-QA entry to SemEval-2017 Task 3, a question-comment re-ranking problem, using a classification-based approach that includes two supervised learning…

MTE-NN at SemEval-2016 Task 3: Can Machine Translation Evaluation Help Community Question Answering?

TLDR
A system for answer ranking (SemEval-2016 Task 3, Subtask A) that is a direct adaptation of a pairwise neural network model for machine translation evaluation (MTE), efficiently modeling complex non-linear interactions between the question and the candidate answer.
...