Publications
CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge
TLDR
This work presents CommonsenseQA: a challenging new dataset for commonsense question answering, which extracts from ConceptNet multiple target concepts that have the same semantic relation to a single source concept.
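As a rough, hypothetical sketch of the distractor-extraction idea (the ConceptNet-style triples and the minimum group size below are illustrative assumptions, not the paper's actual pipeline), one can group target concepts that share the same relation to a single source concept:

```python
from collections import defaultdict

# Toy ConceptNet-style triples: (source concept, relation, target concept).
# These edges are made up for illustration, not taken from ConceptNet.
edges = [
    ("river", "AtLocation", "valley"),
    ("river", "AtLocation", "canyon"),
    ("river", "AtLocation", "bridge"),
    ("river", "AtLocation", "waterfall"),
    ("book", "UsedFor", "reading"),
]

def group_targets(edges, min_targets=3):
    """Collect target concepts that share one (source, relation) pair,
    mimicking how an answer concept and its distractors could be drawn
    from the same semantic neighborhood."""
    groups = defaultdict(list)
    for source, relation, target in edges:
        groups[(source, relation)].append(target)
    return {key: targets for key, targets in groups.items()
            if len(targets) >= min_targets}

for (source, relation), targets in group_targets(edges).items():
    # One target can serve as the answer and the rest as distractors
    # for a crowdsourced question about the source concept.
    print(f"{source} --{relation}--> {targets}")
```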
TaPas: Weakly Supervised Table Parsing via Pre-training
TLDR
TaPas is presented, an approach to question answering over tables without generating logical forms; it outperforms or rivals semantic parsing models, improving state-of-the-art accuracy on SQA and performing on par with the state of the art on WikiSQL and WikiTQ, with a simpler model architecture.
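As a hedged usage sketch (the checkpoint name, the example table, and the question below are assumptions for illustration; they are not taken from the paper), a TAPAS-style model can be queried through the HuggingFace transformers library roughly as follows:

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

# Assumed checkpoint; any TAPAS model fine-tuned for table QA would do here.
model_name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# The raw table is consumed directly; cells are passed as strings.
table = pd.DataFrame({"City": ["Paris", "Berlin"],
                      "Population": ["2.1M", "3.6M"]})
inputs = tokenizer(table=table, queries=["Which city has 3.6M people?"],
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Instead of a logical form, the model scores table cells (outputs.logits)
# and an aggregation operator such as SUM or COUNT (outputs.logits_aggregation).
```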
TalkSumm: A Dataset and Scalable Annotation Method for Scientific Paper Summarization Based on Conference Talks
TLDR
This paper proposes a novel method that automatically generates summaries for scientific papers, by utilizing videos of talks at scientific conferences, and hypothesizes that such talks constitute a coherent and concise description of the papers’ content, and can form the basis for good summaries.
Don’t paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing
TLDR
A new data collection approach is proposed that combines crowdsourcing with a paraphrase model to detect correct logical forms for unlabeled utterances, and the mismatches that arise when training data is collected via paraphrasing are quantified.
An author-reader influence model for detecting topic-based influencers in social media
TLDR
A novel behavioral model of authors and readers is devised, in which authors try to influence readers by generating "attractive" content, i.e., content that is both relevant and aligned with the topical interests of users.
Span-based Semantic Parsing for Compositional Generalization
TLDR
This work proposes SpanBasedSP, a parser that predicts a span tree over an input utterance, explicitly encoding how partial programs compose over spans in the input, which performs similarly to strong seq2seq baselines on random splits, but dramatically improves performance on splits that require compositional generalization.
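To make the span-tree idea concrete, here is a minimal hand-built sketch; the utterance, program syntax, and span annotations are hypothetical and only illustrate how partial programs can be attached to spans and composed by their parents:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Span:
    start: int                  # token index, inclusive
    end: int                    # token index, exclusive
    program: str                # partial program attached to this span
    children: List["Span"] = field(default_factory=list)

tokens = ["largest", "river", "in", "texas"]

# The root span composes the partial programs of its children.
tree = Span(0, 4, "argmax(river(loc(texas)), size)", [
    Span(0, 1, "argmax(_, size)"),
    Span(1, 2, "river(_)"),
    Span(3, 4, "texas"),
])

def show(span, depth=0):
    """Print each span with its covered text and partial program."""
    text = " ".join(tokens[span.start:span.end])
    print("  " * depth + f"[{text}] -> {span.program}")
    for child in span.children:
        show(child, depth + 1)

show(tree)
```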
Unlocking Compositional Generalization in Pre-trained Models Using Intermediate Representations
TLDR
It is highlighted that intermediate representations provide an important and potentially overlooked degree of freedom for improving the compositional generalization abilities of pre-trained seq2seq models.
Neural Semantic Parsing over Multiple Knowledge-bases
TLDR
This paper finds that parsing accuracy can be substantially improved by training a single sequence-to-sequence model over multiple KBs, when an encoding of the domain is provided at decoding time.
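A minimal sketch of one way to realize "an encoding of the domain at decoding time" is to prefix each target logical form with a domain token, so a single shared decoder can condition on the knowledge base; the domains, utterances, and logical forms below are toy examples, not drawn from the paper:

```python
def add_domain_token(utterance, logical_form, domain):
    """Prefix the target side with a domain marker so one shared
    sequence-to-sequence model can serve several knowledge bases."""
    return utterance, f"@{domain} {logical_form}"

# Hypothetical training pairs from two different KBs.
examples = [
    ("show flights from boston to denver",
     "( lambda $0 ( and ( flight $0 ) ( from $0 boston ) ( to $0 denver ) ) )",
     "atis"),
    ("what rivers run through texas",
     "answer ( river ( traverse_2 ( stateid ( texas ) ) ) )",
     "geoquery"),
]

for utterance, logical_form, domain in examples:
    source, target = add_domain_token(utterance, logical_form, domain)
    print(source, "=>", target)
```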
Neural Response Generation for Customer Service based on Personality Traits
TLDR
A neural response generation model that generates responses conditioned on a target personality that achieves performance improvements in both perplexity and BLEU scores over a baseline sequence-to-sequence model, and is validated by human judges.
Classifying Emotions in Customer Support Dialogues in Social Media
TLDR
It is shown that, in addition to text based turn features, dialogue features can significantly improve detection of emotions in social media customer service dialogues and help predict emotional techniques used by customer service agents.