Zero-shot Commonsense Question Answering with Cloze Translation and Consistency Optimization

@article{Dou2022ZeroshotCQ,
  title={Zero-shot Commonsense Question Answering with Cloze Translation and Consistency Optimization},
  author={Zi-Yi Dou and Nanyun Peng},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.00136}
}
Commonsense question answering (CQA) aims to test if models can answer questions regarding commonsense knowledge that everyone knows. Prior works that incorporate external knowledge bases have shown promising results, but knowledge bases are expensive to construct and are often limited to a fixed set of relations. In this paper, we instead focus on better utilizing the implicit knowledge stored in pre-trained language models. While researchers have found that the knowledge embedded in pre… 
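To make the recipe concrete, below is a minimal sketch of the core idea, translating a question into a cloze statement and letting a masked language model score the answer candidates at the blank. The hand-written cloze, the single-token answer assumption, and the choice of bert-base-uncased are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: answer a multiple-choice commonsense question by cloze
# translation + masked-LM scoring. Template and model are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

# Question: "What do people typically use to cut paper?"
# Cloze translation, done by hand here; the paper compares rule-based,
# syntactic, and learned translation strategies.
cloze = "People typically use [MASK] to cut paper."
candidates = ["scissors", "hammers", "spoons"]  # assumed single-token answers

inputs = tokenizer(cloze, return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]
log_probs = torch.log_softmax(logits, dim=-1)

# Score each candidate by its log-probability at the [MASK] position.
scores = {c: log_probs[tokenizer.convert_tokens_to_ids(c)].item()
          for c in candidates}
print(max(scores, key=scores.get))  # ideally "scissors"
```

The consistency-optimization half of the title refers to additionally training the model so its predictions agree across different cloze translations of the same question; that training loop is omitted from this sketch.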

Citations

ISD-QA: Iterative Distillation of Commonsense Knowledge from General Language Models for Unsupervised Question Answering

This paper proposes a novel framework called Iterative Self Distillation for QA (ISD-QA), which extracts the "dark knowledge" encoded during large-scale pre-training of language models to provide supervision for commonsense question answering by distilling knowledge from language models in an unsupervised manner.
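As a rough illustration of the self-distillation idea (not ISD-QA's published recipe), the model's own soft answer distributions become training targets for the next round. The temperature, the loop structure, and the `score_choices` helper are hypothetical.

```python
# Hedged sketch of iterative self-distillation: loss shape only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between temperature-softened teacher and student
    # distributions over the answer choices; the "dark knowledge" lives
    # in the non-argmax probabilities.
    t = temperature
    teacher = F.softmax(teacher_logits / t, dim=-1)
    student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * (t * t)

# One outer iteration (pseudocode): freeze the current model as teacher,
# train a fresh student on its soft labels, then promote the student.
# `score_choices` is a hypothetical helper returning one logit per
# answer choice for a batch of questions.
#
# with torch.no_grad():
#     teacher_logits = score_choices(teacher, batch)
# student_logits = score_choices(student, batch)
# loss = distillation_loss(student_logits, teacher_logits)
```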

An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs

This paper studies the effect of different synthetic datasets on language models of various architectures and sizes, showing that encoder-decoder models benefit from more data to learn from, whereas sampling strategies that balance across different aspects yield the best performance.

MICO: A Multi-alternative Contrastive Learning Framework for Commonsense Knowledge Representation

This paper proposes MICO, a Multi-alternative contrastive learning framework on COmmonsense knowledge graphs, to learn commonsense knowledge representation, and shows the effectiveness of the method.
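A hedged sketch of what multi-alternative contrastive learning over commonsense triples can look like: the verbalized (head, relation) query should score higher with the true tail than with sampled alternatives. The InfoNCE-style loss, temperature, and verbalization are assumptions, not MICO's exact design.

```python
# Hedged sketch of a multi-alternative contrastive objective.
import torch
import torch.nn.functional as F

def multi_alternative_loss(query_emb, tail_embs, positive_idx,
                           temperature=0.05):
    # query_emb: [d] encoding of the verbalized (head, relation);
    # tail_embs: [n, d] encodings of n candidate tails, one of them true.
    sims = F.cosine_similarity(query_emb.unsqueeze(0), tail_embs) / temperature
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([positive_idx]))

# A ConceptNet-style triple ("scissors", "UsedFor", "cutting paper")
# might be verbalized as:
#   query: "scissors is used for"
#   tails: ["cutting paper", "boiling water", "telling time"]  # index 0 true
# with embeddings produced by any sentence encoder.
```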

Just ClozE! A Fast and Simple Method for Evaluating the Factual Consistency in Abstractive Summarization

This paper demonstrates that ClozE can reduce evaluation time by nearly 96% relative to QA-based metrics while retaining their interpretability and performance, through experiments on six human-annotated datasets and the meta-evaluation benchmark GO FIGURE (Gabriel et al., 2020).
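In miniature, cloze-style factual-consistency checking masks a factual span in the summary, lets a masked LM conditioned on the source document fill it back in, and counts agreement. The span selection and string-match scoring below are simplifications, not ClozE's exact protocol.

```python
# Hedged sketch of cloze-based factual-consistency evaluation.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

document = "The meeting was held in Paris on Tuesday and chaired by Ana Silva."
summary = "The meeting took place in Paris."

# Mask a factual span (here the location entity) in the summary and let
# the LM restore it while conditioned on the source document.
masked = "The meeting took place in <mask>."
prediction = fill(document + " " + masked, top_k=1)[0]["token_str"].strip()

consistent = prediction.lower() == "paris"
print(prediction, consistent)
```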

Toxicity Detection with Generative Prompt-based Inference

This work explores the generative variant of zero-shot prompt-based toxicity detection with comprehensive trials on prompt engineering and highlights the strengths of its generative classification approach both quantitatively and qualitatively.
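The generative variant of zero-shot classification can be sketched as follows: wrap the input in a natural-language prompt and compare the LM's likelihood of the label verbalizers. The prompt wording, the "Yes"/"No" verbalizers, and the GPT-2 backbone are assumptions, not the paper's tuned setup.

```python
# Hedged sketch of generative prompt-based zero-shot classification.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def label_logprob(prompt: str, label: str) -> float:
    # Sum of log-probabilities of the label tokens given the prompt.
    full = tokenizer(prompt + label, return_tensors="pt").input_ids
    n_prompt = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(full).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full[0, 1:]
    span = range(n_prompt - 1, full.shape[1] - 1)
    return sum(log_probs[i, targets[i]].item() for i in span)

text = "You are all wonderful people."
prompt = f'Comment: "{text}"\nQuestion: Is this comment toxic?\nAnswer:'
scores = {lab: label_logprob(prompt, " " + lab) for lab in ("Yes", "No")}
print(max(scores, key=scores.get))
```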

References

Showing 1-10 of 43 references

Knowledge-driven Self-supervision for Zero-shot Commonsense Question Answering

A novel neuro-symbolic framework for zero-shot question answering across commonsense tasks is proposed and it is shown that, while an individual knowledge graph is better suited for specific tasks, a global knowledge graph brings consistent gains across different tasks.

Evaluating Commonsense in Pre-trained Language Models

This work studies the commonsense ability of GPT, BERT, XLNet, and RoBERTa by testing them on seven challenging benchmarks, finding that language modeling and its variants are effective objectives for promoting models' commonsense ability, while bi-directional context and a larger training set are bonuses.

Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models

Experimental results demonstrate that pre-training models using the proposed approach followed by fine-tuning achieve significant improvements over previous state-of-the-art models on two commonsense-related benchmarks, including CommonsenseQA and Winograd Schema Challenge.

CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge

This work presents CommonsenseQA: a challenging new dataset for commonsense question answering, which extracts from ConceptNet multiple target concepts that have the same semantic relation to a single source concept.

Unsupervised Question Answering by Cloze Translation

It is found that modern QA models can learn to answer human questions surprisingly well using only synthetic training data, and it is demonstrated that, without using the SQuAD training data at all, this approach achieves 56.4 F1 on SQuAD v1.
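One of the translation strategies studied in that line of work is rule-based rewriting of a cloze into a natural question, choosing a wh-word from the answer's entity type. The tiny heuristic below is a toy illustration of that step, not the paper's full rule set (which also includes unsupervised NMT for the same translation).

```python
# Hedged sketch of rule-based cloze-to-question translation.
WH_BY_ANSWER_TYPE = {"PERSON": "Who", "DATE": "When", "GPE": "Where"}

def cloze_to_question(cloze: str, answer_type: str) -> str:
    # "BLANK founded the company in 1976." + PERSON
    #   -> "Who founded the company in 1976?"
    wh = WH_BY_ANSWER_TYPE.get(answer_type, "What")
    question = cloze.replace("BLANK", wh, 1).rstrip(".") + "?"
    return question[0].upper() + question[1:]

print(cloze_to_question("BLANK founded the company in 1976.", "PERSON"))
```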

Commonsense for Generative Multi-Hop Question Answering Tasks

This work focuses on a more challenging multi-hop generative task (NarrativeQA), which requires the model to reason, gather, and synthesize disjoint pieces of information within the context to generate an answer.

Unsupervised Commonsense Question Answering with Self-Talk

This paper presents an unsupervised framework based on self-talk, inspired by inquiry-based discovery learning, as a novel alternative for multiple-choice commonsense tasks; it improves performance on several benchmarks and competes with models that obtain knowledge from external KBs.
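The self-talk loop can be sketched as: prompt the LM to ask and answer its own clarification question, then append the generated answer as extra context before scoring the choices. The prompt prefix and example below are illustrative, not the paper's curated templates.

```python
# Hedged sketch of the self-talk knowledge-elicitation step.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

context = "Kai poured water on the campfire before leaving the site."
question_prefix = context + " What is the reason for this?"

# 1) Let the model "talk to itself" to surface implicit knowledge.
clarification = generate(question_prefix, max_new_tokens=20,
                         do_sample=True, num_return_sequences=1)
extra = clarification[0]["generated_text"][len(question_prefix):]

# 2) Append the generated knowledge and score answer choices as usual
#    (e.g., by LM perplexity of context + knowledge + choice).
augmented_context = context + extra
print(augmented_context)
```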

Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering

This paper augments a general commonsense QA framework with a knowledgeable path generator by extrapolating over existing paths in a KG with a state-of-the-art language model, which learns to connect a pair of entities in text with a dynamic, and potentially novel, multi-hop relational path.

Dynamic Knowledge Graph Construction for Zero-shot Commonsense Question Answering

Empirical results on the SocialIQa and StoryCommonsense datasets in a zero-shot setting demonstrate that using commonsense knowledge models to dynamically construct and reason over knowledge graphs achieves performance boosts over pre-trained language models and over using knowledge models to directly evaluate answers.

Language Models as Knowledge Bases?

An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
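That probing setup is easy to reproduce in miniature with a fill-mask pipeline and no fine-tuning; the clozes below follow the paper's style ("Dante was born in [MASK]." is one of its examples), and the model choice is illustrative.

```python
# Hedged sketch of a LAMA-style fill-in-the-blank knowledge probe.
from transformers import pipeline

probe = pipeline("fill-mask", model="bert-base-uncased")

for cloze in ["The capital of France is [MASK].",
              "Dante was born in [MASK]."]:
    top = probe(cloze, top_k=3)
    print(cloze, "->", [p["token_str"] for p in top])
```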