Publications
Multitask Prompted Training Enables Zero-Shot Task Generalization
TLDR
A system for easily mapping any natural language task into a human-readable, prompted form, and a pretrained encoder-decoder model that attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size.
Retrieval Enhanced Model for Commonsense Generation
TLDR
A novel framework that uses retrieval to enhance both pre-training and fine-tuning for commonsense generation, retrieving prototype sentence candidates via concept matching and using them as auxiliary input.
Enhancing Generalization in Natural Language Inference by Syntax
TLDR
This work investigates the use of dependency trees to enhance the generalization of BERT on the NLI task, leveraging a graph convolutional network to represent a syntax-based matching graph with heterogeneous matching patterns.