New York University
Multitask Prompted Training Enables Zero-Shot Task Generalization
A system for easily mapping any natural language task into a human-readable, prompted form, and a pretrained encoder-decoder model that attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size.
Retrieval Enhanced Model for Commonsense Generation
A novel framework that uses retrieval to enhance both pre-training and fine-tuning for commonsense generation, retrieving prototype sentence candidates via concept matching and using them as auxiliary input.
Enhancing Generalization in Natural Language Inference by Syntax
This work investigates the use of dependency trees to enhance the generalization of BERT on the NLI task, leveraging a graph convolutional network to represent a syntax-based matching graph with heterogeneous matching patterns.