Publications
Learning from Task Descriptions
TLDR
This work introduces a framework for developing NLP systems that solve new tasks after reading their descriptions, synthesizes prior work in this area, and instantiates the framework with a new English-language dataset, ZEST, structured for task-oriented evaluation on unseen tasks.
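As a rough illustration of the task-description setting summarized above, the sketch below pairs a natural-language task description with an input passage so a text-to-text model could attempt an unseen task; the prompt prefixes and example strings are illustrative assumptions, not ZEST's actual serialization.

```python
# Minimal sketch (not the ZEST release format): a natural-language task
# description is paired with an input context, and a text-to-text model is
# expected to produce the answer (or an abstention) as its output string.
def format_example(task_description: str, context: str) -> str:
    """Concatenate a task description with the input it should be applied to."""
    return f"question: {task_description} context: {context}"

example = format_example(
    "Are dogs allowed on the trails in this national park?",   # hypothetical task
    "Pets are permitted in campgrounds but not on any park trails.",
)
print(example)  # fed to an off-the-shelf text-to-text model
```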
Humor Detection: A Transformer Gets the Last Laugh
TLDR
This paper builds a dataset of almost 16,000 labeled instances from ratings gleaned from Reddit pages and trains a model that learns to identify humorous jokes, employing a Transformer architecture for its advantages in learning from sentence context.
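For a concrete starting point, the following minimal sketch loads a generic pretrained Transformer encoder as a binary humor classifier via the Hugging Face transformers library; the checkpoint choice and the untrained classification head are assumptions, and the paper's actual training setup is not reproduced here.

```python
# Minimal sketch of a Transformer-based binary humor classifier.
# "bert-base-uncased" is an assumed generic checkpoint, not the paper's model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = not humorous, 1 = humorous
)

texts = ["Why did the chicken cross the road? To get to the other side."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(dim=-1))  # classification head is untrained, so scores are arbitrary
```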
Can Humor Prediction Datasets be used for Humor Generation? Humorous Headline Generation via Style Transfer
TLDR
A model is taught to take normal text and “translate” it into humorous text; its output is judged comparable to human-edited headlines and significantly better than random, indicating that this dataset does indeed provide potential for future humor generation systems.
The rJokes Dataset: a Large Scale Humor Collection
TLDR
A collection of over 550,000 jokes posted over an 11 year period on the Reddit r/Jokes subreddit, providing a large scale humor dataset that can be used for a myriad of tasks and introducing this dataset as a task for future work, where models learn to predict the level of humor in a joke.
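A hedged sketch of the kind of humor-level target such a dataset enables follows, assuming raw upvote counts as the underlying signal; the log transform and the numbers are illustrative, and the released label scheme may be defined differently.

```python
# Minimal sketch: compress heavy-tailed upvote counts into a smoother
# regression target for "how funny is this joke". Illustrative only.
import math

def humor_score(upvotes: int) -> float:
    """Log-scaled upvotes as an assumed proxy for humor level."""
    return math.log(upvotes + 1)

for upvotes in (0, 10, 1_000, 100_000):
    print(upvotes, round(humor_score(upvotes), 2))
```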
Streaming Models for Joint Speech Recognition and Translation
TLDR
An end-to-end streaming ST model based on a re-translation approach is developed and a novel inference method for the joint case is introduced, interleaving both transcript and translation in generation and removing the need to use separate decoders.
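To make the interleaving idea concrete, here is a small sketch that builds a single target stream carrying alternating transcript and translation chunks, so one decoder could emit both; the word-level granularity and the <asr>/<st> marker tokens are assumptions for illustration, not the paper's exact scheme.

```python
# Minimal sketch of an interleaved target sequence: one output stream holds
# both transcript and translation chunks, removing the need for two decoders.
from itertools import zip_longest

def interleave(transcript_words, translation_words):
    mixed = []
    for src, tgt in zip_longest(transcript_words, translation_words, fillvalue=None):
        if src is not None:
            mixed.append(f"<asr> {src}")   # assumed marker for transcript tokens
        if tgt is not None:
            mixed.append(f"<st> {tgt}")    # assumed marker for translation tokens
    return " ".join(mixed)

print(interleave(["hello", "world"], ["hallo", "Welt"]))
# <asr> hello <st> hallo <asr> world <st> Welt
```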
You Don’t Have Time to Read This: An Exploration of Document Reading Time Prediction
TLDR
It is found that, despite extensive research showing that word-level reading time is most effectively predicted by neural networks, reading time for larger-scale text can be predicted easily and most accurately by a single factor: the number of words.
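As a concrete reading of that finding, the sketch below predicts document reading time from word count alone with a single coefficient; the 0.25 seconds-per-word slope is an assumed placeholder rather than a value from the paper, and would normally be fit on labeled reading times.

```python
# Minimal sketch of a one-feature reading-time predictor: word count only.
def predict_reading_seconds(text: str, seconds_per_word: float = 0.25) -> float:
    """Roughly 240 words per minute; the slope here is an assumed placeholder."""
    return seconds_per_word * len(text.split())

doc = "Streaming models jointly transcribe and translate speech " * 40
print(f"{predict_reading_seconds(doc):.1f} seconds")
```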
End-to-End Speech Translation for Code Switched Speech
TLDR
This work focuses on code switching in the context of English/Spanish conversations for the task of speech translation (ST), generating and evaluating both transcript and translation, and creates a novel ST corpus derived from existing public data sets.
Predicting suicidal thoughts and behavior among adolescents using the risk and protective factor framework: A large-scale machine learning approach
TLDR
Results indicate that certain risk and protective factors, such as adolescents being threatened or harassed through digital media or bullied at school, and exposure to or involvement in serious arguments and yelling at home, are the leading predictors of suicidal thoughts and behavior (STB) and can help narrow and reaffirm priority prevention programming and areas of focused policymaking.
When to Use Multi-Task Learning vs Intermediate Fine-Tuning for Pre-Trained Encoder Transfer Learning
Transfer learning (TL) in natural language processing (NLP) has seen a surge of interest in recent years, as pre-trained models have shown an impressive ability to transfer to novel tasks.
Exploring the Relationship Between Algorithm Performance, Vocabulary, and Run-Time in Text Classification
TLDR
It is shown that some individual methods can reduce run-time with no loss of accuracy, while some combinations of methods can trade 2-5% of the accuracy for up to a 65% reduction in run-time.
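The sketch below illustrates one vocabulary-reduction knob and how an accuracy/run-time trade-off could be measured, using scikit-learn on a placeholder corpus; the texts, labels, and vocabulary caps are assumptions, and the paper's specific methods and datasets are not reproduced here.

```python
# Minimal sketch: cap the vocabulary of a bag-of-words text classifier and
# time training plus evaluation. Corpus and labels below are placeholders.
import time
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

texts = ["great movie", "terrible plot", "loved it", "boring and slow"] * 250
labels = [1, 0, 1, 0] * 250
X_train, X_test, y_train, y_test = train_test_split(texts, labels, random_state=0)

for max_features in (None, 1000, 100):  # None keeps the full vocabulary
    start = time.perf_counter()
    vec = CountVectorizer(max_features=max_features)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_train), y_train)
    acc = clf.score(vec.transform(X_test), y_test)
    print(f"vocab cap={max_features}: acc={acc:.2f}, {time.perf_counter() - start:.3f}s")
```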