COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs
TLDR
It is proposed that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents, and a new evaluation framework is introduced for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them.
Understanding Few-Shot Commonsense Knowledge Models
TLDR
This work investigates training commonsense knowledge models in a few-shot setting with limited tuples per commonsense relation in the graph, and finds that, by human quality ratings, knowledge produced by a few-shot trained system comes within 6% of knowledge produced by fully supervised systems.
Cracking the Contextual Commonsense Code: Understanding Commonsense Reasoning Aptitude of Deep Contextual Representations
TLDR
This work probes and challenges several aspects of BERT's commonsense representation abilities, develops a method for fine-tuning knowledge graph embeddings alongside BERT, and shows the continued importance of explicit knowledge graphs.
Analyzing Commonsense Emergence in Few-shot Knowledge Models
TLDR
The results show that commonsense knowledge models can rapidly adapt from limited examples, indicating that KG fine-tuning serves to learn an interface to knowledge already encoded during pretraining.
Discourse Understanding and Factual Consistency in Abstractive Summarization
TLDR
A general framework for abstractive summarization is introduced that enforces factual consistency and distinctly models the narrative flow of the output summary; empirical results demonstrate that Co-opNet learns to summarize with considerably improved global coherence compared to competitive baselines.
Edited Media Understanding: Reasoning About Implications of Manipulated Images
TLDR
A wide variety of vision-and-language models are evaluated on the task of Edited Media Understanding, which requires models to answer open-ended questions that capture the intent and implications of an image edit, and a new model, PELICAN, is introduced that builds upon recent progress in pretrained multimodal representations.
Understanding Commonsense Inference Aptitude of Deep Contextual Representations
Edited Media Understanding Frames: Reasoning About the Intent and Implications of Visual Misinformation
TLDR
Examining Edited Media Frames, a new formalism that represents visual media manipulation as structured annotations of the intents, emotional reactions, attacks on individuals, and overall implications of disinformation, yields promising results.
Jeff Da at COIN - Shared Task
  • Jeff Da
  • Proceedings of the First Workshop on Commonsense…
  • 2019