COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
TLDR
This investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs, and suggests that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
Deep Communicating Agents for Abstractive Summarization
TLDR
Empirical results demonstrate that multiple communicating encoders lead to a higher quality summary compared to several strong baselines, including those based on a single encoder or multiple non-communicating encoders.
COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs
TLDR
It is proposed that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents, and a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them is proposed.
On the Opportunities and Risks of Foundation Models
TLDR
This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities and applications to the challenges posed by their emergent properties.
Simulating Action Dynamics with Neural Process Networks
TLDR
This work introduces Neural Process Networks to understand procedural text through (neural) simulation of action dynamics, and complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers.
Learning to Write with Cooperative Discriminators
TLDR
Human evaluation demonstrates that text generated by the proposed system is preferred over that of baselines by a large margin, and that the cooperative discriminators significantly enhance the overall coherence, style, and information content of the generated text.
Commonsense Knowledge Base Completion with Structural and Semantic Context
TLDR
This paper investigates two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification, and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge.
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
TLDR
This work proposes a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) through two key innovations: relevance scoring and joint reasoning.
Modeling Naive Psychology of Characters in Simple Commonsense Stories
TLDR
A new annotation framework is introduced to explain the naive psychology of story characters as fully-specified chains of mental states with respect to motivations and emotional reactions, and establishes baseline performance on several new tasks, suggesting avenues for future research.
Reasoning about Actions and State Changes by Injecting Commonsense Knowledge
TLDR
This paper shows how the predicted effects of actions in the context of a paragraph can be improved in two ways: by incorporating global, commonsense constraints (e.g., a non-existent entity cannot be destroyed), and by biasing reading with preferences from large-scale corpora.