Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge

@inproceedings{Pedinotti2021DidTC,
  title={Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge},
  author={Paolo Pedinotti and Giulia Rambelli and Emmanuele Chersoni and Enrico Santus and Alessandro Lenci and Philippe Blache},
  booktitle={STARSEM},
  year={2021}
}
Prior research has explored the ability of computational models to predict a word's semantic fit with a given predicate. While much work has been devoted to modeling the typicality relation between verbs and arguments in isolation, in this paper we take a broader perspective by assessing whether and to what extent computational approaches have access to information about the typicality of entire events and situations described in language (Generalized Event Knowledge). Given the recent…
1 Citation

Decoding Word Embeddings with Brain-Based Semantic Features
This work explores the semantic properties encoded in word embeddings by mapping them onto interpretable vectors, consisting of explicit and neurobiologically motivated semantic features, and proposes a new and simple method to carve human-interpretable semantic representations from distributional vectors.

References

Showing 1–10 of 45 references
Event Knowledge in Sentence Processing: A New Dataset for the Evaluation of Argument Typicality
DTFit (Dynamic Thematic Fit), a dataset of human ratings for verb-role fillers in a given event context, is introduced with the aim of providing a rigorous benchmark for context-sensitive argument typicality modeling.
Comparing Probabilistic, Distributional and Transformer-Based Models on Logical Metonymy Interpretation
This paper tackles the problem of logical metonymy interpretation, that is, the retrieval of the covert event via computational methods, and compares different types of models, including the probabilistic and distributional ones previously introduced in the literature on the topic.
Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text
This work creates a training set by extracting attested events from a large corpus and shows that large pretrained language models are effective at modeling physical plausibility in the supervised setting; the authors suggest results could be further improved by injecting explicit commonsense knowledge into a distributional model.
How Relevant Are Selectional Preferences for Transformer-based Language Models?
It is found that certain head words have a strong correlation and that masking all words but the head word yields the most positive correlations in most scenarios, indicating that the semantics of the predicate is indeed an integral and influential factor in the selection of the argument.
Event-based plausibility immediately influences on-line language comprehension.
This research demonstrates that conceptual event-based expectations are computed and used rapidly and dynamically during on-line language comprehension, and concludes that selectional restrictions may be best considered as event-based conceptual knowledge rather than lexical-grammatical knowledge.
Not all arguments are processed equally: a distributional model of argument complexity
This work builds a Distributional Semantic Model to compute a compositional cost function for the sentence unification process, revealing that the model can account for semantic phenomena such as the context-sensitive update of argument expectations and the processing of logical metonymies.
Modeling the Influence of Thematic Fit (and Other Constraints) in On-line Sentence Comprehension
The time-course with which readers use event-specific world knowledge (thematic fit) to resolve structural ambiguity was explored through experiments and implementation of constraint-based and…
Thematic fit bits: Annotation quality and quantity for event participant representation
It is discovered that higher annotation quality dramatically reduces the data requirement while yielding better supervised predicate-argument classification, setting a new state of the art in event modeling using a fraction of the data.
A Flexible, Corpus-Driven Model of Regular and Inverse Selectional Preferences
This work presents a vector space–based model for selectional preferences that predicts plausibility scores for argument headwords and obtains consistent benefits from using the disambiguation and semantic role information provided by a semantically tagged primary corpus.
An exploration of semantic features in an unsupervised thematic fit evaluation framework
Thematic fit is the extent to which an entity fits a thematic role in the semantic frame of an event, e.g., how well humans would rate "knife" as an instrument of an event of cutting. We…