A Latent Dirichlet Allocation Method for Selectional Preferences
@inproceedings{Ritter2010ALD, title={A Latent Dirichlet Allocation Method for Selectional Preferences}, author={Alan Ritter and Mausam and Oren Etzioni}, booktitle={ACL}, year={2010} }
The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional class-based approaches, it produces human-interpretable classes describing each relation's preferences…
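To make the modelling idea concrete, here is a minimal sketch of a LinkLDA-style generative story for a relation r's (arg1, arg2) tuples; the symbols (Dirichlet prior \alpha, relation-level topic mixture \theta_r, per-topic word distributions \phi^{(1)}, \phi^{(2)} for the two argument positions) are standard LDA notation assumed for this sketch rather than taken from the paper:

\theta_r \sim \mathrm{Dirichlet}(\alpha)
\text{for each tuple } i \text{ of relation } r: \quad z^{(1)}_{r,i} \sim \mathrm{Mult}(\theta_r),\; a^{(1)}_{r,i} \sim \mathrm{Mult}\big(\phi^{(1)}_{z^{(1)}_{r,i}}\big), \qquad z^{(2)}_{r,i} \sim \mathrm{Mult}(\theta_r),\; a^{(2)}_{r,i} \sim \mathrm{Mult}\big(\phi^{(2)}_{z^{(2)}_{r,i}}\big)

The key property is that both argument positions draw their topics from the same relation-level mixture \theta_r while keeping separate topic-word distributions, which ties a relation's two argument preferences together and yields interpretable topic-based classes.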
187 Citations
Probabilistic Distributional Semantics with Latent Variable Models
- Computer Science, CL
- 2014
A probabilistic framework for acquiring selectional preferences of linguistic predicates and for using the acquired representations to model the effects of context on word meaning is described, and it is argued that probabilistic methods provide an effective and flexible methodology for distributional semantics.
Neural Models of Selectional Preferences for Implicit Semantic Role Labeling
- Computer Science, LREC
- 2018
It is concluded that, even though multi-way selectional preference improves results for predicting explicit semantic roles compared to one-way selectional preference, it harms performance for implicit roles.
Selectional Preferences for Semantic Role Classification
- Computer Science, CL
- 2013
This paper demonstrates that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone, and explores a range of models based on WordNet and distributional-similarity SPs.
Latent Variable Models of Selectional Preference
- Computer Science, ACL
- 2010
Three models related to Latent Dirichlet Allocation, a proven method for modelling document-word co-occurrences, are presented and evaluated on datasets of human plausibility judgements, where they perform very competitively, especially for infrequent predicate-argument combinations.
Domain Adaptation of a Dependency Parser with a Class-Class Selectional Preference Model
- Computer Science, ACL 2012
- 2012
This paper uses Latent Dirichlet Allocation (LDA) to learn a domain-specific selectional preference model in the target domain from unannotated data, and applies the method to adapt Easy First, a fast non-directional parser trained on the WSJ, to the biomedical domain.
Exploring Supervised LDA Models for Assigning Attributes to Adjective-Noun Phrases
- Computer Science, EMNLP
- 2011
This paper introduces an attribute selection task as a way to characterize the inherent meaning of property-denoting adjectives in adjective-noun phrases, e.g. hot in hot summer denoting the…
How Relevant Are Selectional Preferences for Transformer-based Language Models?
- Computer Science, COLING
- 2020
It is found that certain head words have a strong correlation and that masking all words but the head word yields the most positive correlations in most scenarios, which indicates that the semantics of the predicate is indeed an integral and influential factor for the selection of the argument.
The Impact of Selectional Preference Agreement on Semantic Relational Similarity
- Computer Science, IWCS
- 2013
To determine selectional preferences, semantic classes are induced with a Latent Dirichlet Allocation method that operates on dependency-parse contexts of single words, and these preferences are then used to assign relational similarities to pairs of words.
Lexical Inference over Multi-Word Predicates: A Distributional Approach
- Computer Science, ACL
- 2014
Focusing on the supervised identification of lexical inference relations, this work compares against state-of-the-art baselines that consider a single subset of a multi-word predicate (MWP), obtaining substantial improvements.
Learning Full-Sentence Co-Related Verb Argument Preferences from Web Corpora
- Computer Science
- 2012
The authors use an ensemble of discriminative and generative models with co-occurrence and semantic features in different arrangements, addressing questions about the optimal number of topics for PLSI and LDA models as well as the number of co-occurrences required to improve performance.
References
Latent Variable Models of Selectional Preference
- Computer Science, ACL
- 2010
Three models related to Latent Dirichlet Allocation, a proven method for modelling document-word co-occurrences, are presented and evaluated on datasets of human plausibility judgements, where they perform very competitively, especially for infrequent predicate-argument combinations.
Automatic labeling of multinomial topic models
- Computer Science, KDD '07
- 2007
Probabilistic approaches to automatically labeling multinomial topic models in an objective way are proposed and can be applied to labeling topics learned through all kinds of topic models such as PLSA, LDA, and their variations.
Finding scientific topics
- Computer Science, Proceedings of the National Academy of Sciences of the United States of America
- 2004
A generative model for documents, introduced by Blei, Ng, and Jordan, is described, and a Markov chain Monte Carlo algorithm for inference in this model is presented; the approach is used to analyze abstracts from PNAS, with Bayesian model selection establishing the number of topics.
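For reference, the collapsed Gibbs sampling update popularized by this line of work can be written as follows; the notation (W for vocabulary size, T for number of topics, count statistics n excluding the token being resampled) is the standard presentation and is assumed here rather than quoted verbatim from the paper:

P(z_i = j \mid \mathbf{z}_{-i}, \mathbf{w}) \;\propto\; \frac{n^{(w_i)}_{-i,j} + \beta}{n^{(\cdot)}_{-i,j} + W\beta} \cdot \frac{n^{(d_i)}_{-i,j} + \alpha}{n^{(d_i)}_{-i,\cdot} + T\alpha}

Here n^{(w_i)}_{-i,j} counts how often word w_i is assigned to topic j, n^{(d_i)}_{-i,j} counts topic-j assignments in document d_i, and both exclude the current token.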
Automatic Evaluation of Topic Coherence
- Computer Science, HLT-NAACL
- 2010
It is shown that a simple co-occurrence measure based on pointwise mutual information over Wikipedia data achieves results for the task at or nearing the level of inter-annotator correlation, and that other Wikipedia-based lexical relatedness methods also achieve strong results.
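A minimal Python sketch of this kind of PMI-based coherence score is given below; the function and variable names (topic_pmi_coherence, doc_freq, joint_freq) and the small smoothing constant are illustrative assumptions, not the paper's exact formulation:

```python
from itertools import combinations
from math import log

def topic_pmi_coherence(top_words, doc_freq, joint_freq, num_windows, eps=1e-12):
    """Average pairwise PMI over a topic's top-N words.

    top_words:   the topic's top-N words (e.g. N = 10)
    doc_freq:    dict mapping word -> number of sliding windows containing it
    joint_freq:  dict mapping an alphabetically ordered (w1, w2) pair ->
                 number of windows containing both words
    num_windows: total number of sliding windows in the reference corpus
    eps:         small constant to avoid log(0) for unseen pairs
    """
    scores = []
    for w1, w2 in combinations(sorted(set(top_words)), 2):
        p1 = doc_freq.get(w1, 0) / num_windows
        p2 = doc_freq.get(w2, 0) / num_windows
        p12 = joint_freq.get((w1, w2), 0) / num_windows
        if p1 > 0 and p2 > 0:
            scores.append(log((p12 + eps) / (p1 * p2)))
    return sum(scores) / len(scores) if scores else 0.0
```

In the paper's setting the counts come from sliding windows over Wikipedia, and a topic is scored by the average PMI over all pairs of its top words, with higher averages tracking human judgements of coherence.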
Efficient methods for topic model inference on streaming document collections
- Computer Science, KDD
- 2009
Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory.
Selectional Preference and Sense Disambiguation
- Computer Science, Workshop on Tagging Text with Lexical Semantics: Why, What, and How?
- 1997
This paper explores how a statistical model of selectional preference, requiring neither manual annotation of selection restrictions nor supervised training, can be used in sense disambiguation, and combines statistical and knowledge-based methods.
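For context, the selectional preference strength and selectional association measures commonly used in this line of work can be written as follows (c ranges over semantic classes for predicate p; this is the standard formulation, assumed here rather than quoted from the paper):

S(p) = \sum_{c} P(c \mid p) \log \frac{P(c \mid p)}{P(c)}, \qquad A(p, c) = \frac{1}{S(p)}\, P(c \mid p) \log \frac{P(c \mid p)}{P(c)}

A sense whose classes carry high selectional association with the governing predicate is then preferred during disambiguation.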
Latent Variable Models of Concept-Attribute Attachment
- Computer Science, ACL
- 2009
A set of Bayesian methods for automatically extending the WordNet ontology with new concepts and annotating existing concepts with generic property fields, or attributes, is presented.
Class-Based Probability Estimation Using a Semantic Hierarchy
- Computer Science, NAACL
- 2001
This article concerns the estimation of a particular kind of probability, namely, the probability of a noun sense appearing as a particular argument of a predicate, and a procedure is developed that uses a chi-square test to determine a suitable level of generalization.