Computation of selectional preferences, the admissible argument values for a relation, is a well-studied NLP task with wide applicability. We present LDA-SP, the first LDA-based approach to computing selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: it is competitive with non-class-based methods in predictive power, and it also produces human-interpretable classes describing each relation’s preferences, as traditional class-based approaches do. We compare LDA-SP to several state-of-the-art methods, achieving a 62% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP’s effectiveness at the task of filtering improper applications of inference rules (Pantel et al., 2007), where we show a substantial improvement in performance over Pantel et al.’s system.