Learning Grounded Meaning Representations with Autoencoders

Abstract

In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models.
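To make the architecture described in the abstract concrete, below is a minimal NumPy sketch of a bimodal stacked autoencoder: one autoencoder per modality, followed by a joint autoencoder over the concatenated hidden codes, whose hidden layer serves as the grounded embedding. All dimensions, hyperparameters, and the plain squared-error training loop are illustrative assumptions, not the authors' model; the paper's actual attribute encodings, objective, and training procedure are specified there.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Autoencoder:
    # One-hidden-layer autoencoder trained with per-sample gradient
    # descent on squared reconstruction error (illustrative only).
    def __init__(self, n_in, n_hid, lr=0.1):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0.0, 0.1, (n_hid, n_in))
        self.b2 = np.zeros(n_in)
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W1 + self.b1)

    def train_step(self, x):
        h = self.encode(x)                       # hidden code
        y = sigmoid(h @ self.W2 + self.b2)       # reconstruction
        d_y = (y - x) * y * (1.0 - y)            # output delta
        d_h = (d_y @ self.W2.T) * h * (1.0 - h)  # hidden delta
        self.W2 -= self.lr * np.outer(h, d_y)
        self.b2 -= self.lr * d_y
        self.W1 -= self.lr * np.outer(x, d_h)
        self.b1 -= self.lr * d_h

# Hypothetical attribute vectors: 100 concepts with 50 textual and 40
# visual attributes each (stand-ins for automatically extracted attributes).
text_attrs = rng.random((100, 50))
vis_attrs = rng.random((100, 40))

# First layer: one autoencoder per modality, trained separately.
ae_text, ae_vis = Autoencoder(50, 30), Autoencoder(40, 30)
for _ in range(10):
    for t, v in zip(text_attrs, vis_attrs):
        ae_text.train_step(t)
        ae_vis.train_step(v)

# Second layer: a joint autoencoder over the concatenated modality codes;
# its hidden layer yields one multimodal embedding per concept.
codes = np.hstack([ae_text.encode(text_attrs), ae_vis.encode(vis_attrs)])
ae_joint = Autoencoder(60, 20)
for _ in range(10):
    for c in codes:
        ae_joint.train_step(c)

grounded = ae_joint.encode(codes)
print(grounded.shape)  # (100, 20): one grounded embedding per concept

The evaluation tasks mentioned in the abstract would then operate on these grounded vectors, e.g. cosine similarity between concept embeddings for similarity judgments, or clustering of the embeddings for categorization.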


82 Citations

Semantic Scholar estimates that this publication has 82 citations based on the available data.

Cite this paper

@inproceedings{Silberer2014LearningGM,
  title     = {Learning Grounded Meaning Representations with Autoencoders},
  author    = {Carina Silberer and Mirella Lapata},
  booktitle = {ACL},
  year      = {2014}
}