From distributional semantics to feature norms: grounding semantic models in human perceptual data


Multimodal semantic models attempt to ground distributional semantics through the integration of visual or perceptual information. Feature norms provide useful insight into human concept acquisition, but cannot be used to ground large-scale semantics because they are expensive to produce. We present an automatic method for predicting feature norms for new concepts by learning a mapping from a text-based distributional semantic space to a space built using feature norms. Our experimental results are promising, and show that we are able to generalise feature-based representations to new concepts. This work opens up the possibility of developing large-scale semantic models grounded in a proxy for human perceptual data.

Classical distributional semantic models [1, 2] represent the meanings of words by relying on their statistical distribution in text [3, 4, 5, 6]. Despite performing well in a wide range of semantic tasks, a common criticism is that, by representing meaning through linguistic input alone, these models are not grounded in perception: the words exist only in relation to each other and are not anchored in the physical world. This concern is motivated by increasing evidence in the cognitive science literature that the semantics of words is derived not only from our exposure to language, but also from our interactions with the world. One way to overcome this issue is to include perceptual information in the semantic models [7]. It has already been shown, for example, that models that learn from both visual and linguistic input improve performance on a variety of tasks such as word association and semantic similarity [8]. However, the visual modality alone cannot capture all the perceptual information that humans possess. A more cognitively sound representation of human intuitions about particular concepts is given by semantic property norms, also known as semantic feature norms.
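The cross-space mapping mentioned above can be illustrated with a regularized linear regression from text-based vectors to feature-norm vectors. This is a minimal sketch with synthetic toy data, not the paper's exact method or data; the ridge formulation and all dimensions here are illustrative assumptions.

```python
import numpy as np

# Toy stand-ins (hypothetical): rows of X are text-based distributional
# vectors for known concepts; rows of F are their feature-norm vectors.
rng = np.random.default_rng(0)
n_concepts, d_text, d_feat = 50, 20, 10
X = rng.normal(size=(n_concepts, d_text))          # distributional space
W_true = rng.normal(size=(d_text, d_feat))
F = X @ W_true + 0.01 * rng.normal(size=(n_concepts, d_feat))  # norm space

# Ridge-regression mapping: W = (X^T X + lam * I)^{-1} X^T F
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d_text), X.T @ F)

# Predict a feature-norm representation for an unseen concept's text vector.
x_new = rng.normal(size=(1, d_text))
f_pred = x_new @ W
```

A learned mapping of this kind lets feature-based representations be generalised to any concept that has a text-based vector, which is the key step for scaling beyond hand-collected norms.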
A number of property norming studies [9, 10, 11] have focused on collecting feature norms for various concepts in order to allow for empirical testing of psychological semantic theories. In these studies, humans are asked to identify the most important attributes of a given concept. For example, given the concept AIRPLANE, one might say that its most important features are to_fly, has_wings and is_used_for_transport. These datasets provide valuable insight into human concept representation and have been successfully used for tasks such as text simplification for limited-vocabulary groups, personality modelling and metaphor processing, as well as a proxy for modelling perceptual information [12, 13]. Despite their advantages, semantic feature norms are not widely used in computational linguistics, mainly because they are expensive to produce and have only been collected for small sets of words; moreover, the set of features that can be produced for a given concept is unrestricted. In [14], the authors construct a three-way multimodal model integrating textual, feature and visual modalities; however, this method is subject to the same disadvantages as the feature norm datasets themselves. There have been some attempts at automatically generating feature-norm-like semantic representations for
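Norm data of the kind described above is typically aggregated into a concept-by-feature matrix, with each cell recording how many participants produced that feature for that concept. A minimal sketch, using hypothetical toy counts rather than values from any real norming study:

```python
# Toy feature norms (hypothetical production counts, not real data):
norms = {
    "airplane": {"to_fly": 25, "has_wings": 22, "is_used_for_transport": 18},
    "sparrow":  {"to_fly": 27, "has_wings": 26, "has_feathers": 20},
}

# Build a concept-by-feature matrix over the union of all produced features;
# a feature a participant never produced for a concept gets count 0.
features = sorted({f for fs in norms.values() for f in fs})
concepts = sorted(norms)
matrix = [[norms[c].get(f, 0) for f in features] for c in concepts]
```

Because each concept elicits its own open-ended feature set, the union of features grows with the dataset, which is one reason these spaces are sparse and hard to collect at scale.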


Cite this paper

@inproceedings{Fagarasan2015FromDS,
  title     = {From distributional semantics to feature norms: grounding semantic models in human perceptual data},
  author    = {Luana Fagarasan and Eva Maria Vecchi and Stephen Clark},
  booktitle = {IWCS},
  year      = {2015}
}