Audio-Based Distributional Representations of Meaning Using a Fusion of Feature Encodings


Recently, a “Bag-of-Audio-Words” approach was proposed [1] for combining lexical features with audio clips into a multimodal semantic representation, i.e., an Audio Distributional Semantic Model (ADSM). An important step towards the creation of ADSMs is the estimation of the semantic distance between clips in the acoustic space, which is especially challenging given the diversity of audio collections. In this work, we investigate the use of different feature encodings to address this challenge, following a two-step approach. First, an audio clip is categorized with respect to three classes, namely music, speech, and other. Next, the feature encodings are fused according to the posterior probabilities estimated in the first step. Using a collection of audio clips annotated with tags, we derive a mapping between words and audio clips. Based on this mapping and the proposed audio semantic distance, we construct an ADSM model in order to compute the distance between words (lexical semantic similarity task). The proposed model is shown to significantly outperform the state-of-the-art results reported in the literature, yielding a 23.6% relative improvement in correlation coefficient.
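The two-step fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the toy feature vectors, and the use of plain cosine distance are assumptions made here for clarity. Each clip has one encoding per class (music, speech, other); the fused representation is the posterior-weighted sum of those encodings, and clip-to-clip semantic distance is then taken as the cosine distance between fused vectors.

```python
import math


def fuse_encodings(encodings, posteriors):
    """Fuse per-class feature encodings of one audio clip.

    encodings:  dict mapping class name -> feature vector (list of floats)
    posteriors: dict mapping class name -> posterior probability from
                the first-step classifier (music / speech / other)
    Returns the posterior-weighted sum of the class encodings.
    """
    dim = len(next(iter(encodings.values())))
    fused = [0.0] * dim
    for cls, vector in encodings.items():
        weight = posteriors[cls]
        for i, x in enumerate(vector):
            fused[i] += weight * x
    return fused


def cosine_distance(u, v):
    """Semantic distance between two fused clip vectors (1 - cosine similarity)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)


# Toy example: a clip classified as mostly music.
clip_encodings = {"music": [1.0, 0.0], "speech": [0.0, 1.0], "other": [0.0, 0.0]}
clip_posteriors = {"music": 0.6, "speech": 0.3, "other": 0.1}
fused = fuse_encodings(clip_encodings, clip_posteriors)  # [0.6, 0.3]
```

Word-level distances (the lexical semantic similarity task) would then be obtained by representing each word as an aggregate, e.g. the mean, of the fused vectors of the clips tagged with it, and comparing words with the same cosine distance.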

DOI: 10.21437/Interspeech.2016-839


Cite this paper

@inproceedings{Karamanolakis2016AudioBasedDR,
  title     = {Audio-Based Distributional Representations of Meaning Using a Fusion of Feature Encodings},
  author    = {Giannis Karamanolakis and Elias Iosif and Athanasia Zlatintsi and Aggelos Pikrakis and Alexandros Potamianos},
  booktitle = {INTERSPEECH},
  year      = {2016}
}