Towards Holistic Concept Representations: Embedding Relational Knowledge, Visual Attributes, and Distributional Word Semantics

@inproceedings{Thoma2017TowardsHC,
  title={Towards Holistic Concept Representations: Embedding Relational Knowledge, Visual Attributes, and Distributional Word Semantics},
  author={Steffen Thoma and Achim Rettinger and Fabian Both},
  booktitle={SEMWEB},
  year={2017}
}
Knowledge Graphs (KGs) effectively capture explicit relational knowledge about individual entities.
Key Result
Our empirical results show that a joint concept representation provides measurable benefits for (i) semantic similarity benchmarks, since it correlates more highly with the human notion of similarity than uni- or bi-modal representations, and (ii) entity-type prediction tasks, since it clearly outperforms plain KG embeddings.
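As context for the key result above, the following is a minimal sketch of what evaluating a joint (tri-modal) concept representation on a semantic similarity benchmark could look like: per-modality vectors are L2-normalized, concatenated, and cosine similarities in the joint space are correlated with human ratings. All names (kg_vec, vis_vec, txt_vec, human_pairs) are illustrative placeholders, not the authors' code or data.

```python
import numpy as np
from scipy.stats import spearmanr

def joint_vector(kg_vec, vis_vec, txt_vec):
    """Concatenate per-modality vectors after L2 normalization."""
    parts = [v / (np.linalg.norm(v) + 1e-12) for v in (kg_vec, vis_vec, txt_vec)]
    return np.concatenate(parts)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def similarity_correlation(embeddings, human_pairs):
    """embeddings: dict concept -> joint vector; human_pairs: list of (c1, c2, rating)."""
    model_scores = [cosine(embeddings[a], embeddings[b]) for a, b, _ in human_pairs]
    gold_scores = [r for _, _, r in human_pairs]
    rho, _ = spearmanr(model_scores, gold_scores)
    return rho
```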
Cross-modal Knowledge Transfer: Improving the Word Embedding of Apple by Looking at Oranges
TLDR
The empirical results of the knowledge transfer approach demonstrate that word embeddings do benefit from extrapolating information across modalities even for concepts that are not represented in the other modalities; this applies most to concrete concepts, while abstract concepts benefit most if aligned concepts are available in the other modalities.
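One common way to realize this kind of cross-modal extrapolation (not necessarily the paper's exact method) is a linear map fitted on concepts that are aligned across modalities and then applied to concepts present in only one of them. A hedged sketch, with illustrative variable names:

```python
import numpy as np

def fit_cross_modal_map(W_aligned, V_aligned, ridge=1e-2):
    """W_aligned: (n, d_w) word vectors; V_aligned: (n, d_v) visual vectors for the same concepts."""
    d_w = W_aligned.shape[1]
    # Ridge-regularized least squares: M = (W^T W + ridge*I)^-1 W^T V
    M = np.linalg.solve(W_aligned.T @ W_aligned + ridge * np.eye(d_w),
                        W_aligned.T @ V_aligned)
    return M  # shape (d_w, d_v)

def extrapolate_visual(word_vec, M):
    """Predict a visual vector for a concept that has no images of its own."""
    return word_vec @ M
```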
Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs
TLDR
This work introduces novel combinations of convolutional networks and knowledge graph embedding methods to answer visual-relational queries in web-extracted knowledge graphs and explores a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG.
Incorporating Literals into Knowledge Graph Embeddings
TLDR
Despite its simplicity, LiteralE proves to be an effective way to incorporate literal information into existing embedding-based methods, improving their performance on several standard datasets, which are augmented with their literals and provided as a testbed for further research.
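A rough sketch of the gating idea that LiteralE is built around: an entity embedding is combined with a vector of its (numeric) literal values before the usual KG scoring function is applied. Shapes and initialization below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LiteralGate:
    def __init__(self, dim_ent, dim_lit, seed=0):
        rng = np.random.default_rng(seed)
        self.W_ze = rng.normal(scale=0.1, size=(dim_ent, dim_ent))
        self.W_zl = rng.normal(scale=0.1, size=(dim_lit, dim_ent))
        self.W_h = rng.normal(scale=0.1, size=(dim_ent + dim_lit, dim_ent))
        self.b = np.zeros(dim_ent)

    def __call__(self, e, l):
        """e: entity embedding (dim_ent,); l: literal feature vector (dim_lit,)."""
        z = sigmoid(e @ self.W_ze + l @ self.W_zl + self.b)   # gate
        h = np.tanh(np.concatenate([e, l]) @ self.W_h)        # candidate combined vector
        return z * h + (1.0 - z) * e                          # gated mix of both
```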
From Vision to Content: Construction of Domain-Specific Multi-Modal Knowledge Graph
TLDR
A path-based concept extension and fusion strategy is proposed based on the conceptual hierarchies of WordNet and DBpedia to obtain the effective extension concepts as well as the links between them, increasing the scale of the knowledge graph and enhancing the correlation between images.
Aligning Knowledge Base and Document Embedding Models Using Regularized Multi-Task Learning
TLDR
This article proposes KADE, a solution based on regularized multi-task learning of KB and document embeddings that effectively aligns document and entity embeddings while maintaining the characteristics of the embedding models.
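A hedged sketch of the alignment term such a regularized multi-task setup could add on top of the two models' own losses: known entity-document pairs are pulled towards each other while each model otherwise trains as usual. Identifiers are illustrative; training loops and the base losses are omitted.

```python
import numpy as np

def alignment_penalty(entity_embs, doc_embs, aligned_pairs, lam=0.1):
    """aligned_pairs: list of (entity_id, doc_id) known to describe the same thing."""
    total = 0.0
    for ent_id, doc_id in aligned_pairs:
        diff = entity_embs[ent_id] - doc_embs[doc_id]
        total += diff @ diff
    return lam * total  # added to the sum of the two models' own losses
```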
Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search
TLDR
Picturebook, a large-scale lookup operation to ground language via ‘snapshots’ of the authors' physical world accessed through image search, is introduced and it is shown that gate activations corresponding to Picturebook embeddings are highly correlated to human judgments of concreteness ratings.
Visual Concept-Metaconcept Learning
TLDR
This paper proposes the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs, exploiting the bidirectional connection between visual concepts and metaconcepts.

References

SHOWING 1-10 OF 37 REFERENCES
Multimodal Distributional Semantics
TLDR
This work proposes a flexible architecture to integrate text- and image-based distributional information, and shows in a set of empirical tests that the integrated model is superior to the purely text-based approach, and it provides somewhat complementary semantic information with respect to the latter.
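A minimal sketch of a simple text+image fusion in this spirit: normalize each modality, weight and concatenate, and optionally compress with a truncated SVD. The mixing weight and dimensionality are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def fuse(text_matrix, image_matrix, visual_weight=0.5, svd_dim=None):
    """Rows of both matrices correspond to the same words."""
    t = text_matrix / (np.linalg.norm(text_matrix, axis=1, keepdims=True) + 1e-12)
    v = image_matrix / (np.linalg.norm(image_matrix, axis=1, keepdims=True) + 1e-12)
    fused = np.hstack([(1.0 - visual_weight) * t, visual_weight * v])
    if svd_dim is not None:
        U, S, _ = np.linalg.svd(fused, full_matrices=False)
        fused = U[:, :svd_dim] * S[:svd_dim]  # keep the top singular directions
    return fused
```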
Learning Abstract Concept Embeddings from Multi-Modal Data: Since You Probably Can’t See What I Mean
TLDR
This work presents a new means of extending the scope of multi-modal models to more commonly-occurring abstract lexical concepts via an approach that learns multimodal embeddings, and outperforms previous approaches in combining input from distinct modalities.
Learning Entity and Relation Embeddings for Knowledge Graph Completion
TLDR
TransR is proposed to build entity and relation embeddings in separate entity and relation spaces and to model relations as translations between projected entities; the models are evaluated on three tasks: link prediction, triple classification, and relational fact extraction.
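For reference, a hedged sketch of the TransR scoring idea: entities live in one space, each relation r has its own projection matrix M_r mapping entities into that relation's space, where the usual translation h + r ≈ t is scored. Illustrative only.

```python
import numpy as np

def transr_score(h, t, r, M_r):
    """h, t: entity vectors (d_e,); r: relation vector (d_r,); M_r: (d_e, d_r) projection."""
    h_r = h @ M_r  # project head into the relation-specific space
    t_r = t @ M_r  # project tail into the relation-specific space
    return -np.linalg.norm(h_r + r - t_r)  # higher (less negative) = more plausible triple
```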
Type-Constrained Representation Learning in Knowledge Graphs
TLDR
This work integrates prior knowledge in the form of type constraints into various state-of-the-art latent variable approaches and shows that prior knowledge on relation types significantly improves these models, by up to 77% in link-prediction tasks.
Learning Structured Embeddings of Knowledge Bases
TLDR
A learning process based on an innovative neural network architecture is designed to embed these symbolic representations into a more flexible continuous vector space in which the original knowledge is kept and enhanced, allowing data from any KB to be easily used in recent machine learning methods for prediction and information retrieval.
A semantic matching energy function for learning with multi-relational data
TLDR
A new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced, demonstrating that it can scale up to tens of thousands of nodes and thousands of types of relation.
Large-scale learning of word relatedness with constraints
TLDR
A large-scale data mining approach to learning word-word relatedness is presented, in which known pairs of related words impose constraints on the learning process; the method learns for each word a low-dimensional representation that strives to maximize the likelihood of a word given the contexts in which it appears.
Single or Multiple? Combining Word Representations Independently Learned from Text and WordNet
TLDR
This paper learns word representations from text and WordNet independently, and then explores simple and sophisticated methods to combine them, showing that, in the case of WordNet, learning word representations separately is preferable to learning one single representation space or adding WordNet information directly.
Translating Embeddings for Modeling Multi-relational Data
TLDR
TransE is proposed, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities, which proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases.
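For reference, a hedged sketch of the TransE scoring function and the margin ranking loss it is typically trained with: a triple (h, r, t) is plausible when t ≈ h + r in the shared embedding space, and corrupted triples should score worse by at least a margin. Illustrative only.

```python
import numpy as np

def transe_distance(h, r, t, norm=1):
    return np.linalg.norm(h + r - t, ord=norm)

def margin_loss(pos_triple, neg_triple, margin=1.0):
    """Each triple is a (h_vec, r_vec, t_vec) tuple; neg_triple has a corrupted head or tail."""
    d_pos = transe_distance(*pos_triple)
    d_neg = transe_distance(*neg_triple)
    return max(0.0, margin + d_pos - d_neg)
```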
GloVe: Global Vectors for Word Representation
TLDR
A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
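For reference, a hedged sketch of the GloVe weighted least-squares objective over a word-word co-occurrence matrix X, J = sum_ij f(X_ij) (w_i · w~_j + b_i + b~_j - log X_ij)^2, with the weighting function f capping the influence of very frequent pairs. Illustrative only.

```python
import numpy as np

def weight(x, x_max=100.0, alpha=0.75):
    """GloVe-style weighting: down-weights rare pairs, saturates for frequent ones."""
    return (x / x_max) ** alpha if x < x_max else 1.0

def glove_objective(X, W, W_ctx, b, b_ctx):
    """X: (V, V) co-occurrence counts; W, W_ctx: (V, d) word/context vectors; b, b_ctx: (V,) biases."""
    J = 0.0
    rows, cols = np.nonzero(X)
    for i, j in zip(rows, cols):
        diff = W[i] @ W_ctx[j] + b[i] + b_ctx[j] - np.log(X[i, j])
        J += weight(X[i, j]) * diff ** 2
    return J
```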