Corpus ID: 235727712

On-Demand and Lightweight Knowledge Graph Generation - a Demonstration with DBpedia

@article{Brockmeier2021OnDemandAL,
  title={On-Demand and Lightweight Knowledge Graph Generation - a Demonstration with DBpedia},
  author={Malte Brockmeier and Yawen Liu and Sunita Pateer and Sven Hertling and Heiko Paulheim},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.00873}
}
Modern large-scale knowledge graphs, such as DBpedia, require large computational resources to serve and process. Moreover, they often have long release cycles, which leads to outdated information in those graphs. In this paper, we present DBpedia on Demand – a system which serves DBpedia resources on demand without the need to materialize and store the entire graph, and which even provides limited querying functionality.
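As a rough illustration of the on-demand idea (this is not the authors' implementation; the infobox text, property names, and function below are hypothetical), a system can parse a single page's infobox into triples only when that resource is requested, instead of materializing and storing the entire graph:

```python
import re

def extract_triples(resource, infobox_wikitext):
    """Parse simple `| key = value` infobox lines into (subject, predicate, object) triples."""
    triples = []
    for key, value in re.findall(r"\|\s*(\w+)\s*=\s*([^\n|]+)", infobox_wikitext):
        triples.append((resource, key.strip(), value.strip()))
    return triples

# Hypothetical cached wikitext for one page; an on-demand system would
# fetch the current revision from Wikipedia only when the resource is requested.
wikitext = """{{Infobox settlement
| name = Berlin
| population = 3664088
}}"""

print(extract_triples("dbr:Berlin", wikitext))
```

Because triples are generated per resource at request time, the extraction always reflects the current source page, sidestepping the release-cycle staleness problem at the cost of per-request parsing work.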


References

DBkWik: extracting and integrating knowledge from thousands of Wikis
This paper shows how to create one consolidated knowledge graph, called DBkWik, from thousands of Wikis, and shows that the resulting large-scale knowledge graph is complementary to DBpedia.
RDF2Vec Light - A Lightweight Approach for Knowledge Graph Embeddings
RDF2Vec Light is a lightweight embedding approach based on RDF2Vec which generates vectors for only a subset of entities, enabling the use of embeddings of very large knowledge graphs in scenarios where such embeddings were not possible before, thanks to significantly lower runtime and significantly reduced hardware requirements.
Linked data quality of DBpedia, Freebase, OpenCyc, Wikidata, and YAGO
Provides data quality criteria according to which knowledge graphs can be analyzed, compares the above-mentioned KGs along these criteria, and proposes a framework for finding the most suitable KG for a given setting.
Knowledge Graphs on the Web - an Overview
This chapter provides an overview and comparison of publicly available knowledge graphs, and gives insights into their contents, size, coverage, and overlap.
Triple Pattern Fragments: A low-cost knowledge graph interface for the Web
DBpedia and the live extraction of structured data from Wikipedia
DBpedia-Live publishes the newly added/deleted triples in files, in order to enable synchronization between the DBpedia endpoint and other DBpedia mirrors.
DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia
An overview of the DBpedia community project is given, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications, which have made DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud.
Language-Agnostic Relation Extraction from Abstracts in Wikis
A language-agnostic approach is presented that exploits background knowledge from the graph instead of language-specific techniques and builds machine learning models only from language-independent features.
YAGO: A Multilingual Knowledge Base from Wikipedia, Wordnet, and Geonames
This paper explains how YAGO is built from its sources, how its quality is evaluated, how a user can access it, and how other projects utilize it.