DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia
@article{Lehmann2015DBpediaA,
  title   = {DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia},
  author  = {Jens Lehmann and Robert Isele and Max Jakob and Anja Jentzsch and Dimitris Kontokostas and Pablo N. Mendes and Sebastian Hellmann and Mohamed Morsey and Patrick van Kleef and S. Auer and Christian Bizer},
  journal = {Semantic Web},
  year    = {2015},
  volume  = {6},
  pages   = {167--195}
}
The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base, which is extracted from the English edition of Wikipedia, consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia…
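As a minimal sketch of how the Linked Data published by DBpedia is typically consumed (not taken from the paper itself; the endpoint URL, the SPARQLWrapper library, and the Berlin example are assumptions), the public SPARQL endpoint can be queried from Python:

```python
# Minimal sketch: query the public DBpedia SPARQL endpoint for the English
# abstract of one entity. Endpoint URL and example entity are assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/Berlin> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)

# Each binding maps variable names to {"type": ..., "value": ...} dicts.
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"])
```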
2,619 Citations
Entity Extraction from Wikipedia List Pages
- Computer Science, ESWC
- 2020
This paper presents a two-phase approach for the extraction of entities from Wikipedia’s list pages, which have proven to serve as a valuable source of information.
Identifying Global Common Concepts of DBpedia Ontology to Enhance Multilingual Ontologized Space Expansion
- Computer Science
- 2015
Experimental results demonstrate that the proposed method, “concept significance rank aggregation across languages” (CS-RAL), identifies a significant part of the DBpedia ontology across different languages.
A Novel Method to Predict Type for DBpedia Entity
- Computer Science, ACIIDS
- 2018
This paper proposes a method to predict an entity’s type based on a novel conformity measure, evaluates the method on a database built by aggregating multilingual resources, and compares it with human perception in predicting the type of an entity.
Towards Updating Wikipedia via DBpedia Mappings and SPARQL
- Computer Science, AMW
- 2016
The declarative WikiDBpedia framework (WDF) is defined as a pair (M, T), where M is a schema mapping between the structured Wiki schema W and DBpedia, formalized in the language of tuple-generating dependencies (tgds), and T is a DBpedia TBox.
Updating Wikipedia via DBpedia Mappings and SPARQL
- Computer Science, ESWC
- 2017
This paper provides a formalization of DBpedia as an Ontology-Based Data Management framework, studies its computational properties, and provides a novel approach to the inherently intractable update-translation problem, leveraging pre-existing data to disambiguate updates.
Improving Wikipedia-based place name disambiguation in short texts using structured data from DBpedia
- Computer Science, GIR
- 2014
This paper presents an approach for combining Wikipedia and DBpedia to disambiguate place names in short texts, and argues that a combination of both performs better than either of them alone.
Extending DBpedia with List Structures in Wikipedia Articles
- Computer Science
- 2016
An information extraction system using the list structure is developed, and more than 20 million triples are extracted using section titles as predicates, suggesting ample potential to significantly expand the coverage of DBpedia.
Fine-grained Type Prediction of Entities using Knowledge Graph Embeddings
- Computer Science
- 2019
This thesis explores and evaluates different approaches to type prediction for entities in DBpedia: an unsupervised approach based on vector similarity over knowledge graph embeddings, and a supervised approach based on CNN classification.
Supporting Multilingual Semantic Web Services Discovery by Consuming Data from DBpedia Knowledge Base
- Computer Science, IPAC
- 2015
This paper proposes to overcome the language barrier by supporting multilingual Web service discovery using DBpedia, a cross-domain multilingual knowledge base, and takes advantage of the semantic and multilingual information provided by DBpedia to enable cross-language semantic Web service discovery.
Utilization of DBpedia Mapping in Cross Lingual Wikipedia Infobox Completion
- Computer Science, Australasian Conference on Artificial Intelligence
- 2016
This paper combines mapping information from DBpedia with an instance-based method to align the existing Korean-English infobox attribute-value pairs, as well as to generate new pairs from the Korean version that fill missing information in the English version.
References
Showing 1-10 of 55 references
DBpedia Live Extraction
- Computer Science, OTM Conferences
- 2009
DBpedia is extended with a live extraction framework capable of processing tens of thousands of changes per day in order to consume the constant stream of Wikipedia updates; the framework also allows direct modifications of the knowledge base and closer interaction of users with DBpedia.
DBpedia and the live extraction of structured data from Wikipedia
- Computer Science, Program
- 2012
DBpedia-Live publishes newly added/deleted triples in files to enable synchronization between the DBpedia endpoint and other DBpedia mirrors.
Cross-lingual knowledge linking across wiki knowledge bases
- Computer Science, WWW
- 2012
The problem of cross-lingual knowledge linking is studied, and a linkage factor graph model is presented; experiments show that this approach can achieve a high precision of 85.8% with a recall of 88.1%.
Automatically refining the Wikipedia infobox ontology
- Computer Science, WWW
- 2008
KOG, an autonomous system for refining Wikipedia’s infobox-class ontology, is introduced; it uses both SVMs and a more powerful joint-inference approach expressed in Markov Logic Networks to build a rich ontology.
Wikipedia Mining: Wikipedia as a Corpus for Knowledge Extraction
- Computer Science
- 2008
A comprehensive, panoramic view of Wikipedia as a Web corpus is taken, since almost all previous research has exploited only parts of Wikipedia’s characteristics.
Multipedia: enriching DBpedia with multimedia information
- Computer Science, K-CAP '11
- 2011
This paper addresses the problem of enriching ontology instances with candidate images retrieved from existing Web search engines; it taps into the Wikipedia corpus to gather context information for DBpedia instances and takes advantage of image tagging information, when available, to calculate semantic relatedness between instances and candidate images.
Overview of the TAC 2010 Knowledge Base Population Track
- Computer Science
- 2010
An overview of the task definition and annotation challenges associated with KBP 2010 is provided, and the evaluation results and lessons learned are discussed on the basis of detailed analysis.
Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary
- Computer Science, LREC
- 2008
This paper presents two application programming interfaces for Wikipedia and Wiktionary, which are specifically designed for mining the rich lexical semantic information dispersed in these knowledge bases and provide efficient, structured access to the available knowledge.
Experiments with Wikipedia Cross-Language Data Fusion
- Computer Science, SFSW@ESWC
- 2009
A software framework for fusing RDF datasets based on different conflict-resolution strategies is presented, and the framework is applied to fuse infobox data extracted from the English, German, Italian, and French editions of Wikipedia.