DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia

@article{Lehmann2015DBpediaA,
  title={DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia},
  author={Jens Lehmann and Robert Isele and Max Jakob and Anja Jentzsch and Dimitris Kontokostas and Pablo N. Mendes and Sebastian Hellmann and Mohamed Morsey and Patrick van Kleef and Sören Auer and Christian Bizer},
  journal={Semantic Web},
  year={2015},
  volume={6},
  pages={167-195}
}
The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base, which is extracted from the English edition of Wikipedia, consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia…
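The extracted knowledge bases are published as Linked Data and through a public SPARQL endpoint. As a rough illustration (not taken from the paper), the Python sketch below sends a SPARQL query to the public DBpedia endpoint over HTTP; the endpoint URL, the requests-based access pattern, and the JSON results format are assumptions about the current public deployment.

# Minimal sketch: fetch a few triples about one DBpedia resource via the
# public SPARQL endpoint. Assumes the endpoint accepts a JSON results format.
import requests

ENDPOINT = "https://dbpedia.org/sparql"  # assumed public endpoint

query = """
SELECT ?p ?o WHERE {
  <http://dbpedia.org/resource/Berlin> ?p ?o .
} LIMIT 10
"""

resp = requests.get(
    ENDPOINT,
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for binding in resp.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])

The same pattern works for any SELECT query; heavier workloads are better served from the dataset dumps or a local mirror than from the shared public endpoint.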
Extending Linked Open Data Resources Exploiting Wikipedia as Source of Information
TLDR
The objective of this thesis is to define a methodology to increase the coverage of DBpedia in different languages, using various techniques to reach two different goals: automatic mapping and DBpedia dataset completion.
Entity Extraction from Wikipedia List Pages
TLDR
This paper presents a two-phased approach for the extraction of entities from Wikipedia’s list pages, which have proven to serve as a valuable source of information.
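As a hedged sketch of the general idea (not the paper's two-phased method), the Python snippet below pulls the outgoing article links of a Wikipedia list page through the MediaWiki API and treats them as candidate entities; the paper's approach additionally decides which list items actually denote entities of the list's subject.

# Sketch: candidate entities from a Wikipedia list page via the MediaWiki API.
# Only the first batch of links is fetched; continuation handling is omitted.
import requests

API = "https://en.wikipedia.org/w/api.php"

def candidate_entities(list_page_title):
    params = {
        "action": "query",
        "prop": "links",
        "titles": list_page_title,
        "plnamespace": 0,      # article namespace only
        "pllimit": "max",
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    for page in resp.json()["query"]["pages"].values():
        for link in page.get("links", []):
            yield link["title"]

for title in candidate_entities("List of sovereign states"):
    print(title)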
Discovering Wikipedia Conventions Using DBpedia Properties
TLDR
A collaborative recommender system approach named BlueFinder is presented to enhance Wikipedia with DBpedia properties; it assists Wikipedia contributors in adding missing relations between articles and consequently improves Wikipedia content.
Identifying Global Common Concepts of DBpedia Ontology to Enhance Multilingual Ontologized Space Expansion
The amount of data on the Web has recently increased, accompanied by a paradigm shift from the publishing of isolated data to the publishing of interlinked data. Linked Open Data (LOD) provides…
A Novel Method to Predict Type for DBpedia Entity
TLDR
This paper proposes a method to predict the entity type based on a novel conformity measure, evaluates the method on a database built by aggregating multilingual resources, and compares it with human perception in predicting the type of an entity.
Towards Updating Wikipedia via DBpedia Mappings and SPARQL
TLDR
The declarative WikiDBpedia framework (WDF) is defined as a pair (M, T), where M is a schema mapping between the structured Wiki data and DBpedia and T is a DBpedia TBox, formalized in the language of tuple-generating dependencies (tgds) over the signature Σ and the Wiki schema W.
Updating Wikipedia via DBpedia Mappings and SPARQL
TLDR
This paper formalizes DBpedia as an Ontology-Based Data Management framework, studies its computational properties, and presents a novel approach to the inherently intractable update translation problem, leveraging pre-existing data to disambiguate updates.
Improving wikipedia-based place name disambiguation in short texts using structured data from DBpedia
TLDR
This paper presents an approach for combining Wikipedia and DBpedia to disambiguate place names in short texts, and argues that a combination of both performs better than each of them alone.
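The following Python sketch illustrates one way such structured data could be used; it is not the paper's method. It ranks DBpedia place candidates for an ambiguous name by geographic distance to an already resolved context location. The exact-label candidate lookup and the nearest-candidate heuristic are simplifications, and the class and property names (dbo:Place, geo:lat/geo:long) are assumptions about the DBpedia ontology as commonly deployed.

# Sketch: score DBpedia place candidates for an ambiguous name by distance
# to a known context coordinate (a deliberately naive disambiguation rule).
import math
import requests

ENDPOINT = "https://dbpedia.org/sparql"  # assumed public endpoint

def place_candidates(name):
    query = f"""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    SELECT ?s ?lat ?long WHERE {{
      ?s rdfs:label "{name}"@en ;
         a dbo:Place ;
         geo:lat ?lat ;
         geo:long ?long .
    }}
    """
    resp = requests.get(
        ENDPOINT,
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    for b in resp.json()["results"]["bindings"]:
        yield b["s"]["value"], float(b["lat"]["value"]), float(b["long"]["value"])

def disambiguate(name, context_lat, context_long):
    # Pick the candidate geographically closest to the context coordinate.
    candidates = list(place_candidates(name))
    if not candidates:
        return None
    return min(candidates,
               key=lambda c: math.hypot(c[1] - context_lat, c[2] - context_long))

A real system would generate candidates from redirects and disambiguation pages rather than exact labels, and would combine such spatial signals with textual context from Wikipedia, in the spirit of the combination the paper argues for.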
Extending DBpedia with List Structures in Wikipedia Articles
Ontologies are the basis of the Semantic Web. Owing to the cost of their construction and maintenance, however, there is much interest in automating their construction. Wikipedia is considered a…
An Approach for Improving DBpedia as a Research Data Hub
TLDR
The proposal aims at consolidating DBpedia as a reference hub for research data, so that research from any area supported by the Semantic Web data can use its data reliably.

References

Showing 1-10 of 62 references
DBpedia: A Multilingual Cross-domain Knowledge Base
TLDR
This paper describes the general DBpedia knowledge base as well as the DBpedia data sets that specifically aim at supporting computational linguistics tasks, including Entity Linking, Word Sense Disambiguation, Question Answering, Slot Filling and Relationship Extraction.
DBpedia Live Extraction
TLDR
DBpedia is extended with a live extraction framework, which is capable of processing tens of thousands of changes per day in order to consume the constant stream of Wikipedia updates; it also allows direct modifications of the knowledge base and closer interaction of users with DBpedia.
DBpedia - A crystallization point for the Web of Data
TLDR
The extraction of the DBpedia knowledge base is described, the current status of interlinking DBpedia with other data sources on the Web is discussed, and an overview of applications that facilitate the Web of Data around DBpedia is given.
DBpedia and the live extraction of structured data from Wikipedia
TLDR
DBpedia-Live publishes the newly added/deleted triples in files in order to enable synchronization between the DBpedia endpoint and other DBpedia mirrors.
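As a hedged sketch of that synchronization idea, the Python snippet below applies one downloaded changeset (an added-triples file and a removed-triples file in N-Triples format) to a local copy held in an rdflib graph. The file naming, gzip compression, and rdflib-based store are illustrative assumptions; the actual DBpedia-Live changeset layout and mirror tooling differ in detail.

# Sketch: apply one DBpedia-Live style changeset to a local rdflib graph.
import gzip
from rdflib import Graph

def apply_changeset(store: Graph, added_path: str, removed_path: str) -> None:
    # Retract the removed triples first, then insert the added ones.
    removed = Graph()
    with gzip.open(removed_path, "rt", encoding="utf-8") as f:
        removed.parse(data=f.read(), format="nt")
    for triple in removed:
        store.remove(triple)

    added = Graph()
    with gzip.open(added_path, "rt", encoding="utf-8") as f:
        added.parse(data=f.read(), format="nt")
    for triple in added:
        store.add(triple)

# Usage (paths are placeholders for locally downloaded changeset files):
# mirror = Graph()
# mirror.parse("dbpedia_mirror.nt", format="nt")
# apply_changeset(mirror, "000123.added.nt.gz", "000123.removed.nt.gz")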
Cross-lingual knowledge linking across wiki knowledge bases
TLDR
The problem of cross-lingual knowledge linking is studied and a linkage factor graph model is presented, showing that this approach can achieve a high precision of 85.8% with a recall of 88.1%.
Automatically refining the wikipedia infobox ontology
TLDR
KOG, an autonomous system for refining Wikipedia's infobox-class ontology, is introduced, using both SVMs and a more powerful joint-inference approach expressed in Markov Logic Networks to build a rich ontology.
Wikipedia Mining: Wikipedia as a Corpus for Knowledge Extraction
Wikipedia, a collaborative Wiki-based encyclopedia, has become a huge phenomenon among Internet users. It covers a huge number of concepts of various fields such as Arts, Geography, History, Science,…
Multipedia: enriching DBpedia with multimedia information
TLDR
This paper addresses the problem of how to enrich ontology instances with candidate images retrieved from existing Web search engines. It taps into the Wikipedia corpus to gather context information for DBpedia instances and takes advantage of image tagging information, when available, to calculate semantic relatedness between instances and candidate images.
Overview of the TAC 2010 Knowledge Base Population Track
TLDR
An overview of the task definition and annotation challenges associated with KBP2010 is provided, and the evaluation results and lessons learned are discussed based on detailed analysis.
Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary
TLDR
This paper presents two application programming interfaces for Wikipedia and Wiktionary which are especially designed for mining the rich lexical semantic information dispersed in the knowledge bases, and provide efficient and structured access to the available knowledge.