Publications
DBpedia - A crystallization point for the Web of Data
The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web.
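Not from the paper itself, but a minimal sketch of how the extracted data can be consumed: a SPARQL query against the public DBpedia endpoint. The SPARQLWrapper client and the example resource/query are assumptions for illustration.

    # Sketch: query the public DBpedia SPARQL endpoint for the English
    # abstract of Berlin. Assumes the SPARQLWrapper client (pip install sparqlwrapper).
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {
            <http://dbpedia.org/resource/Berlin> dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "en")
        }
    """)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["abstract"]["value"][:200])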
DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia
The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies.
LinkedGeoData: Adding a Spatial Dimension to the Web of Data
We contribute a spatial dimension to the Data Web by showing how the collaboratively collected OpenStreetMap data can be transformed and represented according to the RDF data model.
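As an illustration of the kind of transformation described (not the project's actual toolchain), the sketch below maps a single OpenStreetMap node to RDF triples with rdflib; the linkedgeodata.org URIs and the lgdo: ontology namespace are used here only as plausible examples.

    # Sketch: represent one OpenStreetMap node as RDF, using the W3C WGS84
    # vocabulary for coordinates. URIs, namespaces and node data are illustrative.
    from rdflib import Graph, Literal, Namespace, RDF, URIRef
    from rdflib.namespace import XSD

    GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
    LGDO = Namespace("http://linkedgeodata.org/ontology/")  # assumed ontology namespace

    node = {"id": 240109189, "lat": 52.5170365, "lon": 13.3888599, "tags": {"amenity": "cafe"}}

    g = Graph()
    subj = URIRef(f"http://linkedgeodata.org/triplify/node{node['id']}")
    g.add((subj, RDF.type, LGDO.Cafe))
    g.add((subj, GEO.lat, Literal(node["lat"], datatype=XSD.double)))
    g.add((subj, GEO.long, Literal(node["lon"], datatype=XSD.double)))
    print(g.serialize(format="turtle"))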
Integrating NLP Using Linked Data
We present the NLP Interchange Format (NIF), an RDF/OWL-based format aimed at interoperability between Natural Language Processing tools, language resources and annotations.
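A rough sketch, not taken from the paper, of how a single entity mention could be encoded with NIF using rdflib; the document URI, the RFC 5147-style #char=... fragments and the DBpedia link target are illustrative assumptions.

    # Sketch: annotate the mention "Berlin" in a short context string with NIF,
    # linking it to a DBpedia resource via itsrdf:taIdentRef. Example data only.
    from rdflib import Graph, Literal, Namespace, RDF, URIRef
    from rdflib.namespace import XSD

    NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")
    ITSRDF = Namespace("http://www.w3.org/2005/11/its/rdf#")

    text = "Berlin is the capital of Germany."
    base = "http://example.org/doc1"  # hypothetical document URI

    g = Graph()
    ctx = URIRef(f"{base}#char=0,{len(text)}")
    g.add((ctx, RDF.type, NIF.Context))
    g.add((ctx, NIF.isString, Literal(text)))

    mention = URIRef(f"{base}#char=0,6")
    g.add((mention, RDF.type, NIF.Phrase))
    g.add((mention, NIF.referenceContext, ctx))
    g.add((mention, NIF.beginIndex, Literal(0, datatype=XSD.nonNegativeInteger)))
    g.add((mention, NIF.endIndex, Literal(6, datatype=XSD.nonNegativeInteger)))
    g.add((mention, ITSRDF.taIdentRef, URIRef("http://dbpedia.org/resource/Berlin")))
    print(g.serialize(format="turtle"))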
Triplify: light-weight linked data publication from relational databases
We present Triplify, a simplistic but effective approach to publishing Linked Data from relational databases.
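Triplify maps HTTP requests onto relational queries and converts the result sets to RDF; the fragment below is only a generic illustration of that result-set-to-triples step, using Python's sqlite3 and rdflib rather than Triplify's own implementation, with made-up table, columns and URIs.

    # Sketch: turn rows of a relational table into RDF triples.
    # Table schema, data and URIs are hypothetical example values.
    import sqlite3
    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/vocab/")

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER, name TEXT, homepage TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'Alice', 'http://alice.example.org')")

    g = Graph()
    for user_id, name, homepage in db.execute("SELECT id, name, homepage FROM users"):
        subj = URIRef(f"http://example.org/users/{user_id}")
        g.add((subj, EX.name, Literal(name)))
        g.add((subj, EX.homepage, URIRef(homepage)))
    print(g.serialize(format="turtle"))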
Test-driven evaluation of linked data quality
We present a methodology for test-driven quality assessment of Linked Data, based on a formalization of bad smells and data quality problems.
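In the spirit of such test cases, a quality check can be phrased as a SPARQL query that selects violating resources; the concrete pattern and data below are assumptions for illustration, not examples from the paper, run over an in-memory rdflib graph.

    # Sketch: a quality test expressed as a SPARQL query that returns violations,
    # here: persons whose death date precedes their birth date. Example data only.
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/> .
        @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
        ex:alice ex:birthDate "1990-01-01"^^xsd:date ; ex:deathDate "1980-01-01"^^xsd:date .
    """, format="turtle")

    violations = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?person WHERE {
            ?person ex:birthDate ?b ; ex:deathDate ?d .
            FILTER (?d < ?b)
        }
    """)
    for (person,) in violations:
        print("Quality violation:", person)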
RelFinder: Revealing Relationships in RDF Knowledge Bases
We present an approach that extracts a graph covering relationships between two objects of interest and visualizes them in a force-directed graph layout.
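Not RelFinder's actual algorithm, but a toy illustration of the two ingredients mentioned: extracting connecting paths between two objects and computing a force-directed layout, here with networkx and invented example nodes.

    # Sketch: find paths linking two nodes in a small graph and compute
    # force-directed (spring layout) positions for visualization. Example data only.
    import networkx as nx

    G = nx.Graph()
    G.add_edge("Leipzig", "Saxony", label="locatedIn")
    G.add_edge("Saxony", "Germany", label="partOf")
    G.add_edge("Leipzig", "Germany", label="country")

    # All simple paths (up to length 3) between the two objects of interest.
    for path in nx.all_simple_paths(G, source="Leipzig", target="Germany", cutoff=3):
        print(" -> ".join(path))

    # Force-directed node positions, e.g. for feeding a drawing front end.
    positions = nx.spring_layout(G, seed=42)
    print(positions)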
N³ - A Collection of Datasets for Named Entity Recognition and Disambiguation in the NLP Interchange Format
We publish three novel datasets (called N³) in which named entities have been annotated manually.
Real-Time RDF Extraction from Unstructured Data Streams
We present an approach that extracts RDF triples from unstructured data streams, in a fashion similar to the live versions of the DBpedia and LinkedGeoData datasets.
DBpedia Live Extraction
We extended DBpedia with a live extraction framework, which is capable of processing tens of thousands of changes per day in order to consume the constant stream of Wikipedia updates.
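Purely as an illustration of consuming a continuous stream of Wikipedia changes (the paper's framework used a different update mechanism and is not reproduced here), the sketch below reads the Wikimedia EventStreams recent-changes feed over Server-Sent Events with requests.

    # Sketch: follow the live stream of Wikipedia edits (Server-Sent Events).
    # Only the stream consumption is shown; triple extraction is out of scope.
    import json
    import requests

    STREAM_URL = "https://stream.wikimedia.org/v2/stream/recentchange"

    with requests.get(STREAM_URL, stream=True, headers={"Accept": "text/event-stream"}) as resp:
        for line in resp.iter_lines():
            if line.startswith(b"data: "):
                change = json.loads(line[len(b"data: "):])
                if change.get("wiki") == "enwiki":
                    print(change.get("type"), change.get("title"))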